Emily M. Bender

AI Expert Profile

Nationality: 
American
AI specialty: 
NLP
Current occupation: 
Professor and Director, University of Washington
AI rate (%): 
38.69%

TwitterID: 
@emilymbender
Tweet Visibility Status: 
Public

Description: 
A professor and director working in computational linguistics, Emily focuses on multilingual grammar engineering. Her papers are highly regarded in the AI community and fuel ongoing debates, notably around the generalization abilities of GPT-3. Emily introduced fellow expert Yejin Choi to the very surprising concept of "web garbage English," which she discovered during her research on natural language processing.

Recognized by:

Not available

The expert's latest posts:

Tweet list: 

2024-04-29 13:26:07 TFW you'

2024-04-29 04:08:58 @hipsterelectron You got this!

2024-04-26 12:59:22 Not just corporate capture, but TESCREAL corporate capture. Ugh. https://www.wsj.com/tech/ai/openais-sam-altman-and-other-tech-leaders-to...

2024-04-25 14:01:10 Ready for more Mystery AI Hype Theater 3000? Climate change has reached AI Hell (frozen over last we checked in December) and now we'

2024-04-24 14:32:55 @arjen thank you!

2024-04-24 00:24:14 Really, really loved this episode of Our Opinions Are Correct -- and encourage all authors to consider joining AABB (I did, after listening). https://www.ouropinionsarecorrect.com/shownotes/2024/4/18/fascism-and-bo...

2024-04-20 13:33:24 @alex @timnitGebru Also available as video on peertube! https://peertube.dair-institute.org/w/nfzPjaf1VrWRpc4t5em325

2024-04-20 13:32:56 Mystery AI Hype Theater 3000, episode 30! @alex and guest host @timnitGebru read the Techno-Optimism Manifesto so you don'

2024-04-18 20:20:25 The absolute obliviousness of Meta (the company)'

2024-04-18 13:27:03 @minimalparts It is supposed to be open access already -- frustratingly something got lost in communication and we'

2024-04-18 13:01:39 @marinheiro It should be open access. We'

2024-04-17 17:22:29 The best thing you will read about "

2024-04-17 17:16:06 So much of the sales around "

2024-04-17 12:49:47 Bender &

2024-04-15 12:52:39 Join us today! Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-14 12:56:18 Join us tomorrow! Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-12 12:35:11 Ready to learn all about how the AIs are going to do our science for us? Join @alex and me in welcoming Molly Crockett and Lisa Messeri onto Mystery AI Hype Theater 3000 to wade through the hype. Monday April 15, 9am Pacific (note special time): https://www.twitch.tv/dair_institute

2024-04-12 12:26:54 What if universities responded to AI hype with confidence in their core mission rather than FOMO? https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

2024-04-11 20:40:23 Today in the Mystery AI Hype Theater 3000 newsletter: https://buttondown.email/maiht3k/archive/more-collegiate-fomo/

2024-04-10 23:55:37 @soypunk What could possibly go wrong?

2024-04-04 15:40:59 Look what @alex and I got to do! (Hang out with the cool kids over at @ouropinions :) https://www.ouropinionsarecorrect.com/shownotes/2024/4/4/the-turing-test...

2024-04-04 11:35:10 @pa27 yes, I'

2024-04-04 10:26:54 @alex @ctaylsaurus Also available as video! https://peertube.dair-institute.org/w/4h9s6GXxyTMszoBLUm6QCu If your podcast app auto-downloaded the episode and you'

2024-04-04 10:26:42 Mystery AI Hype Theater 3000 episode 29 has dropped! @alex &

2024-04-04 03:58:51 Must-read reporting by +972 on how the IDF are using “AI” in their indiscriminate murder in Gaza. It’s horrific, and we must not look away. And it’s an absolute nightmare of the usual sorts of AI harms cranked up to the extreme: mass surveillance, "

2024-04-03 17:15:17 @keydelk But the irony of invoking Dunning-Kruger while mansplaining is particularly rich.

2024-04-03 15:30:08 @keydelk and you think I need enlightening on this particular topic why exactly?

2024-04-03 14:31:06 Amazing example of contextualizing synthetic text from The Verge: https://www.theverge.com/2024/4/2/24117976/best-printer-2024-home-use-of...

2024-04-02 13:18:48 In the Mystery AI Hype Theater Newsletter this morning: A take-down of proxy hype in the higher ed press. https://buttondown.email/maiht3k/archive/doing-their-hype-for-them/

2024-03-31 12:51:43 @alex_leathard @alex that is pretty awful, indeed

2024-03-29 22:06:16 @jamiemccarthy We don'

2024-03-29 22:05:23 @hollie @gregtitus Oh thanks for catching that! I'

2024-03-29 18:47:15 Finally, as is usual and *completely unacceptable* the public does not have information about the training data used to build this thing, just the info that Microsoft made it.

2024-03-29 18:46:29 It seems to bear repeating: chatbots based on large language models are designed to *make shit up*. This isn'

2024-03-29 18:45:50 There'

2024-03-25 12:25:14 Join us today!! -- Mystery AI Hype Theater 3000 fans, get ready for our next episode! @alex and I will be joined by the inimitable Karen Hao to talk about AI hype and journalism! Live stream Monday March 25, 5pm Pacific: https://www.twitch.tv/dair_institute

2024-03-24 17:05:58 @tomstoneham Unless you can find a way to calculate only the additional energy required by a person to do a task, the comparison is not just spurious, but drastically dehumanizing. A person'

2024-03-23 14:51:42 @tomstoneham We get into it in the podcast episode I linked to but basically: humans exist (and have a right to do so) and by existing we consume energy. So comparisons between humans (existing and) doing some task and LLMs doing the task are spurious.

2024-03-23 13:40:43 @tomstoneham This is an ill-formed question. See: https://www.buzzsprout.com/2126417/13931174-episode-19-the-murky-climate...

2024-03-21 17:03:03 @kfort That'

2024-03-21 16:53:05 @kfort Oh that'

2024-03-21 13:42:32 Mystery AI Hype Theater 3000 fans, get ready for our next episode! @alex and I will be joined by the inimitable Karen Hao to talk about AI hype and journalism! Live stream Monday March 25, 5pm Pacific: https://www.twitch.tv/dair_institute

2024-03-21 10:09:14 @cmeinel Hi! I appreciate folks flagging articles they might think I'

2024-03-19 00:59:33 @dmr1848 Thanks -- fixed.

2024-03-18 20:54:55 @cmeinel say what now?

2024-03-18 17:31:36 @MarkRDavid 1) I don'

2024-03-18 17:26:01 @MarkRDavid If you read what I write, you'

2024-03-18 17:16:52 Today the US DHS announced three "

2024-03-18 14:24:00 @martinicat ugh

2024-03-17 03:12:34 @virtualinanity Indeed.

2024-03-15 18:22:46 @MarkRDavid The topic of our next episode of the podcast!

2024-03-15 16:12:16 @whoseknowledge Thank you!

2024-03-15 15:36:34 It'

2024-03-15 15:36:25 Part of the plan with our new #MAIHT3k newsletter is to redirect the energy that we'

2024-03-14 21:36:53 @Nodami @cazencott Thank you!

2024-03-14 21:25:56 @Nodami @cazencott Thank you!

2024-03-14 19:16:08 @cazencott Thank you!

2024-03-14 19:06:43 @cazencott And do you remember it as a hoax that got taken seriously?

2024-03-14 19:00:43 So .... at NeurIPS 2016 (or maybe 2015) there was apparently a prank/hoax presentation where some researchers presented a fake pitch for an '

2024-03-14 13:20:47 Mystery AI Hype Theater 3000 - ep 28: LLMs Are Not Human Subjects, in which @alex and I are beyond dismayed at social scientists seeking to use word distributions from unknown corpora as a data source. https://www.buzzsprout.com/2126417/14677380-episode-28-llms-are-not-huma... to Christie Taylor for production! Also available as video: https://peertube.dair-institute.org/w/eWG2Me6QABWHbXjfKhGyYc And check out our new newsletter! https://buttondown.email/maiht3k

2024-03-13 14:13:16 Want to keep up with all things Mystery AI Hype Theater 3000? @alex and I have got you covered! Check out our new newsletter for episode announcements, AI hype take-downs, periodic rants, and occasional samples of fresh AI hell. Subscribe here: https://buttondown.email/maiht3k

2024-03-10 01:27:36 Big Tech likes to push the trope that things are moving and changing too quickly and there'

2024-03-06 01:20:34 @hipsterelectron On Mastodon it was just exactly not giving them too many extra clicks. On Xitter and BlueSky about not providing the link card.

2024-03-05 16:16:50 I realize the latest open letter from the "

2024-03-05 14:12:20 I appreciate the opportunity to speak with an actual journalist (Karan Mahadik) about my experience finding a fabricated quote attributed to me in what turned out to be a fully fabricated (using the Gemini system) "

2024-03-04 13:43:40 Join us today! --- Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific: https://www.twitch.tv/dair_institute See you there!

2024-03-03 13:53:03 Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific: https://www.twitch.tv/dair_institute See you there!

2024-03-03 03:18:41 @zanchey I see -- it'

2024-03-03 01:03:19 @zanchey You gave the answer right in your post: The source of this problem is the insurance companies.

2024-03-02 21:03:54 p.s. We covered robo-therapy on Mystery AI Hype Theater 3000 back in September, with Hannah Zeavin: https://www.buzzsprout.com/2126417/13544940-episode-13-beware-the-robo-t...

2024-03-02 20:44:00 What if -- instead of seeing the process of creating clinical documentation as mere busywork -- the tech bros understood it as possibly part of the process of care? What if -- instead of leading with the '

2024-03-02 20:43:52 “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” YIKESThis is a completely unrealistic expectation about what goes into verifying that kind of note and sounds like a recipe for overburdening the medical workforce/setting up errors./6

2024-03-02 20:43:40 When the author finally gets around to reporting on what **actual psychologists** have to say, it'

2024-03-02 20:43:25 The only studies cited are co-authored by the companies selling this crap. One of the supposedly positive findings is that people form a "

2024-03-02 20:42:36 They talk up the idea that this is effective because people are more willing to open up to a "

2024-03-02 20:42:30 For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted./2

2024-03-02 20:42:21 Arghh - more problematic reporting, this time about robo-therapists.https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-ther... thread: /1

2024-03-01 16:17:47 Ready for some more Mystery AI Hype Theater 3000? Join me and @alex for our next live stream in which we take on the AI hype infecting the social sciences. Monday March 4, noon Pacific: https://www.twitch.tv/dair_institute See you there!


2023-05-22 22:04:22 My point here is that it's always worth looking at the tradeoffs, even with products that seem "free" and generally empowering. And maybe asking how we empower communities and connections as well as individuals.

2023-05-22 22:02:39 And there are certainly times when a person needs to figure out how to get somewhere but can't leverage the kind of person to person connection they would need without the automated system (incl folks facing discrimination). >

2023-05-22 22:01:08 I'm not saying one way is better than the other. Some businesses might prefer to attract visitors directly while some neighborhoods might resent Google maps inspired traffic. >

2023-05-22 21:57:11 The tourist center model in particular located some power over the direction of tourist attention in a specific kind of institution. >

2023-05-22 21:56:03 Before we had Google maps, getting around required sharing knowledge with people--maybe going to a visitor center as a tourist or calling the business we intended to get to near home. >

2023-05-22 21:54:37 I get it. I appreciate that too. But this made me think about what other values we are sacrificing in this case. Here, I think it's social connection and interdependence. (And this puts me in mind of @abebab 's work on relational ethics: https://t.co/Fc91OtowsZ ) >

2023-05-22 21:50:07 @techwontsaveus @mjnblack At one point, @mjnblack is talking about the concept of "augmented humans" and mentions that she really appreciates Google Maps because of the independence it gives her when exploring new places. >

2023-05-22 21:48:36 I really enjoyed this episode of @techwontsaveus with @mjnblack -- interesting insight into the values that are shaping the technology we use (and thus are shaping social structures &

2023-05-22 14:05:30 @timnitGebru (cont) Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”

2023-05-22 14:05:16 @timnitGebru re AI doomerism in 2023: “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can aggregate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ >

2023-05-22 14:03:01 Great new profile of Dr. @timnitGebru in the Guardian. “I’m not worried about machines taking over the world

2023-05-22 12:57:23 RT @emilymbender: And not fall for either- Myth #1: The tech is moving to fast! Regulation can't keep up. Myth #2: The 'real' concern is…

2023-05-22 12:56:57 RT @emilymbender: So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separatel…

2023-05-21 15:40:21 Hey @kifleswing and @CNBC fact check: 1) I have never worked at Google (nor has McMillan-Major) 2) It's spelled "stochastic" https://t.co/7y2vGM1fNs https://t.co/xkzw3QskPJ

2023-05-21 14:00:56 @jpalasz I think in many regulatory contexts, talking about "automation" rather than "AI" might be clarifying.

2023-05-21 13:50:47 And not fall for either- Myth #1: The tech is moving too fast! Regulation can't keep up. Myth #2: The 'real' concern is rogue AGI that poses 'existential risk' to humanity.

2023-05-21 13:49:35 But I strongly doubt that saying "AI" is so new it needs its own "FDA" is going to get us there. Let's sit with and use the power that existing regulations already give us for collective governance. >

2023-05-21 13:48:05 Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?) Maybe we already have what we need, maybe there's something to add. >

2023-05-21 13:45:56 A final kind of risk that might not be adequately handled by existing frameworks is the risks that widely available media synthesis machines pose to the information ecosystems. >

2023-05-21 13:44:53 But the story changes when tech bros mistake "free for me to enjoy" for "free for me to collect" and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections. >

2023-05-21 13:42:24 Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its url), the economic harms were minimal. >

2023-05-21 13:39:44 Re (2), I'm thinking of the kinds of risks that happen when data is amassed (risks to privacy, e.g. around deanonymization being possible after just a few data points are collected) and also risks connected to the ease of data collection. >

2023-05-21 13:38:27 (That last point follows from the value sensitive design principle of considering pervasiveness: what happens when the technology is used by many?) >

2023-05-21 13:37:32 Re (1), we should be asking (as I think many are): how to ensure that people have recourse if automated systems make decisions that are detrimental to them --- and how to ensure that communities have recourse if patterns of decision create/worsen inequity. >

2023-05-21 13:36:33 I am not a policymaker (nor a lawyer) but my sense of it is that the gaps largely come up in cases where (1) automation obfuscates accountability or (2) data collection creates new risks. >

2023-05-21 13:35:14 Beyond that, we should be reasoning from identified harms to see how existing laws &

2023-05-21 13:34:01 Which is another way of saying: existing regulatory agencies should maintain their jurisdiction. And assert it, like the FTC (and here EEOC, CFPB and DOJ) are doing: https://t.co/d8HeeOAsse >

2023-05-21 13:33:29 I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for. >

2023-05-21 13:31:32 So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things. >

2023-05-21 12:36:17 RT @timnitGebru: "he warns that AI poses a massive threat through "accidental misuse." There is a surrealness to his candor — like if an o…

2023-05-20 22:21:42 RT @samhawkewrites: Tech bros: hey writers! guess what? We've solved your biggest problem! Writers: OMG that's so awesome! We're going to b…

2023-05-20 21:59:25 RT @mmitchell_ai: I've mostly not spoken up about longertermism, effective altruism, and AI. But when it comes to affecting what we priorit…

2023-05-20 13:44:20 @histoftech In response to the prompt "What detracted from your learning?": "830 am" . . . . It was an afternoon class.

2023-05-20 03:10:12 @timnitGebru @Rogntuudju That one is not mine. I suspect you're thinking of this one: https://t.co/77kIgQizn1


2023-04-24 15:37:44 RT @MicroSFF: "You've been chosen," the spirit said. "What?" "Save the world, make it kinder, cleaner, safer." "Me?" "Yes." "Alone?" "We ch…

2023-04-24 15:35:37 Got just a moment for 10 delightful stories? @MicroSFF has you covered: https://t.co/d3D5maESUf

2023-04-24 14:58:08 RT @daveyalba: “If you want to stay on at Google, you have to serve the system and not contradict it,” @L_badikho told me. "You have to bec…

2023-04-23 21:35:43 @mmitchell_ai Ugh, I'm so sorry.

2023-04-23 01:39:41 RT @bobehayes: .@Google CEO peddles #AIhype on CBS @60Minutes "You know what approaching this with humility would mean @sundarpichai? It…

2023-04-22 19:08:44 RT @bgzimmer: In this weekend's @WSJ Review section: New chatbots have been plagued by "hallucinations," generating text that seems plausib…

2023-04-22 13:14:54 RT @emilymbender: "The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- t…

2023-04-21 21:36:32 Thank you, @daveyalba for this reporting.

2023-04-21 21:36:15 “When Google’s management does grapple with ethics concerns publicly, they tend to speak about hypothetical future scenarios about an all-powerful technology that cannot be controlled by human beings” So tempting to focus on fictional future harms over current, real ones. >

2023-04-21 21:35:51 “But now that the priority is releasing generative AI products above all, ethics employees said it’s become futile to speak up.” And it shows… >

2023-04-21 21:35:39 “One former employee said they asked to work on fairness in machine learning and they were routinely discouraged — to the point that it affected their performance review.” Not a good look, Google. >

2023-04-21 21:35:25 “Employees say they’re concerned that the speed of development is not allowing enough time to study potential harms.” Employees are correct. >

2023-04-21 21:35:11 “On the same day, [Google] announced that it would be weaving generative AI into its health-care offerings.” >

2023-04-21 21:34:58 “But ChatGPT’s remarkable debut meant that by early this year, there was no turning back.” False. We turned back from lead in gasoline. We turned back from ozone-destroying CFCs. We can turn back from text synthesis machines. >

2023-04-21 21:34:40 “Silicon Valley as a whole is still wrestling with how to reconcile competitive pressures with safety.” Are they though? It seems to me that those in charge (i.e. VCs and C-suite execs) are really only interested in competition (for $$). >

2023-04-21 21:33:48 @daveyalba @Bloomberg "Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings" We don’t tolerate “experiments” that pollute the natural ecosystem, nor should we tolerate those that pollute the information ecosystem. >

2023-04-21 21:32:44 @daveyalba @Bloomberg “The group working on ethics that Google pledged to fortify is now disempowered and demoralized, the current and former workers said.” >

2023-04-21 21:32:17 "The trusted internet-search giant is providing low-quality information in a race to keep up with the competition," --- this phrasing makes it starkly clear that it's a race to nowhere good. https://t.co/iJPiKhZZ7W From @daveyalba at @Bloomberg

2023-04-21 20:58:35 RT @timnitGebru: “So now, all of a sudden, we have a research effort where we’re now trying to get to a thousand languages.” How their #AI…

2023-04-21 17:58:42 @xriskology The fact that it's hard to tell is ... something though.

2023-04-21 17:58:21 @xriskology I suspect the Twitter account behind "Dr. Gupta" is actually a hoax account.

2023-04-21 17:29:56 @MldDistractions @CriticalAI I would have objected just as strenuously to "paper caught my eye" or "paper jumped out at me" -- in all cases this is assigning agency to the paper and not its authors, while in the next breath naming some irrelevant man and suggesting he deserves credit.

2023-04-21 15:07:12 RT @emilymbender: No, emphatically not. Treating "AIs" as things to be nurtured and raised is NOT the path to constraining the actions of c…

2023-04-21 13:12:26 RT @emilymbender: Really infuriating article (not quite reporting --- it's mostly just the journo's own musings) in the Economist today. Th…

2023-04-21 13:02:18 RT @sfmnemonic: If you think about it, it's kind of heartening that it has taken only a couple of years for the general public to begin to…

2023-04-21 01:11:19 RT @bgzimmer: When chatbots produce responses untethered from reality, AI researchers call those responses "hallucinations." But the term h…


2023-04-14 23:45:44 RT @acmsigcas: Informative and extremely thought provoking response to the "AI pause" letter that has been circulating. E.g. "We should be…

2023-04-14 21:10:31 @DrTechlash @timnitGebru @mmitchell_ai @mcmillan_majora Otherwise, one is left with the impression that the only voices are AI Doomers and AI Boosters (plus maybe @STS_News who is quoted and I would say is neither).

2023-04-14 21:09:53 @DrTechlash We lay this out somewhat more thoroughly in our statement (from the listed authors of the Stochastic Parrots paper, @timnitGebru @mmitchell_ai @mcmillan_majora and me) to the "AI pause" letter: https://t.co/YsuDm8AHUs >

2023-04-14 21:08:44 I appreciate this guide to the AI Doomer and AI Doomerism from @DrTechlash https://t.co/goLBKj2W0H But I wish it also included more about the actual present harms being done in the name of "AI", one function of AI Doomerism being to avoid dealing with those. >

2023-04-14 18:13:53 RT @mmitchell_ai: I have loved @haydenfield's coverage of tech work+culture. @CNBC is lucky to have her! I forgot to share this great piece…

2023-04-14 17:38:04 h/t @SashaMTL for finding this, ahem, gem of a paper. https://t.co/9l9CDZgP0l

2023-04-14 16:38:23 RT @ProfNoahGian: Such a deep, nuanced, historically-grounded convo about language and AI (and hype, marketing, ethics, longtermism, corpor…

2023-04-14 14:20:19 And here's a new twist on "we used ChatGPT to write our paper". Of course. https://t.co/nFeJziBLI1

2023-04-14 14:18:41 I can't believe this needs to be said, but: LLMs are *optional*. Humans are not. >

2023-04-14 14:17:58 Look, you can't count the carbon emissions that people have for (check notes) existing as the "carbon cost" of the work that they do. >

2023-04-14 14:17:11 Okay, so are these 8 pages of motivated reasoning formatted like they've been submitted to Science or to Nature? https://t.co/h95redFrB1 >

2023-04-14 14:05:24 RT @amyjko: I'm excited to speak next Friday at Carnegie Mellon, unveiling my sabbatical work on Wordplay! It's one humble attempt to cente…

2023-04-14 14:05:01 RT @ShanaVWhite: "Society should build technology that helps us, rather than simply adjusting to whatever technology comes our way." -@timn

2023-04-14 13:49:41 It seems we've entered a whole new phase of #AIHype discourse. The good news: There seems to be some movement towards creating regulation. The bad: A lot of it seems to be informed by #AIHype coming from BigTech --- even among those who would work to limit the power of tech cos. https://t.co/crXXmlDbhB

2023-04-14 13:46:42 Oh, and Sen @BernieSanders -- don't get your news about "AI" from the @nytimes. They've been absolutely terrible about how they cover this. For example: https://t.co/0Xc7WVwBKi

2023-04-14 13:44:34 On making sure the regulation reflects the input, interests and needs of those who are the most impacted, see this statement from the listed authors of the Stochastic Parrots paper: https://t.co/YsuDm8AHUs https://t.co/R4JyKUbG1g

2023-04-14 13:42:29 @BernieSanders The move of anthropomorphizing "Sydney" or any other one of these "AIs" opens up room to displace that accountability. But accountability sits with corporations and the people that make them up. >

2023-04-14 13:41:21 Sen @BernieSanders you are right that we need regulation to ensure tech development that actually benefits everyone. Machine's aren't gathering info. Big Tech is using machines to gather it. Let's keep the focus on keeping corporations accountable. >

2023-04-14 13:35:16 RT @emilymbender: "Congress needs to ensure corps are not using people’s data w/p their consent, &

2023-04-14 13:35:12 RT @emilymbender: @timnitGebru "Congress needs to focus on regulating corporations and their practices, rather than playing into their hype…

2023-04-14 04:59:55 @mosermusic https://t.co/7YYD3QgF5R

2023-04-14 04:31:19 Case in point: #OpenAI's terms of service *still* say that the user is somehow responsible for what comes out of ChatGPT etc in response to their prompts. Let's get some regulation fixing this, stat. https://t.co/vMYe6GxLf6

2023-04-14 04:28:57 instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks." https://t.co/wtM9tRVQ2D >

2023-04-14 04:28:48 "Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, >

2023-04-14 04:27:11 "GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks can be carried over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer." >

2023-04-14 04:23:41 @timnitGebru "Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of “powerful digital minds.” This, by design, ascribes agency to the products rather than the organizations building them." -@timnitGebru https://t.co/LTgpTJILkr

2023-04-14 04:20:52 "Congress needs to ensure corps are not using people’s data w/p their consent, &

2023-04-14 01:15:01 @davidschlangen Looks amazing!

2023-04-14 00:49:25 Hey @JeffDean -- you can't "make nice" after causing harm without first making reparations. Meg's well deserved recognition by Time isn't yours to comment on. Not until you've addressed the harm you did. https://t.co/8LKwO0vBDV

2023-04-13 22:36:39 RT @DAIRInstitute: Did you miss #StochasticParrotsDay? You can now find the recordings here. https://t.co/W8PWYHVA2m

2023-04-13 22:22:49 #StochasticParrotsDay recordings are now available! Huge thanks to DAIR institute for hosting this, @timnitGebru for organizing such amazing panels, @mmitchell_ai and @mcmillan_majora for moderating, @alexhanna for producing and all the amazing panelists! https://t.co/rlwmLpYQsu

2023-04-13 18:41:56 I've known for a long time that @mmitchell_ai is AWESOME so it's very satisfying to see her awesomeness validated in this way :) https://t.co/Vf6e0ZmDjj

2023-04-13 18:35:52 RT @alienelf: I JUST LEARNT THAT @alexhanna &

2023-04-13 16:41:40 @mmitchell_ai So well deserved!!! Congrats

2023-04-13 15:02:59 RT @AJLUnited: The government is still using IDme to access tax accounts after promising to stop after many complaints. Read @jovialjoy 's…

2023-04-13 13:36:03 RT @emilymbender: @SashaMTL p.s. re "the horse is out of the barn". That metaphor is used to express helplessness, a "there's-nothing-we-ca…

2023-04-13 13:35:57 RT @emilymbender: @SashaMTL Is the horse out of the barn? Do we just have to stand by and watch this go down? Indeed not. We've collective…

2023-04-13 13:35:47 RT @emilymbender: This is a great summary by @SashaMTL of the environmental and human costs of so-called "AI" technology. https://t.co/Yji

2023-04-13 13:35:27 RT @parismarx: If you’re still trying to wrap your head around ChatGPT, you should listen to my conversation with @emilymbender. She lays…

2023-04-13 13:14:33 RT @techwontsaveus: This week @emilymbender joins @parismarx to discuss why large language models and tools like ChatGPT are not intelligen…

2023-04-12 20:46:41 RT @tante: If you use the "Microsoft Sparks of AGI" paper to argue for whatever "AI" hype at least be aware that you put yourself in a euge…

2023-04-12 20:19:51 RT @mmitchell_ai: Okay. Inspired by news &

2023-04-12 17:01:47 @AngliPartners @SashaMTL Credit for identifying the #TESCREAL bundle (and naming it) goes to @timnitGebru and @xriskology !

2023-04-12 16:49:50 @SEFrench @SashaMTL That's so perfect!

2023-04-12 16:22:30 @SashaMTL p.s. re "the horse is out of the barn". That metaphor is used to express helplessness, a "there's-nothing-we-can-do-now" attitude. But do people with escaped horses really say "Oh well, was nice knowing ya horsey"? I'd guess probably not.

2023-04-12 16:21:31 @SashaMTL Let's heed @SashaMTL 's call to get engaged with the regulatory process! Relatedly: https://t.co/Zkfvos9G1Z >

2023-04-12 16:19:57 @SashaMTL Is the horse out of the barn? Do we just have to stand by and watch this go down? Indeed not. We've collectively handled other sources of pollution (e.g. lead in gasoline, CFCs harming the ozone layer) before and we can do it again. >

2023-04-12 16:18:39 @SashaMTL Unknown environmental costs, non-reproducible science, data theft, and exploitative labor practices. And for what? A shiny toy to play with for the masses + the ability to claim "AGI" (while blocking scrutiny of the claim) for OpenAI and other #TESCREAL adherents. >

2023-04-12 16:17:07 @SashaMTL "it’s difficult to carry out external evaluations and audits of these models since you can’t even be sure that the underlying model is the same every time you query it. It also means that you can’t do scientific research on them, given that studies must be reproducible." >

2023-04-12 16:15:56 @SashaMTL "with ChatGPT, [...] thousands of copies of the model are running in parallel [...] generating metric tons of carbon emissions. It’s hard to estimate the exact quantity of emissions this results in, given the secrecy and lack of transparency around these big LLMs." >

2023-04-12 16:14:10 This is a great summary by @SashaMTL of the environmental and human costs of so-called "AI" technology. https://t.co/YjiGbflBnu >

2023-04-12 15:27:53 RT @jennaburrell: This interview with @alondra on @ezrakelin was very satisfying, particularly hearing her call for more public participati…

2023-04-12 04:27:56 RT @NoraPoggi: I attempted to summarize Stochastic Parrots Day, full of brilliant experts sharing invaluable insights on AI and calls to ac…

2023-04-11 20:56:25 RT @alexhanna: To me, it speaks to something of the lack of an epistemic core to AI research. There's a desire to be grounded in what only…

2023-04-11 20:56:12 RT @alexhanna: Been thinking a lot lately about the irreverence of citation that AI researchers give to non-technical texts. Citations are…

2023-04-11 19:25:03 RT @timnitGebru: Essential reading!! https://t.co/aNesE17CMS

2023-04-11 13:26:36 RT @emilymbender: @timnitGebru @xriskology Case in point: Did you know that the "sparks of AGI" paper takes its definition of "intelligence"…

2023-04-11 13:26:29 RT @emilymbender: Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics &

2023-04-10 20:44:44 @SebastienBubeck No, the roots of the issues here are racism and white supremacy.

2023-04-10 20:20:40 @SebastienBubeck Those atrocious claims aren't just "litter" on an otherwise blameless field, but rather part of its fabric.

2023-04-10 20:20:06 @SebastienBubeck You might find some useful starting points in the references from this talk: https://t.co/3KDiNyaM4a >

2023-04-10 20:19:15 @SebastienBubeck I'm glad you are disavowing, but that is only the start --- as I lay out in this thread that you are replying to. You need to read up on the harms caused by race science (of which "IQ" is a major part) and work through how those harms relate to the work you are doing. >

2023-04-10 18:39:55 RT @alexhanna: Take the "AGI don't cite Charles Murray" challenge https://t.co/9GZBshZBx6

2023-04-10 18:29:19 @clairesonos Yeah -- I wanted to illustrate the point I was making without also giving their words more life. This seems like a decent strategy. (Learned it from @LeonDerczynski )

2023-04-10 18:28:07 So, if you'd like not to be racist (and I hope &

2023-04-10 18:26:29 That's at the *foundation* of the "sparks of AGI" paper, since that question is asking "is GPT-4 intelligent" and using the definition of intelligence from the editorial given above. >

2023-04-10 18:25:45 @timnitGebru @xriskology Case in point: Did you know that the "sparks of AGI" paper takes its definition of "intelligence" from an editorial signed by 52 scholars *defending* IQ as "not racist" and making assertions like those in these screencaps: >

2023-04-10 18:03:12 @timnitGebru @xriskology You can't take work that's been exposed as racist and "clean it up" with a footnote saying "But not the racist bits". You've got to actually work on being anti-racist. >

2023-04-10 18:02:34 @timnitGebru @xriskology If you want to break that connection, you've got to do the work: read those who have been documenting the harms, understand how those harms relate to the work you were pointing to and interrogate how the concepts you've been drawing on could mean your work is perpetuating harm.>

2023-04-10 18:01:13 @timnitGebru @xriskology And just to be very clear: If your work has been exposed as pointing to eugenicist or otherwise racist underpinnings, it's not enough to just "disavow". >

2023-04-10 17:59:40 @timnitGebru @xriskology Also great for understanding what the #TESCREAL bundle of ideologies is, how they connect, and why any serious work towards improving things for people on this planet should be very clearly distanced from any of that. >

2023-04-10 17:58:22 Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics &

2023-04-10 16:16:54 RT @emilymbender: @ChrisMurphyCT https://t.co/G3hpHgEKeQ

2023-04-10 14:26:08 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Thank you. And that is my point exactly: linguistics is relevant to the broader discourse of the social impact of these technologies. I am here **as a linguist** making my contribution. To say that I am a CS or AI researcher is to erase the relevance of linguistics.

2023-04-10 14:24:54 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yes, I endeavor to speak only from my expertise, which is (computational) linguistics. Here are some publications that are representative: https://t.co/z1F7fEBCMn https://t.co/kwACyKvDtL https://t.co/rkDjc4kDxj

2023-04-10 14:19:01 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yeah -- I definitely am spending a lot of time talking to the media these days, but my expertise is in linguistics and the media come to me to cut through the #AIhype, which I use linguistics to do. That's not the same thing as being an AI researcher (nor computer scientist).

2023-04-10 14:03:38 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 No website that I maintain says that. Here is my website: https://t.co/gMe04yP96Q

2023-04-10 13:49:49 @skdevitt @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 So reply to @sarahkendrew then? I'm neither in AI nor in CS.

2023-04-10 04:59:06 How is this even legal? https://t.co/CThoHAMNBO

2023-04-10 04:49:42 @ChrisMurphyCT https://t.co/G3hpHgEKeQ

2023-04-09 20:51:27 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 And everything about AI (and a lot about CS) these days is enormous power grabs. So I think it is really important to stand up for the value of research *outside* these areas.

2023-04-09 20:50:55 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I am mixed up in the "AI ethics" conversation because I find that the perspective of linguistics is important to help steer things away from societal harm. But that doesn't make me an AI researcher. >

2023-04-09 20:50:24 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I appreciate that you framed this as a not the superset --- but it's still a miss. My work in linguistics (including in compling) has never been motivated by the project of "AI". >

2023-04-09 19:39:48 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Finally, to say that "counting" me as something I'm not is a compliment is saying that what I actually am is somehow less than. No thank you.

2023-04-09 19:39:18 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 If you think those things are "AI", what's the difference between them and say spreadsheet software? >

2023-04-09 19:38:49 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 Yeah, it's not a compliment to erase my field and claim it as CS. Linguistics is worthwhile in its own right. Similarly, do you think of spell check as "AI"? How about computational methods in support of lexicography? Data mining of EHRs to match patients to clinical trials? >

2023-04-09 14:25:50 @j2bryson @sarahkendrew @mmitchell_ai @CGMundell @timnitGebru @MelMitchell1 I am a linguist, not a computer scientist. My degrees are all in linguistics and I have been a prof in the Dept of Linguistics at UW since 2003. My field (computational linguistics/NLP) is an interdisciplinary field and not a subfield of CS (much less a subfield of AI).

2023-04-09 13:24:30 RT @emilymbender: .@ChrisMurphyCT I'd like to set the record straight. I can understand how the reaction of the tech world to your tweet wa…

2023-04-08 23:29:24 RT @TaliaRinger: Just caught this incredible talk by @timnitGebru on eugenics, "AGI," and the TESCREAL ideologies. I'm so glad this exists

2023-04-08 16:38:49 RT @anggarrgoon: @emilymbender @ChrisMurphyCT You're my senator, @ChrisMurphyCT . Prof. Bender is right, and I or other linguists watching…

2023-04-08 13:42:46 Postscriptum: My offer to speak with you or someone in your office stands! https://t.co/6wz5bJzEzH

2023-04-08 13:42:00 I look to you as a Senator who represents the interests of the people --- not corporations, and not just the wealthy --- and so I hope that you will bring that perspective and policy-making expertise to this issue as well.

2023-04-08 13:41:16 My frustration on seeing your tweet was not with you, but with the way that your tweet reflected the view points of the corporate interests (Google, Microsoft) and longtermist AI cultists (OpenAI, Future of Life Institute) --- suggesting that they had your ear. >

2023-04-08 13:40:11 But whatever the regulatory outcome is, it should be produced through a democratic process that centers the perspectives of those experiencing the harms of so-called AI systems now, as we lay out in our statement. >

2023-04-08 13:38:50 @ChrisMurphyCT I advocate for transparency (see prev tweet), accountability (purveyors of so-called generative AI systems should be accountable for their output -- be it slander, dangerous medical advice, privacy violations, etc). >

2023-04-08 13:37:53 @ChrisMurphyCT From the statement put out by the four listed authors of the Stochastic Parrots paper recently: >

2023-04-08 13:35:47 @ChrisMurphyCT I and many others have been calling for regulation of so-called AI systems, based on shared governance, meaning that we absolutely need our elected officials to be centrally involved. >

2023-04-08 13:34:38 .@ChrisMurphyCT I'd like to set the record straight. I can understand how the reaction of the tech world to your tweet was unpleasant, but please know that for myself (and many others) we were emphatically not trying to keep policymakers away from this topic. >

2023-04-07 23:52:16 @LeonDerczynski @SashaMTL Kinda exists? https://t.co/0mqzLjBTxs

2023-04-07 23:47:54 @mmitchell_ai Bummer &

2023-04-07 20:22:36 RT @mmitchell_ai: It's always an honor to be covered in any news publication. At the same time, I am pretty frustrated with the NYT. Let's…

2023-04-07 20:22:11 @alex And thank you @mmitchell_ai for calling out how this is part of a larger system that erases the work of people on the lower end of power differentials. https://t.co/oROngobMqX

2023-04-07 20:21:38 Thank you, @alex. I'm fully fed up with this pattern. I came up with that phrase, and used it (with my co-authors) in our paper. But then someone with a lot of fame &

2023-04-07 17:49:08 I'd say it was cathartic, at least for me. I hope the audience thought so too! https://t.co/5pGxWCyYDl

2023-04-07 17:45:14 RT @alexhanna: If you missed the last Mystery AI Hype Theater 3000 with @emilymbender and me, no worries! We figured out how to do VOD, so…

2023-04-07 14:58:40 RT @alexhanna: Join us in two hours, as we read the GPT-4 "System Card" so you don't have to.

2023-04-07 12:49:47 Today! https://t.co/FK68w2VIXj

2023-04-06 20:10:57 @ShannonVallor @DanielaTafani @mmitchell_ai @ayahbdeir @willie_agnew But "beekeeping" isn't ambiguous between the activity (humans taking care of bees) and some other, mythological autonomous entity. "AI" is, and that's why the phrase becomes insidious.

2023-04-06 19:23:56 RT @haleyhaala: Any good studies on positivism and interpretivism in #NLProc and computational social science? Looking for sources!

2023-04-06 17:54:56 Tomorrow! https://t.co/FK68w2VIXj

2023-04-06 02:52:07 @TaliaRinger It's so frustrating.

2023-04-06 02:02:07 @mmitchell_ai @ShannonVallor @ayahbdeir @willie_agnew For UW RAISE I advocated for the acronym actually being Responsibility in AI Systems and Experiences, rather than Responsible AI Systems and Experiences, since I don't like the ambiguity of "Responsible AI" where one reading is that the AI itself is responsible.

2023-04-05 21:59:09 RT @ross_minor: Welp, This is it folks. Twitter has blocked API access for third-party clients, including those that make the site more acc…

2023-04-05 21:36:15 @DaniShanley @alexhanna Yes -- You can see previous episodes here. (But note there's some delay .. Ep 9 &

2023-04-05 13:17:04 RT @schock: https://t.co/K7MZuAnV1l

2023-04-05 13:06:30 RT @emilymbender: If it seems like the world of "AI" and "AI ethics" is moving too fast, I'd like to point out that the fundamental problem…

2023-04-05 13:06:16 RT @emilymbender: Thank you @SashaMTL for pointing the spotlight where it matters!

2023-04-05 13:03:02 RT @emilymbender: Who else is feeling buried under #AIhype after the past few weeks? If you’re ready for some cathartic BS shoveling, come…

2023-04-05 13:02:56 RT @emilymbender: #MAIHT3k Ep 8 is now up! @alexhanna and I greeted the new year by taking on the #ChatGPT hype + of course, some Fresh AI…

2023-04-04 19:16:33 RT @alexhanna: By the way, @DAIRInstitute videos have a new home! https://t.co/qSImGM2EaN and MAIH3K has a new channel -- https://t.co/bnWj

2023-04-04 19:07:05 @alexhanna And if you want to catch the next episode live, deets are here: https://t.co/FK68w2VIXj

2023-04-04 19:06:42 #MAIHT3k Ep 8 is now up! @alexhanna and I greeted the new year by taking on the #ChatGPT hype + of course, some Fresh AI Hell. https://t.co/G3n0Ku89dg >

2023-04-04 19:05:39 RT @rachelmetz: A really smart, nuanced piece by ⁦@SashaMTL⁩. As she notes, ⁦@timnitGebru⁩, ⁦@ruha9⁩, ⁦@rajiinio⁩ (and many more!) have pus…

2023-04-04 18:04:06 @alexhanna We plan to dig through the GPT4 "system card", the "sparks" fan fiction novella, and the "skynet is falling" letter.

2023-04-04 18:02:29 Who else is feeling buried under #AIhype after the past few weeks? If you’re ready for some cathartic BS shoveling, come join me and @alexhanna as we dig out from under all of this on the next episode of MAIHT3k. Friday April 7 9:30-10:30am Pacific Time https://t.co/ETRqVjeTrh

2023-04-04 17:45:21 RT @AINowInstitute: As @timnitgebru, @emilymbender, @mcmillan_majora and @mmitchell_ai made clear, we need more“focus on the very real and…

2023-04-04 17:44:41 Thank you @SashaMTL for pointing the spotlight where it matters! https://t.co/UUlFoLjFge

2023-04-04 13:20:42 RT @merbroussard: Tomorrow! AI Cyber Lunch: Meredith Broussard on "Confronting Race, Gender, &

2023-04-04 13:01:19 RT @emilymbender: "Imagine looking at the list of your published papers 10 years from now: do you want it to be longer, or containing more…

2023-04-03 19:41:03 "Imagine looking at the list of your published papers 10 years from now: do you want it to be longer, or containing more things that you are proud of long-term?" Wise words from several #NLProc scholars thinking about what it means to do science in our field: https://t.co/07Rbd2ltmd

2023-04-03 18:24:51 RT @AASchapiro: A good reminder from @DAIRInstitute to stay alert to current AI harms. https://t.co/F7b5HhYmKC Lots of stuff is under the…

2023-04-03 18:00:44 @benoitfrenay Not exactly framed that way, but this talk is relevant, I think: https://t.co/3KDiNyaM4a

2023-04-03 17:32:30 We recorded this episode of Factually! with @adamconover before Blake Lemoine got the press all riled up by claiming that LaMDA was sentient. Everything I said was still relevant: https://t.co/iVmcVmkISO

2023-04-03 17:31:35 We recorded this interview (for @marketplace tech) *before* the "AI pause" letter dropped. Everything I said was still relevant. https://t.co/NOZ4hUKktK >

2023-04-03 17:30:58 If it seems like the world of "AI" and "AI ethics" is moving too fast, I'd like to point out that the fundamental problems are in fact relatively unchanging and keeping an eye on the people involved can be a good anchor. Two cases in point: >

2023-04-03 16:59:43 RT @Marketplace: Thousands of experts are sounding alarms about a potential dark future created by AI. Computational linguist @emilymbend…

2023-04-03 15:35:24 RT @zeitonline: Muss man Angst vor superintelligenten Maschinen haben? Blödsinn, sagt die KI-Ethikerin @emilymbender im Interview. Gefährli…

2023-04-03 13:36:59 RT @emilymbender: To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI saf…

2023-04-03 13:24:10 RT @shashib: Got a lot of good #ai insights from @emilymbender in conversation with @meghamama https://t.co/viGUh7ySmJ

2023-04-03 03:09:12 A bunch of AI researchers high on their own supply wrote a ridiculous letter and got famous people including a certain billionaire man-child to sign, and in the process misappropriated our work. So we speak up and somehow we're at fault? I think NOT.

2023-04-03 02:49:17 RT @timnitGebru: What kills me is that THE SAME DUDES who call themselves such &

2023-04-03 02:39:15 RT @timnitGebru: "Why would you, a CEO or executive at a high-profile technology company...proclaim how worried you are about the product…

2023-04-03 02:28:35 Yes, we need regulation. But as we said: "We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities." https://t.co/YsuDm8AHUs

2023-04-03 02:27:11 It's frankly infuriating to read a signatory to the "AI pause" letter complaining that the statement we released from the listed authors of the Stochastic Parrots paper somehow squandered the "opportunity" created by the "AI pause" letter in the first place. >

2023-04-03 02:25:54 If the call for "AI safety" is couched in terms of protecting humanity from rogue AIs, it very conveniently displaces accountability away from the corporations scaling harm in the name of profits. >

2023-04-03 02:24:45 If (even) the people arguing for a moratorium on AI development do so bc they ostensibly fear the "AIs" becoming too powerful, they are lending credibility to every politician who wants to gut social services by having them allocated by "AIs" that are surely "smart" and "fair".>

2023-04-03 02:23:00 #AIhype isn't the only problem, for sure, but it is definitely a problem and one that exacerbates others. If LLMs are maybe showing the "first sparks of AGI" (they are NOT) then it's easier to sell them as reasonable information access systems (they are NOT). >

2023-04-03 02:21:55 To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people is anathema to ethical development of the technology. >

2023-04-02 23:54:29 RT @timnitGebru: I recommend that everyone read this entire thread &

2023-04-02 22:46:20 @jamesofputney @TaliaRinger Yeah, I haven't read it but from all I hear, his book is terrible. That seems kinda orthogonal to this discussion tho?

2023-04-02 22:09:37 @TaliaRinger My guess is that's two-fold: 1) ChatGPT made the experience of playing with these models much more widely accessible 2) MacAskill's recent promo activities for his book (and thus weird longtermist AGI doom fantasies)

2023-04-02 01:16:55 RT @mmitchell_ai: The AI ethics idea of "think about short-, mid- and long-term harms" is constantly regurgitated as if it's "JUST think ab…

2023-04-01 20:36:44 I appreciate this (humorous, but also informative!) explainer video by @adamconover --- and am especially tickled by the Stochastic Parrots shout out (and quote) https://t.co/o49c4Qm42j

2023-04-01 15:13:00 RT @mmitchell_ai: TIRED: AI Apocalypse WIRED: Governance structures! Wait why is no one excited.

2023-04-01 15:05:42 RT @schock: This statement is very powerful: https://t.co/risQH5CtsK

2023-04-01 14:33:51 RT @emilymbender: "Accountability properly lies not with the artifacts but with their builders."

2023-04-01 13:21:05 RT @cfiesler: So anyway, as a reminder, whereas I think that speculation is a key skill for technologists, the point of e.g. the Black Mirr…

2023-04-01 13:10:38 RT @cfiesler: Some of my work focuses on ethical speculation. How can we think through potential harm of tech before it's released instead…

2023-04-01 13:09:18 RT @timnitGebru: Means a lot coming from THE Sherrilyn Ifill. I think we're going to claim @emilymbender at DAIR even though she's at Unive…

2023-04-01 13:09:10 RT @techwontsaveus: “The current race towards ever larger ‘AI experiments’ is not a preordained path where our only choice is how fast to r…

2023-04-01 13:07:44 RT @STS_News: It’s nice to have voices of reason on this stuff.

2023-03-31 23:44:56 RT @DiverseInAI: #StochasticParrots, the 2021 @FAccTConference paper on large language models by @emilymbender @timnitGebru @mcmillan_major…

2023-03-31 22:08:49 RT @mmitchell_ai: There are clearly foreseeable long-term AI harms. To address them, regulatory efforts should focus on transparency, accou…

2023-03-31 21:47:25 RT @SIfill_: This letter by @timnitGebru &

2023-03-31 21:17:35 RT @amyjko: My absolute favorite line: 'We should be building machines that work for us, instead of "adapting" society to be machine readab…

2023-03-31 21:12:40 RT @xriskology: This statement ,a response to the recent FLI "open letter" on AI, is so very important. I wish @TIME would give one of thes…

2023-03-31 20:46:34 RT @_alialkhatib: finally, an open letter about AI actually worth retweeting

2023-03-31 20:46:18 RT @kharijohnson: “The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, whi…

2023-03-31 20:46:08 RT @alexhanna: "We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are r…

2023-03-31 20:20:25 RT @mmitchell_ai: As I was privately babbling out my own diatribe on the FLI letter yesterday, I was honored to be pinged by @emilymbender…

2023-03-31 20:07:09 RT @DAIRInstitute: Read the statement from #StochasticParrots authors @emilymbender @timnitGebru @mcmillan_majora and @mmitchell_ai here:…

2023-03-31 19:59:52 RT @timnitGebru: Since we've been looking for more things to do, @emilymbender @mmitchell_ai @mcmillan_majora and I wrote a statement about…

2023-03-31 19:58:51 "Accountability properly lies not with the artifacts but with their builders." https://t.co/VgHmh8VdoW

2023-03-31 19:55:15 Statement from the listed authors of Stochastic Parrots on the “AI pause” letter https://t.co/YsuDm8AHUs "Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices." w/@timnitGebru @mmitchell_ai and @mcmillan_majora

2023-03-31 18:59:03 @martinjanello @scyrusk Huh? I am definitely standing up to AI hype and not part of an organization that is building AI (slow or fast).

2023-03-31 17:41:13 @martinjanello @scyrusk Would you care to clarify what you mean by "both sides" here?

2023-03-31 16:50:41 A journalist recently reflected to me that those of us standing up to #AIhype are generally not paid to do so --- in stark contrast to those peddling the hype. So it's particularly gratifying to know we're being effective. Thank you, @scyrusk! https://t.co/2SUjZZ1mcV

2023-03-31 12:22:25 RT @Soccermatics: The Future of Life Institute is a problem. Being in same age-group (lower end maybe)/cultural background (8-bit progra…

2023-03-31 12:10:34 RT @erikve: Two research fellowships in #NLProc available at the University of Oslo, focusing on event extraction in the domain of armed co…

2023-03-31 11:25:24 RT @Soccermatics: For the Future of Lifers the rules don't seem to apply. They don't need to write detailed articles explaining their think…

2023-03-31 11:25:21 RT @Soccermatics: There are lots more... and if I get some time later I will share more. But as I write the list my embarrassment turns to…

2023-03-31 02:56:15 RT @scyrusk: In my ethics class, I presented "the letter". First, I showed the content of the letter and the first few signatures. Then,…

2023-03-31 01:40:49 RT @mmitchell_ai: It's so weird to me that it's the AI Ethics crowd getting constantly bashed for "fear mongering" because we describe syst…

2023-03-30 23:49:57 RT @emilymbender: Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long…

2023-03-30 21:29:36 RT @billt: Yes to this. SF narratives don’t make a good basis for public discourse, and we journalists should resist them and look more wid…

2023-03-30 19:55:08 @mmitchell_ai Oh no! I hope you heal quickly and get some time to rest.

2023-03-30 13:01:12 RT @emilymbender: My thread from last night on the hypey "open letter" as a blog post: https://t.co/zuE2A39W5F #LLM #AIhype #GPT4

2023-03-30 00:52:13 RT @mmitchell_ai: Super helpful article about "the letter" from @chloexiang! Makes the connection to longtermism that some have been confus…

2023-03-30 00:05:05 Now as a blog post: https://t.co/zuE2A39W5F

2023-03-29 23:25:59 RT @timnitGebru: https://t.co/8DnD2Kc9ye

2023-03-29 21:31:43 My thread from last night on the hypey "open letter" as a blog post: https://t.co/zuE2A39W5F #LLM #AIhype #GPT4

2023-03-29 19:57:42 RT @solarpunkcast: I'm begging anyone interested in AI to listen to the researchers and not the tech bros. AI is already dangerous without…

2023-03-29 19:32:56 RT @tante: PR as open letter https://t.co/aEYmgMRhLb

2023-03-29 19:10:55 @MarkBrakel @FLIxrisk You wanna argue that you aren't longtermist? List your funders, make sure you don't have any longtermists on your board, and stop publishing alarmist open letters that are dripping with xrisk-style AI hype.

2023-03-29 19:10:01 @MarkBrakel @FLIxrisk From your "Funding" page (which doesn't actually list your funders): "With the exception of Jaan Tallinn, who has served on FLI’s Board of Directors since its founding, these donors do not influence FLI’s positions" And re Jaan Tallinn: https://t.co/VCiap7aka7

2023-03-29 17:26:52 @fabio_cuzzolin It's exhausting.

2023-03-29 16:04:35 RT @SashaMTL: Best take on "the letter" so far by @ruchowdh, who else? (alluding to signatories of the letter such as John Wick, Sam Altma…

2023-03-29 13:24:37 RT @danmcquillan: "It turns out that AI is harmful, but we really, really want it to work and be the future of humanity, so can we please p…

2023-03-29 13:20:50 RT @xriskology: Absolutely amazing thread here. Very much worth reading to the end: https://t.co/oiAT36m8Dl

2023-03-29 13:20:24 RT @timnitGebru: This is what they don't want to talk about. https://t.co/THy1EjMOiW

2023-03-29 12:58:32 RT @emilymbender: Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerf…

2023-03-29 12:57:36 RT @emilymbender: Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripp…

2023-03-29 12:46:16 RT @SashaMTL: My favorite part of yet another amazing thread from @emilymbender ! There are definitely parts of the letter that I can get b…

2023-03-29 12:44:14 RT @djleufer: Hot take on the letter for a moratorium on training systems more powerful than GPT-4 Anything worthwhile in it was already s…

2023-03-29 04:49:19 Always check the footnotes https://t.co/pOjvn2rGFl

2023-03-29 04:24:09 Broke the threading: https://t.co/nquBe2nzMY

2023-03-29 04:04:56 Two corrections: 1) Sorry @schock for misspelling your name!! 2) I meant to add on "general tests" see: https://t.co/kR4ZA1k7uz

2023-03-29 03:51:03 Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Constanza-Chock and journalists like Karen Hao and Billy Perrigo.

2023-03-29 03:50:25 Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerful." Listen instead to those who are studying how corporations (and govt) are using technology (and the narratives of "AI") to concentrate and wield power. >

2023-03-29 03:47:29 Also "the dramatic economic and political disruptions that AI will cause". Uh, we don't have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment). >

2023-03-29 03:47:10 Yes, there should be robust public funding but I'd prioritize non-CS fields that look at the impacts of these things over "technical AI safety research". >

2023-03-29 03:46:48 Yes, there should be liability --- but that liability should clearly rest with people &

2023-03-29 03:46:38 Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you've encountered synthetic text, images, voices, etc.) >

2023-03-29 03:46:26 Some of these policy goals make sense: >

2023-03-29 03:45:56 Uh, accurate, transparent and interpretable make sense. "Safe", depending on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype >

2023-03-29 03:45:44 They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal." >

2023-03-29 03:45:34 Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources). >

2023-03-29 03:44:59 Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI". >

2023-03-29 03:44:47 Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they're really building AI will consider it framed like this? >

2023-03-29 03:44:05 I mean, I'm glad that the letter authors &

2023-03-29 03:43:39 On the GPT-4 ad copy: https://t.co/OcWAuEtWAZ >

2023-03-29 03:42:29 On the "sparks" paper: https://t.co/5jvyk1qocE >

2023-03-29 03:42:11 Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT4. ROFLMAO. >

2023-03-29 03:40:54 And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes. >

2023-03-28 04:02:38 @n_vpatel @TonyHoWasHere Thank you!!

2023-03-28 03:55:56 Oops: sentiment machines should have been sentient machines. That was my typo but maybe @TonyHoWasHere can fix it. I so doubt the existence of such things I can't even type it, apparently. Also, I quickly wrote those comments between prepping two classes that start this week.

2023-03-28 03:28:24 @TonyHoWasHere @thedailybeast Quote continues: “But the folks selling those systems (notably OpenAI) would rather have policymakers worried about doomsday scenarios involving sentiment machines.”

2023-03-28 03:27:53 “We desperately need smart regulation around the collection and use of data, around automated decision systems, and around accountability for synthetic text and images,” -- me to @TonyHoWasHere at @thedailybeast https://t.co/AXGklM2VcZ >

2023-03-27 21:30:27 RT @NannaInie: Very proud to present a 3 minute teaser for our CHI LBW: Designing Participatory AI: Creative Professionals’ Worries and Ex…

2023-03-27 21:30:23 RT @LeonDerczynski: What do creative professionals think of generative AI? Here's a video from a (peer reviewed!) scientific study, to appe…

2023-03-27 20:49:24 I wonder if the folks who think GPT-X mapping from English to SQL or whatevs means it's "intelligent" also think that Google Translate is "intelligent" and/or "understanding" the input?

2023-03-27 19:42:52 @lathropa @alexhanna That's fine!

2023-03-27 19:09:30 @lathropa @alexhanna Ugh, no. Also, we usually work on textual artifacts, not videos.

2023-03-27 18:59:53 @afsteelersfan Yes: https://t.co/6S0OAthML7

2023-03-27 18:59:00 RT @_alialkhatib: remember when that idiot said he's a stochastic parrot? and now people are trying to say GPT is *more* than a stochastic…

2023-03-27 18:04:49 But people want to believe SO HARD that AGI is nigh. Remember: If #GPT4 or #ChatGPT or #Bing or #Bard generated some strings that make sense, that's because you made sense of them.

2023-03-27 18:01:05 What's particularly galling about this is that people are making these claims about a system that they don't have anywhere near full information about. Reminder that OpenAI said "for safety" they won't disclose training data, model architecture, etc. https://t.co/OcWAuEtWAZ >

2023-03-27 17:58:14 (Some of this I see because it's tweeted at me, but more of it comes to me by way of the standing search I have on the phrase "stochastic parrots" and its variants. The tweets in that column have been getting progressively more toxic over the past couple of months.) >

2023-03-27 17:57:28 Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent." Spoiler alert: It's not. Also, stop being so credulous. >

2023-03-27 17:05:24 @ChrisMurphyCT You can see my public-facing work here: https://t.co/XEc34KgwKG

2023-03-27 17:00:03 @ChrisMurphyCT Senator, that is incorrect, but I'm sure the marketing department at OpenAI appreciates your spreading this misinformation. Please have a staffer read up on what's going on with #AIhype and where the real dangers are. I'm happy to spend time talking with someone in your office.

2023-03-27 14:18:11 @LeonDerczynski Or be beneficial, depending on where in that >

2023-03-27 12:45:56 RT @safiyanoble: All of this. And also, it’s criticism of the models.

2023-03-26 21:38:23 @boydgraber Fun! One more connection to muppets here: https://t.co/kR4ZA1k7uz

2023-03-25 23:13:16 RT @Abebab: i'm SICK SICK SICK of all the hype and inaccurate and actively misleading narrative around LLMs every where i look be warned,…

2023-03-25 20:04:22 @tdietterich The quote tweet functionality is right there, if you want to share your realizations with the world, rather than addressing them to people who already know.

2023-03-25 20:03:42 @pgcorus lol -- you're claiming the mantle of "working for justice" and then in the same tweet pointing to some (unpeer-reviewed, btw) nonsense from one of the most prominent proponents of modern digital physiognomy? This is what the mute button is for. Buh-bye.

2023-03-25 20:00:40 @tdietterich I'm well aware of this. Not sure why you're mansplaining at me about it.

2023-03-25 19:57:39 @chirag_shah Next four posts: #Bing #ChatGPT #privacy #Microsoft https://t.co/InZujzgvZh

2023-03-25 19:55:42 @chirag_shah First four posts: https://t.co/dmh3WhnLsB

2023-03-25 19:53:34 It seems to me that this is yet another inherent problem to the idea that LLMs trained to simulate conversation would be a beneficial approach to information access (also one that @chirag_shah and I did not anticipate in our #CHIIR2022 paper). Screenshots follow. >

2023-03-25 19:52:22 Carl Bergstrom has a banger thread over on Mastodon about some serious #privacy problems with #Bing #GPT. You can find the thread at this link: https://t.co/jbeMgaqVlr And in screencaps below.

2023-03-25 18:58:36 Excellent thread debunking yet more #AIhype from the NYT (the same publication famous as a platform for anti-trans nonsense) https://t.co/1rAgK7IF8m

2023-03-25 18:52:49 RT @ProfNoahGian: these AI apocalypse estimates are completely unscientific, just made-up numbers, there's nothing meaningful to support th…

2023-03-25 15:55:25 @pgcorus I don't understand your point at all, but you did instruct me to "look at" a section of the paper I co-authored... If you're trying to say that our arguments no longer hold, I assure you they are all still valid.

2023-03-25 15:47:24 @pgcorus Are you telling me to read my own paper?

2023-03-25 14:50:51 Your LLMs aren't in need of protecting. They don't have feelings. They aren't little baby proto-AGIs in need of nurturing.

2023-03-25 14:50:02 Love to see how people complain about "criticism lobbed at LLMs" &

2023-03-25 14:45:54 @kathrynbck Thank you!! I'll ask on Monday about cites.

2023-03-25 14:17:24 RT @ShannonVallor: The most depressing thing about GPT-4 has nothing at all to do with the tech. It’s realising how many humans already be…

2023-03-25 14:13:28 RT @danmcquillan: looks like 'usefully wrong' is the new 'alternative facts' #AI #GPT4 #ChatGPT "Microsoft tries to justify A.I.‘s tendenc…

2023-03-25 13:50:37 RT @emilymbender: Q for #sociolinguistics #lazyweb: What are your favorite papers (or books) about the way that speakers negotiate meaning?

2023-03-25 13:50:33 RT @emilymbender: Reading about #ChatGPT plug-ins and wondering why this is framed as plug-ins for #ChatGPT (giving it "capabilities") rath…

2023-03-24 21:59:08 @yvonnezlam Thank you!

2023-03-24 21:58:59 @evanmiltenburg Thank you!

2023-03-24 21:58:51 @heatherklus Thank you!

2023-03-24 20:48:13 @rharang But why "powered"? That is, why is "AI" providing "power", rather than say functionality?

2023-03-24 20:41:42 @othernedwin This was NOT a request for #ChatGPT propaganda, TYVM.

2023-03-24 20:19:22 Another request for references --- what is good to read for the fundamentals of VUI (voice user interface) or chatbot design? Thx!

2023-03-24 20:18:10 Another metaphor I'm curious about: "AI" as "fuel" or "power" --- when people talk about "AI-powered technology" or "AI that fuels your creativity/curiosity". This seems to suggest that the AI is autonomously producing something... Where are my metaphor theorists at?

2023-03-24 20:14:59 Nevermind, I know why: This is #OpenAI yet again trying to sell their text synthesis machine as "an AI". #MathyMath #AIHype

2023-03-24 20:14:51 Reading about #ChatGPT plug-ins and wondering why this is framed as plug-ins for #ChatGPT (giving it "capabilities") rather than #ChatGPT as a plug-in to provide a conversational front-end to other services. https://t.co/OLHluhJ8Gx

2023-03-24 19:57:35 Q for #sociolinguistics #lazyweb: What are your favorite papers (or books) about the way that speakers negotiate meaning?

2023-03-24 19:16:55 RT @SashaMTL: Indeed, this fact is glossed over in all of the sparkles of AGI papers (as well as in all of the propaganda accompanying…

2023-03-24 15:15:53 @ndiakopoulos @emilybell You jumped into my mentions, to be defensive. Meanwhile, I see your pinned tweet. Calling for "nuance" while promoting a book titled "Automating the News"? I'll remain skeptical, thanks.

2023-03-24 15:06:48 @ndiakopoulos @emilybell So-called generative AI is an oil spill in our information ecosystem. I'm countering the people out there selling it as a reliable or useful source of information. If that makes you feel defensive, I wonder what it is that you are up to?

2023-03-24 14:59:07 @sabpenni @emilybell Which academic paper about ChatGPT?

2023-03-24 14:16:54 Lots of wisdom here! https://t.co/C4DMibXSlT

2023-03-24 14:16:47 RT @ruchowdh: I kinda hate that responsible AI has come full circle to the uneducated yet highly opinionated pontificating on topics they k…

2023-03-24 14:16:29 RT @ruchowdh: Fourth and most importantly - we need better ways of curating useful and structured public input on how to improve models WIT…

2023-03-24 14:16:21 RT @ruchowdh: Third, we cannot keep this paradigm where the world is effectively a testing ground for “research”

2023-03-24 14:16:11 RT @ruchowdh: Here’s what I think are the tangible problems and what’s changed - first this tech is easier to access. This revolution is le…

2023-03-24 14:15:47 RT @ruchowdh: It’s fun and easy to talk about things you won’t be accountable for - like a technology that you claim is minimum decades awa…

2023-03-24 02:33:46 @BritneyMuller On construct validity in general: https://t.co/kR4ZA1k7uz On Bar specifically, Ep 10 of Mystery AI Hype Theater 3000 (not yet released, but eventually to be found with the others): https://t.co/yZs162tjbL

2023-03-23 22:37:05 RT @Abebab: "let them eat LLMs" is what I hear every time I see people/orgs say they want to "alleviate poverty with LLMs" be warned,…

2023-03-23 22:35:16 RT @strubell: Thanks, Vijay! This is absolutely correct. To those who are concerned that I'm not engaging in normal scientific discourse an…

2023-03-23 18:42:32 Well, this promises to be entertaining! (And I also just downloaded the latex source. At least the first claim here checks out.) https://t.co/7aQMjvJXM5

2023-03-23 15:57:59 @xriskology @birchlse https://t.co/mo2XX0x4ZK

2023-03-23 14:19:56 @javisamo Please don't -- even if you are doing this to make a good point, the world does not need any more synthetic text floating around in it.

2023-03-23 13:29:06 And finally: "We conclude with reflections on societal influences of the recent technological leap" --- I'm not sure I even want to look to see what they have to say there.

2023-03-23 13:28:22 Comic interlude "In our exploration of GPT-4, we put special emphasis on discovering its limitations," (But apparently none on the limitations of their 'tests' for AGI.) >

2023-03-23 13:23:17 I guess one function of this novella is as a litmus test for journalists. Anyone who chooses to cover it as a story about "AGI being just around the corner" rather than "AI hype masquerading as research" clearly is not doing a reliable job covering this beat. https://t.co/mo2XX0x4ZK

2023-03-23 13:19:01 Pièce de résistance: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." >

2023-03-23 13:15:43 And "We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." >

2023-03-23 13:15:05 From the abstract of this 154 page novella: "We contend that (this early version of) GPT-4 is part of a new cohort of LLMs [...] that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models." >

2023-03-23 13:14:02 Remember when you went to Microsoft for stodgy but basically functional software and the bookstore for speculative fiction? arXiv may have been useful in physics and math (and other parts of CS) but it's a cesspool in "AI"—a reservoir for hype infections https://t.co/acxV4wm0vE

2023-03-23 13:05:01 RT @emilymbender: Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey…

2023-03-23 13:04:55 RT @emilymbender: Was so looking forward to this episode of @RadicalAIPod with @merbroussard and they did not disappoint! For @merbroussard…

2023-03-23 13:04:48 Hey journalists covering this story, talk to Casey! https://t.co/QaptVwrcbn

2023-03-23 00:29:12 RT @mmitchell_ai: Had fun talking to @strwbilly about Google's Bard release. One thing I explained is how companies say products are "an e…

2023-03-23 00:24:05 @davidchalmers42 And I stand by my statement that your original tweet is carrying water for those peddling AI hype. It doesn't define "AI", it suggests that we should be impressed with the text synthesis machines. And your follow up suggests that these so-called "AI tasks" are likewise valuable.

2023-03-23 00:02:55 RT @cfiesler: Just throwing this out there: I'm a tech ethics &

2023-03-22 23:14:19 Was so looking forward to this episode of @RadicalAIPod with @merbroussard and they did not disappoint! For @merbroussard neither the tech nor its social context is a black box, and she is so good at making the explanations approachable &

2023-03-22 20:58:49 RT @linakhanFTC: 1. Swathes of the economy now seem reliant on a small number of cloud computing providers. @FTC is seeking public input…

2023-03-22 18:06:26 RT @DLilloMartin: Registration is now open for the 2023 LSA Linguistic Institute, themed “Linguistics as Cognitive Science: Universality an…

2023-03-22 17:29:02 RT @mmitchell_ai: Had a big groan on G's framing of Bard. One thing that stood out: Google saying that one "collaborates" with Bard, not th…

2023-03-22 17:28:08 RT @timnitGebru: "Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.…

2023-03-22 15:51:18 Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey about #Bard... From the Stochastic Parrots paper, written in late 2020 and published in March 2021: w/@timnitGebru @mmitchell_ai @mcmillan_majora https://t.co/wrOIGKB999

2023-03-22 14:22:58 RT @emilymbender: More from the FTC! https://t.co/AqqIKcYVl8 A few choice quotes (but really, read the whole thing, it's great!): >

2023-03-22 01:37:00 RT @SashaMTL: My dudes, asking an LLM *any* question about itself (its training data, carbon footprint, abilities, etc.) is just contributi…

2023-03-21 14:16:56 Let me again express my gratitude for regulators who refuse to be blown away by so-called "AI capabilities" and instead look to how existing regulation might apply.

2023-03-21 14:15:21 "Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors." "The burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them." https://t.co/AqqIKcYVl8 >

2023-03-21 14:14:19 "Should you even be making or selling it?" "Are you effectively mitigating the risks?" "Are you over-relying on post-release detection?" "Are you misleading people about what they’re seeing, hearing, or reading?" https://t.co/AqqIKcYVl8 >

2023-03-21 14:12:36 "The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose." https://t.co/AqqIKcYVl8 >

2023-03-21 14:11:36 More from the FTC! https://t.co/AqqIKcYVl8 A few choice quotes (but really, read the whole thing, it's great!): >

2023-03-21 13:40:57 RT @emilymbender: Several things that can all be true at once: 1. Open access publishing is important 2. Peer review is not perfect 3. Com…

2023-03-21 12:18:30 RT @chirag_shah: #CHIIR2023 folks - here's that paper (with open access) with @emilymbender I was referring to yesterday. You can see how a…

2023-03-20 21:58:56 RT @BritneyMuller: For those unfamiliar@mmitchell_ai is: Leading AI Researcher (focus on ethics, inclusion, diversity, fairness &

2023-03-20 15:57:18 @RadicalAIPod @merbroussard Can't wait to listen!!

2023-03-20 15:57:09 RT @RadicalAIPod: One of those "interviewing @merbroussard about her new amazing book in an hour but have to condense 10 million questions…

2023-03-20 15:48:50 RT @mer__edith: This is great and to the point (finally!) Tldr the problem is the surveillance business model, not the fact that one of th…

2023-03-20 15:47:27 RT @emilymbender: Citing a paper that's available through the @aclanthology by pointing to an arXiv version instead is at least the equival…

2023-03-20 15:46:43 RT @LeonDerczynski: This paper makes a tonne of odd claims about the future. I wonder if it is ever going to be reviewed (and survive), or…

2023-03-20 15:46:19 @aclanthology Meanwhile, Google Scholar pointing to arXiv versions first is like ... governments providing subsidies to oil companies.

2023-03-20 15:45:39 Citing a paper that's available through the @aclanthology by pointing to an arXiv version instead is at least the equivalent of putting something recyclable in the landfill, if not equivalent to littering. Small actions that contribute to the degradation of the environment.

2023-03-20 15:43:03 Shout out to the amazing @aclanthology which provides open access publishing for most #compling / #NLProc venues and to all the hardworking folks within ACL reviewing &

2023-03-20 15:40:52 Yes, this is both a subtweet of arXiv and of every time anyone cites an actually reviewed &

2023-03-20 15:39:12 Several things that can all be true at once: 1. Open access publishing is important 2. Peer review is not perfect 3. Community-based vetting of research is key 4. A system for by-passing such vetting muddies the scientific information ecosystem

2023-03-20 15:36:48 RT @STS_News: Thinking of putting together a reading list for understanding our current technology bubble and its apparent demise. Mine wou…

2023-03-20 15:23:19 RT @sharongoldman: New in The AI Beat: After the launch of GPT-4, the dangers of 'stochastic parrots' remain, said @timnitGebru @emilymbend…

2023-03-20 15:17:12 RT @VentureBeat: It was another epic week in generative AI, including the launch of GPT-4. But the dangers of 'Stochastic Parrots' remain,…

2023-03-20 14:05:02 His follow up tweet doesn't make it any better. What makes these "AI tasks"? Again, critical distance is required. https://t.co/J13YVg9aO8

2023-03-20 14:03:53 Philosopher deep in the "LLMs are magic" cult looks to curry favor with the self-styled magicians. (It's always super disappointing to see a fellow humanist lose all critical distance &

2023-03-20 13:08:18 When we published Stochastic Parrots (subtitle Can Language Models Be Too Big?) People asked how big is too big? Our answer: too big to document is too big to deploy. https://t.co/6hmmyDyVjW

2023-03-20 13:03:41 RT @mmitchell_ai: In Silicon Valley culture, the groupthink seems to be that it's impossible to keep track of the data a language model is…

2023-03-19 14:13:29 RT @chrismoranuk: A quick thread on AI and misinformation. Open AI’s own Safety Card says it “has the potential to cast doubt on the whole…

2023-03-18 20:24:56 RT @jordipc: ChatGPT y otros chatbots hacen cosas increíbles. Pero también pueden liarla mucho. Traerán problemas nuevos. Hay un grupo de…

2023-03-18 14:48:23 En Español, with thanks to @jordipc for reporting: https://t.co/2vX8wvgdEV https://t.co/4GCdRwBlzJ

2023-03-18 14:15:55 Look what arrived!! Really excited to read @merbroussard 's latest https://t.co/KdPhfbUHnn

2023-03-17 23:07:47 RT @asmelashteka: #StochasticParrotsDay was an amazing and insightful event. https://t.co/mkQrKZLHAA 1/n

2023-03-17 20:58:33 @mirabelle_jones @timnitGebru @safiyanoble @mmitchell_ai Thank you for compiling this!

2023-03-17 20:58:18 RT @mirabelle_jones: Absolute pleasure to attend Stochastic Parrots Day with some of my heroes @emilymbender @timnitGebru @safiyanoble @mmi…

2023-03-17 19:43:33 This was amazing -- huge thank you to @timnitGebru for being the driving force behind, and to all of the panelists for sharing their wisdom and all of the audience for joining us &

2023-03-17 19:16:11 RT @timnitGebru: 9)The participants who had great discussions and also created and input this wealth of resources! https://t.co/kKUfLqRqH5

2023-03-17 19:15:47 RT @timnitGebru: And that was a wrap for #StochasticParrotsDay. 1) Thank you so much to my listed co-authors @mcmillan_majora @emilymbend…

2023-03-17 19:12:50 RT @histoftech: “We can’t keep building technologies that collide so violently with our idea of what it means to be human.” —Nanjala Nyabo…

2023-03-17 19:00:54 RT @mmitchell_ai: Can we really become enlightened if we get answers on everything without producing any thoughts ourselves? -- Great point…

2023-03-17 19:00:32 RT @Carmen_NgKaMan: So grateful that @Nanjala1 brought up the need to talk AI across geographies! E.g. many African nations are affected by…

2023-03-17 18:57:02 RT @CopyrightLibn: Nanjala Nyabola (@Nanjala1) talking about AI futures &

2023-03-17 18:48:26 Sarah Andrew at #StochasticParrotsDay calling on all people building tech (esp. 'AI') to really feel the extent to which you are holding everyone's human rights in your hands ... and behave accordingly.

2023-03-17 16:55:52 RT @alexhanna: A reading list is being put together from the chat in #StochasticParrotsDay! A whole syllabus here. https://t.co/25NbKiZHaa

2023-03-17 03:12:59 RT @timnitGebru: "Emily M. Bender...tweeted that this secrecy did not come as a surprise to her. “They are willfully ignoring the most basi…

2023-03-17 02:43:56 RT @mmitchell_ai: Reminder that #StochasticParrotsDay is tomorrow! Come join me, @emilymbender, @timnitGebru, @mcmillan_majora and guests f…

2023-03-16 20:01:39 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan So I see the value in building solidarity as you describe --- but not with the folks who think they are actually building "intelligence". (And yes, I try to explain 'parrot' as in reference to the metaphorical sense of the verb meaning repeating without understanding.)

2023-03-16 19:43:49 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan So, again, I think it is valuable to work against normalizing the use of those terms --- because the way in which corporate interests and bizarre EA/longtermist fantasies are infecting this discourse should not be normalized.

2023-03-16 19:43:01 @CriticalAI @xriskology @timnitGebru @nitashatiku @danmcquillan And *most* "AGI" discourse is riddled with "citations to the future" and other pseudo-science --- and I don't see nearly enough distancing from that from those who might be doing serious scientific work under the rubric of "AI". >

2023-03-16 19:42:16 @CriticalAI @xriskology @timnitGebru @nitashatiku It takes 2 seconds to explain "pattern matching". Seems like a useful act of resistance. (Channeling @danmcquillan here.) >

2023-03-16 19:41:36 @CriticalAI @xriskology @timnitGebru @nitashatiku I think there *is* a lot lost rhetorically in behaving as if "ANI" were a reasonable term. Not least because of the ways in which the project of "AI" is bound up with eugenicist notions of "intelligence". >

2023-03-16 19:33:16 And lolsob @ @ilyasut taking every last opportunity to brag about "capabilities": “Things get complicated any time you reach a level of new capabilities.” Your trash heap of toxic garbage isn't "capable". It's just a lot of data and a lot of compute.

2023-03-16 19:31:27 @chloexiang @VICE @SashaMTL @rao2z @_willfalcon "@_willfalcon said that although it’s fair to want to prevent competitors from copying your model, OpenAI is following a Silicon Valley startup model, rather than one of academia, in which ethics matter." Stark implication: Ethics don't matter to SV. Good to have that out there

2023-03-16 19:29:42 @chloexiang @VICE @SashaMTL @rao2z @_willfalcon "It really bothers me that the human costs of this research (in terms of hours put in by human evaluators and annotators) as well as its environmental costs (in terms of the emissions generated by training these models) just get swept under the rug" - @SashaMTL >

2023-03-16 19:28:13 I appreciate this reporting from @chloexiang at @VICE on #OpenAI -- with quotes from @SashaMTL @rao2z @_willfalcon and others: https://t.co/gby4Bvy1eU >

2023-03-16 19:19:43 @GFuterman @interacciones So, like, don't ever attach random text synthesis machines to the nuclear command system? That's a very easy risk to prevent --- and deflating #AIhype is a key part of doing so.

2023-03-16 19:17:51 @CriticalAI @xriskology @timnitGebru @nitashatiku Huh? Why define "AGI" as "what may one day exist"? Why even use "ANI" or "AI" for pattern matching at scale? It's all misleading terminology and I see no value in ceding the ground that "AGI" (whatever people fantasize that to be) may be developed at some later date.

2023-03-16 19:14:18 RT @miss_eli: I wrote 'Lawyer Ex Machina #35: Happy Stochastic Parrots Day'. I really wanted to ignore the various GPTs for just one week,…

2023-03-16 19:01:43 Because somehow this isn't clear to many: There's a difference between opting to work for free (e.g. constructing evaluations for #OpenAI, providing labels to them) and having your work stolen (text or art included without consent in training data). https://t.co/qt2gFPHpo2

2023-03-16 18:28:34 @balloonleap Typos mean the tweet is authentic right?

2023-03-16 16:41:08 RT @mmitchell_ai: It will be a lot easier for OpenAI to declare they've solved AGI by hiding the details of their work.

2023-03-16 16:38:01 @Grady_Booch Thank you!

2023-03-16 15:58:48 RT @merbroussard: Hey @themarkup, I’m reading about the UK gov’t ban on TikTok. I’m curious: what kind of data could one get from the TikTo…

2023-03-16 13:38:46 I rather suspect that if we ever get that info, we will see that it is toxic trash. But in the meantime, without the info, we should just assume that it is. To do otherwise is to be credulous, to serve corporate interests, and to set terrible precedent.

2023-03-15 22:45:41 Just in case anyone isn't tracking: this is the clown show that $MSFT chose to be in bed with. https://t.co/vzES2DmUmr

2023-03-15 21:51:38 RT @xriskology: Great thread, worth reading.

2023-03-15 21:16:07 @SashaMTL Yeah...

2023-03-15 20:12:57 @andersonbcdefg Not without a regulator setting up the parameters they should be testing for, before releasing anything. Again -- at their expense.

2023-03-15 20:11:14 @andersonbcdefg We should demand that the companies releasing the oil spills (I mean models) do that at their own expense before releasing them.

2023-03-15 20:09:29 Folks, I encourage you to not work for @OpenAI for free: Don't do their testing Don't do their PR Don't provide them training data https://t.co/xF9eIDo4jT

2023-03-15 20:07:56 Oh look, @openAI wants you to test their "AI" systems for free. (Oh, and to sweeten the deal, they'll have you compete to earn GPT-4 access.) https://t.co/HqmURxF9dT

2023-03-15 20:07:16 @GFuterman @sama Tell me how, exactly, MSFT+OpenAI are fighting against unaccountable corporate power?

2023-03-15 19:53:02 @neilturkewitz @schock @schock is asking great questions as always, but I have a policy to not waste any time reading synthetic text.

2023-03-15 19:52:17 But given all the xrisk rhetoric (and @sama 's blogpost from Feb) it may also be possible that at least some of the authors on this thing actually believe their own hype and really think they are making choices about "safety".

2023-03-15 19:50:52 A cynical take is that they realize that without info about data, model architecture &

2023-03-15 19:46:37 ... Going forward we plan to refine these methods and register performance predictions across various capabilities before large model training begins, and we hope this becomes a common goal in the field." Trying to position themselves as champions of the science here &

2023-03-15 19:46:06 Also LOL-worthy, against the backdrop of utter lack of transparency was "We believe that accurately predicting future capabilities is important for safety. >

2023-03-15 19:44:42 For more on missing construct validity and how it undermines claims of 'general' 'AI' capabilities, see: https://t.co/kR4ZA1k7uz >

2023-03-15 19:44:05 I also lol'ed at "GPT-4 was evaluated on a variety of exams originally designed for humans": They seem to think this is a point of pride, but it's actually a scientific failure. No one has established the construct validity of these "exams" vis a vis language models. >

2023-03-15 19:42:37 But they do make sure to spend a page and a half talking about how they vewwy carefuwwy tested to make sure that it doesn't have "emergent properties" that would let it "create and act on long-term plans" (sec 2.9). >

2023-03-15 19:40:42 Things they aren't telling us: 1) What data it's trained on 2) What the carbon footprint was 3) Architecture 4) Training method >

2023-03-15 19:39:40 Okay, taking a few moments to read (some of) the #gpt4 paper. It's laughable the extent to which the authors are writing from deep down inside their xrisk/longtermist/"AI safety" rabbit hole. >

2023-03-15 17:35:58 RT @timnitGebru: One of the ppl who filtered out outputs of Open AI models told me they wouldn't "wish it on my worst enemy." https://t.co/

2023-03-15 16:16:32 RT @sabagl: I'm excited to dial in to this tomorrow - the conversations between these AI researchers and experts could not be more timely a…

2023-03-15 00:09:03 @annargrs @evanmiltenburg @complingy @LeonDerczynski Exactly: What did we do, how did it go, what did we learn from it.

2023-03-14 21:01:05 RT @mark_riedl: The timing of this makes things interesting. See you there. https://t.co/xL4QGSoQ4M

2023-03-14 20:52:11 Feeling exhausted by the #AIhype press cycles? Finding yourself hiding from GPT-4 discourse? Longing for a dose of reality? Join us on Friday for Stochastic Parrots Day: https://t.co/x4auSSDctW

2023-03-14 20:39:50 It was an easy prediction to make, given @OpenAI's track record for sure. Still, I could wish that it wasn't so thoroughly validated. https://t.co/OPbWQfgHhS

2023-03-14 20:38:56 That is, not beyond the obvious first pass questions of: Is this a use case where synthetic text is even appropriate? Very few use cases are." >

2023-03-14 20:38:46 Without clear and thorough documentation of what is in the dataset and the properties of the trained model, we are not positioned to understand its biases and other possible negative effects, to work on how to mitigate them, or fit between model and use case. >

2023-03-14 20:38:33 Some that would be appropriate to the GPT models include Data Statements for Natural Language Processing (Bender &

2023-03-14 20:38:11 Since at least 2017 there have been multiple proposals for how to do this documentation, each accompanied by arguments for its importance. >

2023-03-14 20:37:49 "One thing that is top of mind for me ahead of the release of GPT-4 is OpenAI's abysmal track record in providing documentation of their models and the datasets they are trained on. >

2023-03-14 20:37:09 A journalist asked me to comment on the release of GPT-4 a few days ago. I generally don't like commenting on what I haven't seen, but here is what I said: #DataDocumentation #AIhype >

2023-03-14 18:58:44 RT @merbroussard: It’s launch time! Happy publication day to my latest book, MORE THAN A GLITCH: CONFRONTING RACE, GENDER, AND ABILITY BIAS…

2023-03-14 17:42:16 @danielzklein @SemanticScholar It was @qpheevr who pointed that out: https://t.co/sxihlRVLs6

2023-03-14 17:41:24 Again: https://t.co/Jexm27DXlK

2023-03-14 17:40:20 Yeah, not surprised in the least. They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity. #OpenAI #DataDocumentation https://t.co/PuLHnPYE0l

2023-03-14 17:39:01 RT @benmschmidt: I think we can call it shut on 'Open' AI: the 98 page paper introducing GPT-4 proudly declares that they're disclosing *no…

2023-03-14 17:37:41 @danielzklein @SemanticScholar It's more that I wondered where they could have come from --- I had probably just been assuming they were the authors' own abstracts, until I perceived the "tldr" tag.

2023-03-14 16:34:36 RT @rcalo: Excited to welcome @aylin_cim as a faculty associate of the @TechPolicyLab. Aylin is a field leader in AI bias. https://t.co/iai

2023-03-14 16:34:27 @SemanticScholar @ai2_allennlp Synthetic text, even synthetic summaries, carry risks - and putting it out into the world unlabeled exacerbates those risks.

2023-03-14 16:33:53 @SemanticScholar I call on @SemanticScholar and @ai2_allennlp to lead by example with transparency here and flag these as "automatic TLDR" in the email -- because I doubt most people would know to check. >

2023-03-14 16:32:39 I noticed that today's alert from @SemanticScholar included a "TLDR" for each paper. Suspicious that that might be automatically produced, I went and checked. And sure enough, it is: https://t.co/Q54p70KvMB >

2023-03-14 16:30:49 @LeonDerczynski @evanmiltenburg @tellarin @annargrs @ryandcotterell (With room for notes on the proceedings of COLING?)

2023-03-14 15:14:30 @Dr_Atoosa @davidchalmers42 @sama Featuring non-information, you mean. Why are you wasting people's time suggesting that they read synthetic text? Why are you platforming #ChatGPT 's non-information?

2023-03-14 14:14:15 @evanmiltenburg @annargrs @LeonDerczynski Probably not, but also: we wrote that thinking that it would also be useful to students or anyone else with limited visibility into the internals of the review process...

2023-03-14 14:07:18 @evanmiltenburg @annargrs @LeonDerczynski Thanks, Emiel! We tried to publish that as a journal paper, but it was rejected (the editor thought the audience for it would be too narrow ¯\_(ツ)_/¯ ) so we went with the tech report instead. It would be great if such things could be in the Anthology!

2023-03-14 13:04:55 RT @emilymbender: MSFT lays off its responsible AI team The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton…

2023-03-14 02:44:42 @mihaela_v @ZoeSchiffer @CaseyNewton

2023-03-14 02:35:14 @mihaela_v @ZoeSchiffer @CaseyNewton My apologies: the linked article is clearer.

2023-03-14 01:07:03 At the very least, we should be working to educate those around us not to fall for the hype---to never accept "AI" medical advice, legal advice, psychotherapy, etc.

2023-03-14 01:06:15 I call on everyone who is close to this tech: we have a job to do here. The techcos where the $, data and power have accumulated are abandoning even the pretext of "responsible" development, in a race to the bottom. >

2023-03-14 01:03:05 And they will tell us: You can't possibly regulate effectively anyway, because the tech is moving too fast. But (channeling @rcalo here): The point of regulation isn't to micromanage specific technologies but rather to establish and protect rights. And those are enduring. >

2023-03-12 15:49:05 RT @TorrentTiago: @complingy @HadasKotek @linguistMasoud Adding @GlobalFrameNet https://t.co/HnK14ipyIx

2023-03-12 15:48:58 RT @complingy: @linguistMasoud If anyone is looking for such connections beyond Twitter, computational linguists can be found in communitie…

2023-03-12 15:48:17 @eyujis @lizweil Yes, we discuss both Searle's thought experiment and Harnad's work in the octopus paper: https://t.co/jpjJcfR6qh

2023-03-11 21:11:08 RT @DAIRInstitute: Come to our Stochastic Parrots Day event on March 18 to hear more from Steven Zapata and many others. You can sign up he…

2023-03-11 20:04:37 @JeffDean @SashaMTL Are you talking about the Stochastic Parrots authors here? Because that's a very strange way to say "I told them to retract the paper or get fired".

2023-03-11 14:27:26 RT @emilymbender: Since we published Stochastic Parrots two years ago, the issues discussed in it have only become more urgent and salient.…

2023-03-11 14:27:02 RT @emilymbender: Linguistics as a field has a lot to contribute to better understanding what large language models can and can't do and ye…

2023-03-10 16:45:27 RT @lizweil: good morning, word nerds. @emilymbender has some thoughts on how to read the Chomsky op-ed. tl

2023-03-10 16:04:59 @jeffadoctor @Abebab Also, @shoshanazuboff makes a detailed analogy between the data grabs of surveillance capitalism and settler colonialism, especially as practiced in the 16th c in her book on surveillance capitalism.

2023-03-10 16:03:52 @jeffadoctor This paper by @Abebab might fit what you're looking for: https://t.co/Fc91OtowsZ

2023-03-10 14:36:13 So, read this, not that: https://t.co/qgWwqhWmpc And thanks again @lizweil for your reporting!

2023-03-10 14:35:39 What matters about language in the context of this tech is that language and meaning are relational, that communication is a joint activity, and that systems set up to mimic the form of language can provide the illusion that they understand, know things, are reasoning. >

2023-03-10 14:27:05 (And the whole debate about whether or not humans have an innate universal grammar is just completely beside the point here.) >

2023-03-10 14:26:48 The ability to render grammaticality judgments (and based on how much data) really isn't the issue. Corporations aren't out there suggesting that we use #ChatGPT to 'disrupt' the industry of judging grammaticality. >

2023-03-10 14:26:18 So it's a real bummer when the world's most famous linguist writes an op-ed in the NYT* and gets it largely wrong. https://t.co/aFyLJvRl7e (*NYT famous for publishing transphobia &

2023-03-10 14:25:50 Linguistics as a field has a lot to contribute to better understanding what large language models can and can't do and yet many don't think to turn to linguists (or don't even really know what linguists do) when trying to evaluate claims about this technology. >

2023-03-10 14:02:50 Since we published Stochastic Parrots two years ago, the issues discussed in it have only become more urgent and salient. Join me, @timnitGebru @mmitchell_ai @mcmillan_majora and an esteemed group of panelists for discussion and reflection, March 17 2023. https://t.co/YuBGS54oWV https://t.co/5EpPJdEZJc

2023-03-10 03:06:49 RT @UCLA_CR_DJ: Stochastic Parrots Day Mar 17 9AM - 12PM PDT w/ @safiyanoble @ 9AM PDT cc: @UCLA

2023-03-09 22:01:06 RT @timnitGebru: This was section 4.2 of stochastic parrots called "Static Data/Changing Social Views" and the analysis was done by @blahti…

2023-03-09 17:46:10 RT @doctorow: This was a dig at the #StochasticParrots paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire…

2023-03-09 17:39:57 RT @doctorow: Gebru's co-author on the Parrots paper was @emilymbender, a computational linguistics specialist at UW, who is one of the bes…

2023-03-09 15:10:22 RT @LucianaBenotti: We took the opportunity of this panel to promote #NAACL2024 which will be in Mexico city in June 2024. I will work so t…

2023-03-09 14:01:49 @philosophybites @TheNewEuropean 3) The paper is jointly first authored, and should properly be cited as Bender, Gebru et al.

2023-03-09 14:01:13 @philosophybites @TheNewEuropean Hey @philosophybites -- 3 corrections: 1) I work at the University of Washington, not Washington University 2) Only one of my co-authors also at UW. The others were at Google and those who refused to take their names off of the paper were famously fired for it. >

2023-03-09 13:40:28 @cocoweixu Congratulations

2023-03-09 13:27:37 RT @rajiinio: Tech policy proposals that depend heavily on the voluntary cooperation of the tech companies being regulated are so frustrati…

2023-03-09 01:48:41 @cmiciek That's what I meant by Washington University.

2023-03-09 00:47:29 @marylgray Well, I can send you a few other 1980s earworms .... Greatest American Hero, FAME, Cheers

2023-03-09 00:24:17 No shade to Washington University, but I don't work there, and I'm really tired of being described in the press as if I do.

2023-03-09 00:23:44 Hey World: The University of Washington, Washington University, Washington State University, and George Washington University are all DIFFERENT institutions. Please make a note of it.

2023-03-08 23:41:32 @marylgray I think what's going on there is that it's a cover term for the text synthesis and image synthesis systems.

2023-03-08 15:06:09 @elazar_g 1) The scale of these models prohibits in-house/on-device use. 2) Even if it didn't the business model does. 3) The "AI" marketing provides the temptation to divulge data, and that's a problem. But thank you for so kindly sharing your insight.

2023-03-08 14:36:34 RT @mmitchell_ai: Come to this!! Tickets here: https://t.co/kHf69wftFO @timnitGebru explains a LOT MORE of what we're going to do: https:/…

2023-03-08 14:36:04 RT @kenarchersf: Where statisticians see noise, CS people see a god to be worshipped.

2023-03-08 14:35:29 So-called generative "AI" is just text manipulation ... but that also means that whatever data you send into it can be folded into the model for future remixes. And then spit back out to a person who can understand it as information. https://t.co/RsOaJswpZO https://t.co/KDNhijGRnd

2023-03-08 14:16:51 "Emily M. Bender, who, in addition to her work as an AI ornithologist, teaches as a professor of linguist at the University of Washington" https://t.co/mxk9yuBacj

2023-03-08 13:41:38 RT @AdamCSchembri: Does anyone know if anyone has written guidelines for modality-inclusive language in linguistics? How to use more inclu…

2023-03-07 21:12:59 @haydenfield Followed by: “There seems to be, I would say, a surprising amount of investment in this idea…and a surprising eagerness to deploy it, especially in the search context, apparently without doing the testing that would show it’s fundamentally flawed.”

2023-03-07 21:12:18 “A lot of the coverage talks about them as not yet ready or still underdeveloped—something that suggests that this is a path to something that would work well, and I don’t think it is,” me to @haydenfield in this piece for Tech Brew: https://t.co/KuxSEHPqXX

2023-03-07 18:14:03 RT @lizweil: Humans of earth: time to put on your party hats &

2023-03-07 17:55:50 RT @timnitGebru: Changed the number of tickets for our Stochastic Parrots day event. It was capped at 1k before because zoom webinar doesn'…

2023-03-07 16:27:03 @santiviquez Try again now!

2023-03-07 15:23:51 RT @lizweil: &

2023-03-07 14:03:03 RT @LeonDerczynski: It was a different world in NLP when the paper was written - only two years ago! https://t.co/Nqcq4NLf9A

2023-03-07 14:02:48 RT @emilymbender: I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice…

2023-03-07 14:02:33 RT @emilymbender: This was really fun to do --- @cfiesler is so cool (and so are the @RadicalAIPod hosts :). Thank you again for the opport…

2023-03-07 14:02:25 RT @timnitGebru: A lot has happened since we wrote the paper 2 years ago that got @mmitchell_ai &

2023-03-07 14:02:22 Join us for Stochastic Parrots Day on March 17! https://t.co/YuBGS53R7n https://t.co/2oNGfgygXe

2023-03-06 23:08:24 This was really fun to do --- @cfiesler is so cool (and so are the @RadicalAIPod hosts :). Thank you again for the opportunity! https://t.co/qmFG8htf3y

2023-03-06 23:04:38 RT @RadicalAIPod: wow y'all! last week's episode with @emilymbender and @cfiesler about the limitations of #ChatGPT is already one of our m…

2023-03-06 22:12:31 @geomblog Yeah -- so much of the way we talk about algorithms in general (even quite aside from AI) borrows terms that are more appropriate to human cognition. It takes effort to break this habit!

2023-03-06 22:11:30 @MaryJun71373119 No specific instance prompted this thread, but for a sampling, see the artifacts @alexhanna and I take apart in #MAIHT3k https://t.co/yZs162sLmd

2023-03-06 22:09:16 Meanwhile: Getting off the hype train is just the first step. Once you've done that and dusted yourself off, it's time to ask: how can you help put the brakes on it?

2023-03-06 22:07:56 If you feel like it wouldn't be interesting without that window dressing, it's time to take a good hard look at the scientific validity of what you are doing, for sure. >

2023-03-06 22:07:28 Likewise, describing your own work in terms of unmotivated and aspirational analogies to human cognitive abilities is also a choice. >

2023-03-06 22:06:35 I'd like to point out: Serious AI researchers can get off the hype train at any point. It might not have been your choice that your field was invaded by the Altmans of the world, but sitting by quietly while they spew nonsense is a choice. >

2023-03-06 19:18:26 @tallinzen @mixedlinguist @mmemily17 One of the main functions of my FAQ is that it lets me give myself permission to just not reply to certain things...

2023-03-06 18:16:15 RT @DAIRInstitute: Here's the agenda of the event with @mcmillan_majora @mmitchell_ai @emilymbender @timnitGebru @mark_riedl @safiyanoble @…

2023-03-06 16:23:09 RT @myrthereuver: New blog post! My highlights of the 2023 HPLT winter school on Large Language Models, including talks by @emilymbende…

2023-03-06 15:20:31 RT @MiaD: @emilymbender @60Minutes @timnitGebru Didn’t realize this wasn’t part of the main segment. Looks like @60Minutes is doing the bar…

2023-03-06 14:50:21 RT @emilymbender: MSFT and OpenAI (and Google with Bard) are doing the equivalent of an oil spill into our information ecosystem. And then…

2023-03-06 03:01:13 RT @bobehayes: A good, long read about @emilymbender and her views on #AIHype, #ArtificialIntelligence and more >

2023-03-06 01:39:49 RT @parismarx: now with generative AI, there’s @timnitGebru, @emilymbender, @danmcquillan, just to name a few, and i’m sure more in the pro…

2023-03-05 22:45:10 RT @emilymbender: @techwontsaveus @timnitGebru Listening to @timnitGebru reflect on how surprisingly fast the things we warned about in the…

2023-03-05 22:45:05 @techwontsaveus @timnitGebru Like, hey, what if the narcissistic billionaires with savior complexes focused on actually saving the planet, instead of building machines they imagine to be gods?

2023-03-05 22:43:35 @techwontsaveus @timnitGebru Listening to @timnitGebru reflect on how surprisingly fast the things we warned about in the Stochastic Parrots paper came to pass made me wonder: What if instead $Billions were being poured into clean energy (or making the grid alternative energy ready or carbon capture or...)?

2023-03-05 22:42:00 I really enjoyed this episode of @techwontsaveus with @timnitGebru https://t.co/RXrcZry2KM >

2023-03-05 21:42:29 RT @schock: "You don’t need a machine to predict what the FTC might do when those claims are unsupported."

2023-03-05 21:34:59 @ShumingHu The blog post is <

2023-03-05 21:00:46 @DavidJPoole @aaas @AAASmeetings I see -- my apologies for making assumptions. (Your tweet sounded to me like the kind of joke a hearing person would make at deaf people's expense.)

2023-03-05 20:46:29 @DavidJPoole @aaas @AAASmeetings Please delete this tweet. Your "joke" turns on the idea that not being able to hear means not being able to attend to what people are telling you, which is denigrating to Deaf people.

2023-03-05 20:30:56 @StenoMatt @aaas @AAASmeetings @mezmalz Please read the quoted thread to see why this suggestion is completely inappropriate.

2023-03-05 20:09:04 @aaas @AAASmeetings Accessibility isn't impossible, it just requires planning and dedication of resources. I thank @mezmalz for raising this issue and call on @AAASmeetings to get their act together. The time to start planning for accessibility for the 2024 meeting is NOW.

2023-03-05 20:07:48 When D/deaf scientists explained to @AAAS @AAASmeetings what they needed in terms of how to work with the interpreters to make the event a success--so we all could benefit from their science--@AAASmeetings should have listened. >

2023-03-05 20:07:14 I am super disappointed in @AAASmeetings here -- isn't @AAAS at its core about science communication? If we care about communication, then we prioritize what's needed to make it successful. >

2023-03-05 15:08:26 @cfiesler I like how he thinks LLMs are "generic NLP models". As if LLMs are all there is to NLP. Clearly a well-versed expert.

2023-03-05 14:03:54 RT @emilymbender: Finally had a moment to read this statement from the FTC and it is https://t.co/DVBEJLcv6C A few choice quotes:

2023-03-05 14:03:30 @cfiesler Random dudes: "enlightening" the world on every platform.

2023-03-05 13:40:58 RT @mezmalz: #AAASmtg I need to say something. My experience was lousy y’all dropped the ball on deaf people. None of us connected with…

2023-03-05 13:40:34 RT @mezmalz: Our work today- we were trying to tell people that deaf children who do not receive early sign language exposure struggle with…

2023-02-28 05:29:25 @EricHallahan So, your story is that you looked at my profile and noted my gender, but missed the part where it says "Professor"?

2023-02-28 05:24:14 @EricHallahan But every last bit of your (uninvited) engagement with me has been concern trolling at best, and weirdly disrespectful. So, I guess it's fitting that you also signal disrespect in this way.

2023-02-28 05:23:13 @EricHallahan It is not a sign of respect to go out of your way to mention my gender. It is a sign of flagrant DISrespect to use an honorific where none is expected and pass right over the ones that a) I've earned and b) reflect my expertise.

2023-02-28 05:22:30 @EricHallahan I indicate my pronouns in my profile so that anyone who is referring to me in the third person doesn't have to guess what they are. I appreciate it when other people do the same.

2023-02-28 05:06:59 So, for those who don't know, using 'Ms' when 'Dr' or 'Prof' would be applicable is not respectful. Quite the opposite really. And that goes doubly when it's in a context where you wouldn't normally use an honorific at all (like prepended to a Twitter handle).

2023-02-28 01:12:24 RT @rharang: You know what makes data really secure, and all but impossible to lose via a breach? Not collecting it in the first place.

2023-02-27 23:42:03 Who's ready for some more Mystery AI Hype Theater 3000? This Friday March 3, 9:30am PT, @alexhanna and I will be joined by special guest @KendraSerra who will share their expertise and help us deflate #AIhype in the legal domain. https://t.co/VF7TD6sYfE #MAIHT3k #MathyMath #LLM

2023-02-27 22:51:17 @CriticalAI See next tweet (after the one you QT'd).

2023-02-27 22:47:44 Which is a very weird way of deciding who to listen to. But also: I don't do predictions, but I have stated some warnings (along with my co-authors) and been dismayed to see them go unheeded.

2023-02-27 22:46:26 And then there are the people who want to know my "credentials" in terms of how many predictions I've made about AI in the last 5 years that have come true. >

2023-02-27 22:45:28 Computational linguistics is and will be just fine, though it's worth working to hold space for visions of our science that see it as something other than a "component of AI" (and I'm working on that, too). >

2023-02-27 22:44:30 I am angry --- but not about that. I'm angry at our system that allows tech bros to concentrate power and create tech that is exploitative and harmful and somehow claim they are doing it for the benefit of humanity. >

2023-02-27 22:43:32 Another common pattern is people thinking I'm "angry" because my field (computational linguistics) has been made obsolete by LLMs. >

2023-02-27 22:42:44 OpenAI isn't listening and won't, no matter what I say. But the rest of the world might, and I think it's worth giving OpenAI and Sam Altman exactly as much derision as they are due, to pop this hype bubble. >

2023-02-27 22:41:55 A variant of this seems to be the assumption that I'm trying to get OpenAI to actually change their ways and that I'd be more likely to succeed if I just talked to them more nicely. >

2023-02-27 22:41:22 Some folks are very upset with my tone and really feel like I should be more gentle with the poor poor billionaire. ¯\_(ツ)_/¯ >

2023-02-27 22:40:14 The reactions to this thread have been an interesting mix --- mostly folks are in agreement and supportive. However, there are a few patterns in the negative responses that I think are worth summarizing: https://t.co/Sbug4eF1js

2023-02-27 18:59:46 RT @eaclmeeting: You want tutorials at #eacl2023 ? We have them! Check out the list of six accepted tutorials online: https://t.co/y7yIb3

2023-02-27 17:08:09 To add to this: someone might well want their work to be *discoverable* via search and yet not included in training data sets. So, like, just using the existing info in robots.txt is not sufficient. https://t.co/dL15USoOi9

2023-02-27 16:50:04 And these folks who invite me to be a "Co-Organizer" of their International Conference on Mechatronics and Smart Systems https://t.co/pgEZJrP356

2023-02-27 16:49:01 Here's another, who have invited me to be a keynote speaker at their "Global Summit on Chemical Engineering and Catalysis" https://t.co/eSJI2PjOYj

2023-02-20 13:57:50 RT @emilymbender: Heard the phrase "stochastic parrots " and curious what that's about? Familiar with our paper and interested in developm…

2023-02-20 13:57:32 RT @emilymbender: "The bots will offer us easy answers. We just have to remember that's not what we should be asking for." Sound advice f…

2023-02-20 13:57:22 RT @emilymbender: You wouldn't take medical advice from this and you shouldn't take it from the tech he's peddling either. Beyond that: Ye…

2023-02-20 04:48:48 RT @alexhanna: A vision of a modern overpaid university administrator: a "prompt engineer" -- or rather, a parrot selector -- who scans ove…

2023-02-20 04:36:28 Literally the second tweet in Altman's thread (i.e. the one after the one Manning retweeted): https://t.co/GXC9lbI1rb

2023-02-20 04:35:57 When you're Associate Director of something called "Human-Centered Artificial Intelligence" but the $$ all comes from Silicon Valley so you feel compelled to retweet the clown suggesting that the poors should have LLM-generated medical advice instead of healthcare. https://t.co/vuVo2OBKrp

2023-02-20 04:11:38 @Etyma1010 @BertCappelle @RemivanTrijp @Linguist_UR @hilpert_martin Do you count undergrads who studied with him?

2023-02-20 01:07:37 RT @cheryllynneaton: We can't even get automatic soap dispensers to recognize people with dark skin. I damn sure don't want an AI medical a…

2023-02-20 00:30:12 "The bots will offer us easy answers. We just have to remember that's not what we should be asking for." Sound advice from @jetjocko https://t.co/eFkngtxMST

2023-02-19 22:29:53 RT @shengokai: I mean it’s not like we’re going on over two decades of people pointing out issues with these very applications. In fact, me…

2023-02-19 20:43:19 @KHabermas It comes from eugenics, in fact. Effective altruism/longtermism is a eugenicist cult. And they're the ones funding this.

2023-02-19 20:36:45 And to be very clear, @percyliang this is on your head, and those of anyone else who treats the US medical licensing exam as a "benchmark" to "SOTA", too. https://t.co/enRuxBBGCD

2023-02-19 20:36:14 The thought that it could somehow be seen as beneficial --- that this is somehow taking care of people who can't afford care --- is so offensive I can't even find the words. Tech solutionism indeed.

2023-02-19 20:34:27 You wouldn't take medical advice from this and you shouldn't take it from the tech he's peddling either. Beyond that: Yes, the US healthcare system has enormous inequity problems. So we should be reforming it so that healthcare is treated as the basic human right that it is. https://t.co/GXC9lbI1rb

2023-02-19 19:54:17 RT @HeidyKhlaaf: This is the harm in publishing "scientific" papers claiming ChatGPT "passed" a medical exam. It actually didn't and it had…

2023-02-19 19:54:05 RT @HeidyKhlaaf: The mental gymnastics to justify using AI for a high-risk application like medical care by pointing to people who can't af…

2023-02-19 14:20:57 Heard the phrase "stochastic parrots" and curious what that's about? Familiar with our paper and interested in developments in the past two years? Join me, @timnitGebru, @mmitchell_ai and @mcmillan_majora + guests for Stochastic Parrots Day, March 17: https://t.co/YuBGS54oWV

2023-02-19 14:11:18 RT @emilymbender: *sigh* WaPo generally has better tech coverage than the famous-for-transphobia NYT, but they too decided to publish screen…

2023-02-19 02:56:34 @jscottwagner @AmandaAskell No algorithmic component. Her QT brought my tweet to the attention of the dregs of Twitter. Some of them did also reply to her tweet, btw. Not sure what your point is?

2023-02-19 02:04:52 @jscottwagner @AmandaAskell I don't need that explained to me thanks. Also, the point isn't so much their behavior but the fact that Amanda's quote tweet brought it to my mentions.

2023-02-18 23:18:03 @ajaxsinger https://t.co/5MMNuWhj6H

2023-02-18 23:17:41 I would have hoped that by now reporters would have learned not to be impressed with the ersatz fluency of large language models. But it seems like each time they get a bit more fluent the reaction is "Okay, NOW let's be impressed." https://t.co/0Xc7WVwBKi

2023-02-18 23:16:47 Bing isn't the sort of thing that can be interviewed any more than a Magic 8-Ball is. >

2023-02-18 23:14:28 *sigh* WaPo generally has better tech coverage than the famous-for-transphobia NYT, but they too decided to publish screenfuls of synthetic (i.e. fake) text. https://t.co/qGdI0DBG1D >

2023-02-18 23:13:22 @ajaxsinger The main problem here is that the Washington Post thought it was newsworthy to print screens and screens of synthetic text.

2023-02-18 22:40:50 @ajaxsinger Definitely not the latter, but I'll have a look.

2023-02-18 22:31:48 @ajaxsinger From the headline, yeah. I'll have to take a look...

2023-02-18 20:01:04 @batsybabs @alexhanna Yes! It takes a while, but we do get the recordings up eventually. You can find eps 1-7 here: https://t.co/yZs162sLmd

2023-02-18 16:01:15 RT @emilymbender: @AmandaAskell If you're all about "making the world a better place", Amanda, I'd urge you to first consider why it is tha…

2023-02-18 15:57:44 @AmandaAskell If you're all about "making the world a better place", Amanda, I'd urge you to first consider why it is that your tweets draw approval and attention from this crowd.

2023-02-18 15:57:13 More replies courtesy of @AmandaAskell's absolutely lovely followers. https://t.co/f5mrmZLtej

2023-02-18 15:54:03 I mostly have avoided (so far) the worst kinds of Twitter harassment, but yesterday @AmandaAskell quote tweeted me and then this happened to my mentions... (continued) https://t.co/qtQBPHPSEV

2023-02-18 06:56:06 RT @tomg_: "Fly your new airplane model for 50,000 hours with plenty of passengers on board, and you'll see whether it crash…

2023-02-18 06:55:53 RT @timnitGebru: Mind you this is the CEO of a company adverted as doing “AI safety” funded by 600 MILLION dollars of stolen money by Sam B…

2023-02-18 00:02:24 RT @mmitchell_ai: HEY GUESS WHAT, I HAVE A COVER STORY IN TIME!! jk I don't, I'm not a journalist. But @andrewrchow &

2023-02-17 19:24:38 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Thank you.

2023-02-17 19:15:14 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Finally: Just why? What is the benefit of having it on another site? It's openly available already. If you're worried about discoverability, post a link. Not the document.

2023-02-17 19:14:47 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru If someone accesses it from doccloud, they don't have the full context: that this is a paper that was published in an ACM context. Furthermore: If the paper were to change (not planned), they would be out of sync. >

2023-02-17 19:06:09 @cfarivar @nitashatiku @mmitchell_ai @timnitGebru Why in the world would you do that? The paper is open access and people should get it from its actual home in the ACM Digital Library.

2023-02-17 17:27:07 Starting in just minutes! https://t.co/jFkvPWPibH

2023-02-17 14:57:16 Join us in just a couple of hours! #MAIHT3k #MathyMath #AIHype #ChatGPT #NLProc https://t.co/jFkvPWPibH

2023-02-17 13:58:48 RT @emilymbender: The @nytimes, in addition to famously printing lots of transphobic non-sense (see the brilliant call-out at https://t.co/

2023-02-17 02:46:18 RT @_Zeets: Emily Bender already wrote extensively about this nonsense and to urge to be impressed by this technology https://t.co/VbP9WwUL

2023-02-16 23:09:13 RT @Muna_Mire: We have surpassed 1000 NYT contributor signatories. Yesterday, the Times responded to our letter by erroneously identifying…

2023-02-16 22:58:02 Tomorrow! https://t.co/jFkvPWPibH

2023-02-16 22:31:36 Meanwhile, here is some actually good coverage about the current generation of chatbots, from @kharijohnson https://t.co/4UjCFdC87i

2023-02-16 22:26:02 @nytimes @kevinroose In sum, reporting on so-called AI continues in the NYTimes (famous for publishing transphobic trash) to be trash. And you know what transphobic trash and synthetic text have in common? No one should waste their time reading either.

2023-02-16 22:24:44 @nytimes @kevinroose And let's take a moment to observe the irony that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly "identifies". >

2023-02-16 22:23:11 @nytimes @kevinroose That paragraph gets worse, though. It doesn't have any desires, secret or otherwise. It doesn't have thoughts. It doesn't "identify" as anything. And this passes as *journalism* at the NYTimes. >

2023-02-16 22:21:43 @nytimes @kevinroose It didn't. It's a computer program. This is as absurd as saying: "On Tuesday night, my calculator played math games with me for two hours." >

2023-02-16 22:21:11 @nytimes @kevinroose And then here: "I had a long conversation with the chatbot" frames this as though the chatbot was somehow engaged and interested in "conversing" with @kevinroose so much so that it stuck with him through a long conversation. >

2023-02-16 22:19:04 @nytimes @kevinroose First, the headline. No, BingGPT doesn't have feelings. It follows that they can't be revealed. But notice how the claim that it does is buried in a presupposition: the head asserts that the feelings are revealed, but presupposes that they exist. >

2023-02-16 22:17:11 @nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out. >

2023-02-16 22:16:21 @nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯ >

2023-02-16 22:15:04 The @nytimes, in addition to famously printing lots of transphobic non-sense (see the brilliant call-out at https://t.co/FpDkGjRH4W), also decided to print an enormous collection of synthetic (i.e. fake) text today. >

2023-02-16 22:00:02 @doctorow Thank you for pointing folks to our paper. Perhaps even more closely related to your thread is the paper (and associated media coverage &

2023-02-16 20:58:57 The journalist had the gall to say "I would love to be in touch for future segments though if you may be interested." No thank you, not after this experience.

2023-02-16 20:58:10 (In today's instance, I heard nothing until I sent a query asking what was up... after having rearranged various things and made sure to be ready &

2023-02-16 20:57:27 But I am not okay with being jerked around like this. If I MAKE TIME for you, then at the very least you should respect my time by honoring the request that you made or COMMUNICATING asap if it changes.

2023-02-16 20:56:43 Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.

2023-02-16 20:55:42 If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.

2023-02-16 20:54:29 Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool. BUT:

2023-02-16 05:01:29 For anyone who is keeping track, @KirkDBorne is a credulous hack, spreading misinformation written by a modern phrenologist. https://t.co/R3Cg77ALVp

2023-02-16 04:59:42 @alexhanna Does it for the clicks and yet 380k+ accounts are credulous enough to promote it. It's pernicious.

2023-02-16 04:57:03 Next episode is in two days!! https://t.co/jFkvPWPibH

2023-02-16 04:56:15 RT @alexhanna: While I was at Google, from my former tech lead: to keep quiet and get involved in more technical projects. Turns out tha…

2023-02-16 04:55:36 Mixed in with the despair and frustration is also some pleasure/relief at the idea that through Mystery AI Hype Theater 3000 I have an outlet in which I can give this work (and the tweeting about it) the derision it deserves, together with @alexhanna . >

2023-02-16 04:54:40 NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago. >

2023-02-16 04:53:52 That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it. >

2023-02-16 04:51:30 TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models". #AIHype #MathyMath https://t.co/oz6PAikP5R

2023-02-16 04:18:17 @Leading_India No. It was a deliberate choice not to name the podcast or guest in this thread. Did you really think that was just a mistake?

2023-02-16 03:29:09 RT @LucianaBenotti: They payed crowdworkers in poor countries very little so as "not to disrupt the economy". Guess whether they offer diff…

2023-02-16 03:28:55 RT @haleyhaala: At an @StanfordHAI AI and Education event and reflecting on how scholars navigate expertise in this #interdisciplinary spac…

2023-02-16 02:10:43 I haven't finished it yet, but probably will --- it's one of those train-wrecks you can't look away from, alas.

2023-02-16 02:10:18 3) Because large LMs enable "few-shot learning" it follows that in the near future the amount of training data required to keep them from outputting toxic content will also be minimal. etc. >

2023-02-16 02:09:36 Other howlers: 1) Electronic calculators became cheap and widely accessible in the 1950s. 2) #ChatGPT makes a good Jungian therapist (for help quickly analyzing dreams) >

2023-02-16 02:08:34 The guest also asserted that the robots.txt "soft standard" was an effective way to prevent pages from being crawled (as if all crawlers respect that) &

2023-02-16 02:07:18 Guest blithely claims that large language models learn language like kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language. >

2023-02-16 02:06:19 Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, usually get great guests and was floored by how awful it was. >

2023-02-16 01:30:10 @athundt Prompted by a really interesting discussion at the winter school I was at last week, I'd be really interested to learn about how other fields managed industry interest --- both best practices and things to avoid. (I'm thinking pharma, big oil...)

2023-02-16 01:23:50 Sorry, did I say toys? I mean extra super sophisticated bias reproducing, information ecosystem polluting, plagiarism machines.

2023-02-15 05:39:08 @Y_I_K_ES @alexhanna I was going for quack doctor...

2023-02-15 05:17:00 @luke_stark @alexhanna That feels like it's about to turn into a meme...

2023-02-15 03:36:54 If it quacks like a fake doc ... it might be scams with language models (generative #MathyMaths) in healthcare. Join me and @alexhanna as we take apart some truly appalling examples in the next episode of #MAIHT3k this Friday, Feb 17, 9:30am Pacific. https://t.co/VF7TD6tw5c

2023-02-15 01:44:06 @GretchenAMcC Woah -- that's not the stress pattern I usually have for kiki.

2023-02-15 01:43:54 RT @GretchenAMcC: Roses are red You'll probably agree Which one is bouba And which is kiki https://t.co/YlSPjhk15w

2023-02-14 17:51:51 @rachelmetz @technology Glad to have you on the beat!

2023-02-14 17:50:10 @rachelmetz @technology Woo-hoo!!! Congrats to all involved :)

2023-02-14 16:53:11 RT @Abebab: just a reminder that big tech corps still censor AI ethics work. we've been collaborating with a scholar who works in a big tec…

2023-02-14 14:18:37 RT @rajiinio: Something many often don't consider when discussing "Ethical AI" is the power differential - there is a multi-billion dollar…

2023-02-14 14:00:50 RT @emilymbender: Nothing says "human-centered AI" like casually dismissing the thorough work of those documenting AI harms to, you know, a…

2023-02-14 13:55:09 Early 2023 vibes: Those working in AI ethics have documented many harms associated with this approach, but the #AIHype peddlers are intent on selling "upside" and "promising futures" and have deep pockets for marketing. https://t.co/lz1P7pg5lN

2023-02-14 13:15:45 @ichiro_satoh

2023-02-13 22:37:58 RT @C_Schreyer: Come edit language-y things with me on @Wikipedia! Following in the footsteps of my first edit-a -thon teachers @GretchenAM…

2023-02-13 22:29:32 RT @Abebab: for anyone that thinks certain technologies/tools are 'inevitable', I encourage you to dump that thinking. the technologies/too…

2023-02-13 22:28:55 RT @BlackWomenInAI: "I Still Believe" is more than just a piece of art, it's a reflection of our shared humanity and a call to action to st…

2023-02-13 20:39:58 RT @timnitGebru: When I see schools raising $$ for their "ethics" related initiatives &

2023-02-13 20:39:55 RT @timnitGebru: Never look to those who have most to gain since they're at the top of the hierarchy, for any type of dismantling of power.…

2023-02-13 20:37:07 Nothing says "human-centered AI" like casually dismissing the thorough work of those documenting AI harms to, you know, actual people. https://t.co/lz1P7pg5lN

2023-02-13 14:31:10 @michaelgaubrey @AndrewDCase FTR -- that was a result from standard Bing, not BingGPT. (Which the person who posted it didn't have access to.)

2023-02-13 14:22:43 @JudgeFergusonTX That would only be slightly informative if we actually had information about the training data, so that we could look at the confabulation as reflecting that training data. But since OpenAI isn't open about its training data, there's zero value.

2023-02-13 14:11:00 Minor in the grand scheme of things, but it's still super annoying that folks are now attributing the phrase "Stochastic Parrots" to @sama after he used it in a sophomoric way. He didn't coin the phrase, we did in this paper: https://t.co/kwACyKvDtL

2023-02-13 13:57:05 RT @mart1oeil: Like, wasn't it @timnitGebru and @mmitchell_ai who raised the alarm on this subject (and who got fired from Google because of tha…

2023-02-13 13:56:58 RT @mart1oeil: In short, even if he is trying to position himself, it really is @emilymbender, @timnitGebru, @mcmillan_majora and @mmitchell_ai who…

2023-02-12 16:03:04 @JudgeFergusonTX Don't know &

2023-02-12 15:28:40 RT @mmitchell_ai: Ok so. In light of much talking about Bing, Bard and "truth", I looked at what the "Stochastic " paper warned. The first…

2023-02-11 20:02:23 RT @mmitchell_ai: Plug for our event! With @emilymbender and @timnitGebru https://t.co/kHf69wftFO

2023-02-10 19:16:06 RT @alexhanna: "ChatGPT isn't really new but simply an iteration of the class war that's been waged since the start of the industrial revol…

2023-02-10 08:11:35 RT @mmitchell_ai: "Those limitations were highlighted by Google researchers in a...paper arguing for caution w/ text generation...that irke…

2023-02-10 05:48:33 RT @csdoctorsister: Y’all can debate M’s chatbot vs G’s chatbot all day, if you want. The racism, sexism + rest of the -isms in these cha…

2023-02-09 15:30:43 RT @emilymbender: Because big tech is currently all racing to the bottom of this one particular valley (sinkhole? trench?), namely, chatbot…

2023-02-09 15:30:35 RT @emilymbender: I'm not sure which is less surprising: That Bard created a confident sounding incorrect answer, or that no one at Google…

2023-02-09 14:55:29 @stevermeister If you aren't interested enough in my writing to actually read it, I don't see why I should invest anything in your response.

2023-02-09 14:19:30 Itamar clarifies: this is just with normal Bing, not chat Bing.

2023-02-09 13:34:23 (Full disclosure: I haven't been able to repro this, but I rather suspect they've got folks on call playing whack-a-mole with all the egregious responses getting exposed.)

2023-02-09 13:33:50 Here's a cute example, due to Itamar Turner-Trauring (@itmarst@hachyderm.io), who observes that Google gave bad results which were written about in the news—which the new GPT-Bing used as reliable answers. Autogenerated trash feeding the next cycle, with one step of indirection. https://t.co/hERPlEbxeV

2023-02-09 09:41:50 Because big tech is currently all racing to the bottom of this one particular valley (sinkhole? trench?), namely, chatbots for search, I'm re-upping this thread about why that's a terrible idea. https://t.co/dSt8vLtvHl

2023-02-09 08:08:40 Looking forward to this! https://t.co/IqhLKwKBXj

2023-02-08 21:30:49 RT @ltgoslo: Extra talk at the LTG research seminar tomorrow, February 9! Emily M. Bender @emilymbender "Meaning Making with Artificial I…

2023-02-08 18:55:36 I'm not sure which is less surprising: That Bard created a confident sounding incorrect answer, or that no one at Google thought it worth validating the output before using it in a demo. #AIHype https://t.co/MvEzqgvVE8

2023-02-07 20:45:17 RT @ManeeshJuneja: Thank Goodness we have people like Emily - a useful thread

2023-02-07 16:51:09 RT @mmitchell_ai: As usual @emilymbender provides the amazing public service of breaking down AI hype. Check out her earlier piece that for…

2023-02-07 16:11:17 RT @emilymbender: Finally, we get Sundar/Google promising exactly what @chirag_shah and I warned against in our paper "Situating Search" (C…

2023-02-07 16:11:07 RT @emilymbender: "We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan https://t.co/d93p1efaEf I suggest you rea…

2023-02-07 15:16:26 @SashaMTL @ElectricWeegie Some of it is collected here: https://t.co/uKA4tuuwu7

2023-02-07 14:30:27 RT @anetv: A few thoughts on the blog post from Google CEO Sundar Pichai https://t.co/3UVbcsF6AD about what it means to automate knowledge…

2023-02-07 14:20:38 @mywoisme You surely then have a different experience of social media than I do --- my mentions are perpetually filled with reply guys, and no, it wouldn't make sense to take the stance that any given challenge comes from good intent.

2023-02-07 14:19:15 RT @emilymbender: Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack! https:/…

2023-02-07 14:18:55 RT @chirag_shah: Yes, we did warn about this in our #CHIIR2022 paper a year ago and we were told by Google proxies that we were overreactin…

2023-02-07 14:09:10 @mywoisme Those sound valuable --- and again, not relevant to the tech that I was talking about in my thread. You are providing interfaces to specific, curated sets of data (e.g. the jobs DB or the government site) and then from there people can explore the actual details of the data.

2023-02-07 14:07:33 @mywoisme That sounds better than what you originally said -- I encourage you to be very careful how you talk about this.

2023-02-07 13:49:08 @mywoisme I can't guess what it is that you are actually building (since you didn't say), but if it's more like the latter, then it's a total non-sequitur --- and seems to be an attempt to undermine my argument based on irrelevant points and a tokenization of low income people.

2023-02-07 13:48:13 @mywoisme A curated website that includes information about services that people need, which itself embeds a chatbot to help people navigate that website, say, would be a very different proposition. >

2023-02-07 13:47:13 @mywoisme I think you're jumping in here with a non-sequitur. Your first tweet included "We build AI and bots for people on low incomes to access services." My thread was not about "accessing services". It was about chatbots as search engines for the Internet. >

2023-02-07 13:40:32 @mywoisme Chatbots are terrible search engines for anyone. Furthermore, no one is charged to use existing search engines. "Bots serve better" sounds like tech solutionism, and I rather suspect you are selling something.

2023-02-07 13:35:16 @mywoisme Huh? This is a thread about chatbots for search. Are you asserting that people with low incomes somehow don't deserve information access systems that support their information literacy just as much as anyone else?

2023-02-07 11:03:14 RT @gfiorelli1: A wonderful thread, which has nested, in its last tweet, another great thread.

2023-02-07 09:05:41 @djleufer @BuseCett @chiragshah Various presentations of the ideas from that paper here: https://t.co/dSt8vLtvHl

2023-02-07 08:38:20 @WendyNorris @danmcquillan https://t.co/m6zcbzG6pz

2023-02-07 07:56:46 Why aren't chatbots good replacements for search engines? See this thread: https://t.co/MYfVjFBOfe

2023-02-07 07:55:20 Finally, we get Sundar/Google promising exactly what @chirag_shah and I warned against in our paper "Situating Search" (CHIIR 2022): It is harmful to human sense making, information literacy and learning for the computer to do this distilling work, even when it's not wrong. >

2023-02-07 07:52:30 "High bar for quality, safety and groundedness" in the prev quote links to this page: https://t.co/RIKnAVGuwe Reminder: The state of the art for providing the source of the information you are linking to is 100%, when what you return is a link, rather than synthetic text. >

2023-02-07 07:51:10 Next some reassurance that they're using the lightweight version, so that when millions of people use it every day, it's a smaller amount of electricity (~ carbon footprint) multiplied by millions. Okay, better than the heavyweight version, but just how much carbon, Sundar? >

2023-02-07 07:49:12 Let's sit with that prev quote a bit longer. No, the web is not "the world's knowledge" nor does the info on the web represent the "breadth" of same. Also, large language models are neither intelligent nor creative. >

2023-02-07 07:48:21 And then a few glowing paragraphs about "Bard", which seems to be the direct #ChatGPT competitor, built off of LaMDA. Note the selling point of broad topic coverage: that is, leaning into the way in which apparent fluency on many topics provokes unearned trust. >

2023-02-07 07:45:12 And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how? #AIHype #InAweOfScale >

2023-02-07 07:43:08 Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!! There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI". >

2023-02-07 07:40:53 Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack! https://t.co/55U85T0UmZ #MathyMath #AIHype

2023-02-06 15:35:07 "Transformer models and diffusion models are not creative but carceral - they and other forms of AI imprison our ability to imagine real alternatives." -- @danmcquillan

2023-02-06 15:34:38 @danmcquillan "Instead of reactionary solutionism, let us ask where the technologies are that people really need. Let us reclaim the idea of socially useful production, of technological developments that start from community needs." -- @danmcquillan >

2023-02-06 15:34:16 "The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated." -- @danmcquillan >

2023-02-06 15:33:55 @danmcquillan "ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan >

2023-02-06 15:33:36 "We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan https://t.co/d93p1efaEf I suggest you read the whole thing, but some pull quotes: >

2023-02-06 15:30:49 @danmcquillan Thanks -- and nice post! I don't see the second of those, but I do see the link to @IrisVanRooij 's great piece (among many other valuable resources).

2023-02-06 15:16:49 "on-the-fly" as in "post-processing on-the-fly" evokes different images when primed with discussions of (web) crawling and lots of spider metaphors.

2023-02-06 08:28:03 RT @TheNeedling: Space Needle Waiting Whole Life for This Moment: https://t.co/28PRWbUIUr https://t.co/zy5kSPPTv1

2023-02-05 20:08:01 @timnitGebru And vague allegations of missing citations that were so flimsy. Like if some specific thing were missing, they could have pointed us to it to consider adding...

2023-02-05 20:03:47 @timnitGebru Seriously -- today's object lesson in "don't mouth off about what you haven't read". And maybe also: "The more widely discussed a paper is, the more likely you'll get misleading info about what's in it..."

2023-02-05 20:02:40 RT @timnitGebru: I’m so confused. Besides the lie, on what our paper is about, where in the paper do we talk about “other large-scale NLP s…

2023-02-05 18:58:25 RT @emilymbender: Huh -- I rather suspect that Yann hasn't read our paper. Not even the abstract (attached). We suggested, as we wrote the…

2023-02-05 07:59:29 There's no mention of "human-level AI" there though, since that is not a research goal that we are speaking to in that paper. (And it certainly isn't a research goal of mine.)

2023-02-05 07:58:41 And as to the context for Yann's tweet, one of the risks we identify is that the ersatz fluency of language models would deceive researchers into thinking they were building natural language understanding systems when they weren't. (See sec 6.1.) https://t.co/xdzmojB4Zm

2023-02-05 07:55:19 Huh -- I rather suspect that Yann hasn't read our paper. Not even the abstract (attached). We suggested, as we wrote the paper in 2020, that it was prudent to consider the risks, and then gathered what info was available then (from the literature) about what the risks are. https://t.co/32NaMvTITy https://t.co/I8D1tVKYrB

2023-02-04 15:15:05 RT @gliese1337: Hey linguists! How could your subfield be employed in science fiction or fantasy without invoking the Sapir-Whorf hypothesi…

2023-02-04 14:31:31 @BoseShamik @rachelmetz @RadicalAIPod +1 for @RadicalAIPod

2023-02-04 12:49:51 RT @becauselangpod: Anyone could tell

2023-02-04 07:07:52 @venikunche I went to four conferences last year and avoided it. Very careful about masking &

2023-02-03 17:27:17 @poopmachine @rachelmetz @alexhanna Coming soon!

2023-02-03 16:09:31 @aryaman2020 Many aren't actually. Including @schock 's work I was alluding to above: https://t.co/xL8bQrKyI1

2023-02-03 16:07:42 On that last point, see: https://t.co/NA2Kbvuq4H

2023-02-03 16:06:28 And before a thousand more people say this to me: Yes the need for transparency isn't limited to training data. How was it evaluated? How were the data for the RLHF phases collected, who created them? What about the data &

2023-02-03 16:01:35 @DrSyedMustafaA1 Yes agreed.

2023-02-03 15:09:58 RT @emilymbender: And yes it's a problem that OpenAI is not being transparent about what they are unleashing on the world. The public deser…

2023-02-03 15:09:53 And yes it's a problem that OpenAI is not being transparent about what they are unleashing on the world. The public deserves to know what's in the training data for #ChatGPT. But this is about transparency and accountability, not about measuring "intellectual contribution."

2023-02-03 15:05:19 Look to the work of Safiya Noble, Ruha Benjamin, Abeba Birhane, Mar Hicks, Alex Hanna, Deb Raji, Sasha Costanza-Chock and others. Only some of these authors would put things on arXiv (and not all of their work).

2023-02-03 15:03:29 And when I think of "intellectual contributions" to AI research, I'd guess that much of the most important work isn't on arXiv at all. It's in books or journals that many computer scientists seem unwilling to learn about (or take the time to read). >

2023-02-03 15:01:55 Somehow a count of arXiv papers transmutes into a measure of "intellectual contribution". That's hilarious. ArXiv made sense as a countermeasure against slow or closed publishing back in the day. But don't valorize the collection of flags in the flag planting arena. >

2023-02-03 14:55:20 It's 2023. "Gosh, we didn't realize how people would misuse this" just isn't believable anymore. Bare minimum, with any new tech: 1) How would a stalker use this? 2) What will 4chan do with this? And don't release, not even as alpha or beta, before mitigating those risks. https://t.co/2u89bCDQRl

2023-02-03 14:08:57 RT @emilymbender: Was just perusing @OpenAI 's terms of service and was a little surprised to find this. Are they really saying that the us…

2023-02-03 14:08:04 RT @agstrait: How many more times must we see firms releasing their tech with an easy-to-use interface, then feigning shock when its immedi…

2023-02-03 14:07:52 RT @mmitchell_ai: So I was asked by several journalists last year about predictions for 2023, and described much of what @jjvincent is now…

2023-02-03 13:53:42 RT @csdoctorsister: “What was the last book you purchased, and why did you buy it? #DataConscience: Algorithmic Siege on our Humanity by Dr…

2023-02-03 00:58:06 RT @shengokai: Question for the Austinians out there: does the illocutionary force of an utterance also encompass the affective power of an…

2023-02-02 22:01:16 @rachelmetz We're working towards the audio-only version, but how about Mystery AI Hype Theater 3000 with @alexhanna https://t.co/6UCGlE6mx3

2023-02-02 13:46:04 RT @emilymbender: @gbrumfiel I want to set the record straight on one thing though. I do NOT "wonder" if #ChatGPT could be improved to be m…

2023-02-02 13:45:56 @gbrumfiel I want to set the record straight on one thing though. I do NOT "wonder" if #ChatGPT could be improved to be more accurate. @gbrumfiel asked me if it could be made more accurate and I said I don't think so. Not the same. https://t.co/TnvxY1RPqw

2023-02-02 13:43:06 I appreciated @gbrumfiel 's angle here -- if computers are so central to things like rocket science because they can reliably do complex calculations, why is supposedly "advanced" #ChatGPT so unreliable? https://t.co/pOMwxFwRpL >

2023-02-02 05:13:48 @alexhanna Thank you!!

2023-02-02 05:13:36 @UpFromTheCracks @mmitchell_ai Thank you

2023-02-02 04:03:51 @Grady_Booch @CriticalAI Thank you!

2023-02-02 03:23:31 RT @sl_huang: My novelette MURDER BY PIXEL in @clarkesworld has a bibliography. It includes 18 links. Been meaning to do a lil tweet thread…

2023-02-02 03:10:26 @timnitGebru @mmitchell_ai Thank you!

2023-02-02 03:10:16 @mihaela_v @mmitchell_ai Thank you!

2023-02-02 01:16:17 @DiegoAlcalaPR @CriticalAI Gracias!

2023-02-02 01:11:00 @CriticalAI Thank you!

2023-02-01 21:27:59 @mmitchell_ai Thank you

2023-02-01 20:28:56 RT @jevanhutson: Do not do this. This is not legal advice. This is moral advice.

2023-02-01 20:00:47 RT @timnitGebru: Yep that's how they evade responsibility while advertising it as something that can do anything for everyone under any cir…

2023-02-01 18:13:01 RT @uwnews: Congrats to Emily M. Bender, John Marzluff, Sean D. Sullivan and Deborah Illman (pictured below from left to right), @UW's 2022…

2023-02-01 17:11:47 @LeonDerczynski Thank you :)

2023-02-01 17:08:16 @UWlinguistics Thank you!

2023-02-01 17:05:50 @bertil_hatt @OpenAI Uh no, the *you* in 3(a) is the user, not OpenAI. It is not in the least about how they are protecting your privacy.

2023-02-01 17:00:18 @bertil_hatt @OpenAI The terms of service are short, my dude, and linked from the first tweet in my thread. You could have checked before coming here to mansplain.

2023-02-01 15:45:57 @_vsmenon Thank you

2023-02-01 15:18:36 @TaliaRinger It is also seriously damaging to relationships with non-ML folks who (ideally) could be working collaboratively on ML approaches to various domains. "We've solved your field" isn't exactly enticing though, nor is "Our goal is to solve your field"....

2023-02-01 14:59:28 @gyrodiot Well, it isn't necessarily their fault. For all we know, they might have tried but their sound advice went unheeded...

2023-02-01 14:56:13 I'm beginning to think that whenever a ML researcher talks about 'solving X' where X isn't an equation, that's a really clear signal that they don't understand what X is, at all.

2023-02-01 14:54:09 Reading a terrible paper and scrolling down to the acknowledgements to see who failed to dissuade the author from publishing such drivel...

2023-02-01 14:22:35 @OpenAI In sum, @OpenAI 's approach to #AISafety seems to be: surely that's the user's job, especially when it comes to complying with any laws.

2023-02-01 14:22:24 @OpenAI Meanwhile, their approach to #GDPR/#CCPA seems to be "Nuh-uh. We're not collecting personal data. You're collecting personal data!" IANAL, though, and I'd love to hear what actual privacy lawyers make of this. >

2023-02-01 14:19:07 @alexhanna @OpenAI IKR??

2023-02-01 14:16:05 Was just perusing @OpenAI 's terms of service and was a little surprised to find this. Are they really saying that the user is responsible for ensuring that #ChatGPT's output doesn't break any laws? Source: https://t.co/VPWd2InRb5 >

2023-02-01 13:59:33 Interestingly, https://t.co/xDoVX6s9QC claims sponsorship from Google (displaying Google's logo). I wonder if Google is actually sponsoring scam events or if these folks are just fraudulently using the logo.

2023-02-01 13:58:26 The spam/predatory events linked were: https://t.co/nBadGD2LPj https://t.co/AEFf7qqYoF https://t.co/Y6r7DyfywB https://t.co/xDoVX6rC14 + one link that didn't work for me: https://t.co/mDDLihRmMX

2023-02-01 13:57:06 Here's a new twist (in my inbox this morning): "Dear Professor You are invited as Plenary Speaker / Invited Speaker in one of the following conferences. The Proceedings will be published by IEEE for BIO2023 and MACSE2023, with Springer Verlag for EEACS and with AIP for APSAC"

2023-02-01 13:12:34 RT @IrisVanRooij: "Here I collect a selected set of critical lenses on so-called ‘AI’, including the recently hyped #ChatGPT. I hope these…

2023-01-31 15:56:49 RT @alexhanna: Episode 7 of Mystery AI Hype Theater 3000 is out! @emilymbender and I talk with @trochee about evaluation, benchmarking, and…

2023-01-31 14:43:06 RT @emilymbender: Check it out! Episode 7 of Mystery AI Hype Theater 3000 is up --- with special guest @trochee who brings deep expertise o…

2023-01-30 20:10:54 RT @NEJLangTech: ACL Rolling Review now has journal publication: authors are invited to commit papers in ARR to the next issue of NEJLT, de…

2023-01-30 16:55:47 Check it out! Episode 7 of Mystery AI Hype Theater 3000 is up --- with special guest @trochee who brings deep expertise on measurement and evaluation (while @alexhanna and I provide the usual irreverence) https://t.co/6DbfNaYYkp #AIhype #MathyMath #MAIHT3k

2023-01-30 13:42:12 RT @emilymbender: Hey @Wikipedia -- in the new layout, you have a serious error around "Languages". English is a language. So if the pa…

2023-01-30 03:44:55 Hey @Wikipedia -- in the new layout, you have a serious error around "Languages". English is a language. So if the page exists in English and say Ukrainian, that means there are TWO languages, not one. https://t.co/sMOdVQz5zH

2023-01-16 22:25:00 RT @ruthstarkman: Great article by @adrienneandgp @MilagrosMiceli @timnitgebru “Data labeling jobs are often performed far from the Sili…

2023-01-16 15:25:01 @agnesbookbinder By experience, I mean the subjective experience of doing something. Sure, intent and motivation are part of that, but not all of it.

2023-01-16 15:20:51 @CriticalAI @GaryMarcus @timnitGebru @TaliaRinger yes: https://t.co/jYEiASBLXT

2023-01-16 15:17:31 "form" vs. "meaning" sometimes doesn't seem to resonate, so I'm trying out a new way of describing this: "artifact" vs. "experience" #AIHype #MathyMath https://t.co/FqDSXcjhJC

2023-01-16 15:16:58 @CriticalAI p.s. I'm also reminded (again) of Lee Vinsel's points about "criti-hype": https://t.co/k2qb3rAyGb

2023-01-16 15:12:27 @CriticalAI I think this is another example of people mistaking artifacts (e.g. comments submitted in public comment processes

2023-01-16 15:10:36 @CriticalAI called this op-ed "well-intentioned" and I think it is in the sense that the authors are concerned with protecting democracy. But they are misapprehending what the threat is. >

2023-01-16 15:09:35 And this is just absurd. It comes down to: "If we had non-existent autonomous technology, that technology could..." "A system that can understand political networks" does not exist. And "understand" doesn't even imply agency like they assume. >

2023-01-16 15:07:22 Take this, for instance. #ChatGPT *could be used* to do this, but it doesn't have the agency to do it itself. >

2023-01-16 15:05:19 Indeed - this OpEd is weirdly misinformed #AIHype. Cheap text synthesis is definitely a threat, but it is one because *people* could use it to (further) gum up the communication processes in our government. But that's not what these authors seem to be saying. >

2023-01-15 23:01:05 @WellsLucasSanto Yeah, institutional websites like that are usually super hard to get up the gumption to sit down &

2023-01-15 22:55:42 @WellsLucasSanto In case it's helpful to have the 1st step: At most institutions, there's an office that helps mediate this. Students go there to establish what accommodations are needed and then the office communicates with faculty. U of M's is here, if that's relevant: https://t.co/v9RetGr2fi

2023-01-15 17:39:15 @firepile Thank you.

2023-01-15 17:34:21 @firepile Thanks. Not a philosopher --- any key citations you could point me to?

2023-01-15 15:27:46 @FarhadMohsin1 That's definitely how runners talk about it though!

2023-01-15 15:14:43 For more on why chatbots aren't a good replacement for search, see this thread: https://t.co/HabB70Bq8c

2023-01-15 15:13:46 On resisting centralization of information access systems, I highly recommend: 1) Safiya Noble's _Algorithms of Oppression_ 2) This recent podcast episode featuring @timnitGebru https://t.co/06Yd4uva76 >

2023-01-15 15:12:10 I did a screencap rather than a QT because there was no way to get a QT to show the interaction I wanted to capture. For completeness, here's the "deep lesson" tweet: https://t.co/M8rPHmwaj0 >

2023-01-15 15:11:15 The "deep lesson" has to do with how we collectively design information access systems, and our choices in this moment. Do we lean in to #AIhype or do we level up info hygiene? Do we accept inevitability narratives about centralized control of info systems, or do we resist? https://t.co/os1ahgt9HZ

2023-01-15 14:12:02 RT @emilymbender: Got a chance to listen and yep -- this is excellent. Highly recommended for all. @timnitGebru is genius at explaining in…

2023-01-15 14:11:57 RT @emilymbender: Stochastic Parrots on HackerNews today, &

2023-01-15 14:11:49 RT @emilymbender: "Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that wi…

2023-01-15 00:04:44 RT @IrisVanRooij: Stop feeding the hype and start resisting https://t.co/HrNFGTcEoS #StochasticParrots #ChatGPT #LanguageModels #AIEthics #…

2023-01-15 00:03:31 "Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that will allow them to recognise misleading claims made by tech companies" Excellent call to action by @IrisVanRooij https://t.co/3iUM1PFYZc

2023-01-14 22:53:45 @nitashatiku Ohhh! Saving to listen.

2023-01-14 22:36:27 @CriticalAI Probably not worth the headache, I would guess. It's techbro central over there.

2023-01-14 22:20:40 RT @emilymbender: If you'd like to know what actually went down, here's a collection of the better news coverage of those events: https://…

2023-01-14 22:20:36 If you'd like to know what actually went down, here's a collection of the better news coverage of those events: https://t.co/QrrBwXIlQi

2023-01-14 22:19:45 Stochastic Parrots on HackerNews today, &

2023-01-14 19:14:43 Got a chance to listen and yep -- this is excellent. Highly recommended for all. @timnitGebru is genius at explaining in clear language where the issues are, at connecting her work to others' and at not letting interviewers get away with erroneous presuppositions. https://t.co/5dGqB55tKh

2023-01-13 03:11:23 @complingy @NSF @EleanorNorton Congrats!

2023-01-13 03:11:02 Starting to see lots and lots of chatter about people using #ChatGPT for legal applications. This is reason #5176 that it MATTERS that the public understand that these things do not understand. #AIhype #MathyMath #FFS

2023-01-13 00:55:31 RT @alexhanna: And the hype starts comin and it don't stop coming: @emilymbender and I kick off Mystery AI Hype Theater 3000 in 2023 next F…

2023-01-12 20:54:01 Authors admit to using automatic plagiarism system https://t.co/xu6g7B0JgM

2023-01-12 18:06:24 @PsychScientists @IrisVanRooij @jdp23 @timnitGebru Well yes, the block button is there for anyone to use. I use it too. But Iris was actually being very gentle and clear, and trying to hand him a clue. And that's a bit much?

2023-01-12 17:39:47 @IrisVanRooij @jdp23 @PsychScientists @timnitGebru Wow, talk about thin skin.

2023-01-12 17:11:58 @sergia_ch I'm super impressed that you got out of that and can now see it clearly.

2023-01-12 15:17:59 RT @chirag_shah: Still love this old post by @jerepick about how important “friction” is in information access and use. And why it’s not a…

2023-01-12 15:17:39 RT @emilymbender: It's gonna be sealions all the way down tonight, I'm afraid. I'll do my best to remember: the best way to observe sealion…

2023-01-12 15:17:33 RT @emilymbender: This is so gross. Also a study in a non-apology. He says he repudiates the horrific comments ... and then goes right back…

2023-01-12 14:45:27 RT @laurenfklein: Via @emilymbender, this recent interview with @timnitGebru provides such a clear view of where things are with AI right n…

2023-01-12 03:12:47 RT @aclmentorship: Want to collaborate beyond #NLProc?Check out our suggestions for "Developing collaborations with other disciplines" a…

2023-01-12 02:21:42 Video description from prev tweet: About a dozen seals are sitting on a skinny, round floating pier, barking. Eventually they over-balance the thing and several seals fall off into the water. Filmed while I was out on a run, in Seattle's Ballard neighborhood, in Feb 2022.

2023-01-12 02:20:23 It's gonna be sealions all the way down tonight, I'm afraid. I'll do my best to remember: the best way to observe sealions (both types) is from afar. https://t.co/KKWgi5kVtP [ok, technically I think my video is of seals, but I like it too much not to share.] ID in next tweet.

2023-01-12 01:17:53 This is the "intellectual" heart of Effective Altruism folks. It's a cult and it's harmful. And it's got branches in the form of student clubs at lots of universities. It's really important to not let this be normalized.

2023-01-12 01:15:19 And the level of naïveté regarding racial constructs and racism in the whole thing is mind-boggling. >

2023-01-12 01:14:36 And pro-tip: "inequality in social outcomes, including sometimes disparities in skills and cognitive capacity." --- is STILL making claims of superiority of one group (to be clear: he's claiming this for white people) over another (to be clear: he's talking about Black people).>

2023-01-12 01:12:43 This is so gross. Also a study in a non-apology. He says he repudiates the horrific comments ... and then goes right back into them. You think posting slurs is offensive, so you apologize by ... quoting yourself posting a slur? >

2023-01-12 00:54:25 @GaryMarcus @HenrySiqueiros @timnitGebru I'm talking about remarks like this one: From https://t.co/9YbLS7GDGa https://t.co/R3I2iCC37a

2023-01-12 00:45:07 I've also learned: It is there on the desktop app, just buried under "Privacy" in the settings menu.

2023-01-12 00:43:17 @signalapp Resolved! The setting for turning it off is (oddly) only available in the mobile app. But once done there, it affects the desktop app too.

2023-01-12 00:42:43 @matbalez @signalapp Thank you, that fixed it. (Super counterintuitive that this isn't available in the desktop app....)

2023-01-12 00:32:52 @matbalez @signalapp Huh -- I don't have "Settings" but rather "Preferences" and there's nothing there about Stories...

2023-01-12 00:21:13 Is there a way to just hide "stories" on @signalapp ? I don't use them, I don't want to see them, but every time the app restarts (on desktop) I get a new notification about them.

2023-01-12 00:16:47 I haven't heard of this podcast before, but this looks super interesting. @timnitGebru has such a clear view of things --- should be amazing! https://t.co/06Yd4uva76

2023-01-12 00:15:09 @HenrySiqueiros @GaryMarcus @timnitGebru Excuse me? "outsider" by whose definition? I think a better description of their relative positions is techno-chauvinist ("AI is going to solve everything, if we can build it right") v. techno-humanist (keeping ALL humans in view while designing).

2023-01-11 23:11:48 @nazarre Hmm --- I think I'd rather not. But at any rate, (at least) joint credit goes to @kareem_carr https://t.co/xKu7T2xu9F

2023-01-11 23:02:04 RT @kareem_carr: I've noticed a certain rhetorical trick that's common in tech spaces that I call "borrowing evidence from the future". It…

2023-01-11 21:34:05 RT @ejfranci2: Come be my colleague at Purdue! 2 positions as Assistant Professor of African American Studies, specializing in "artificial…

2023-01-11 20:38:42 RT @lmatsakis: In the newsletter today, I spoke with @emilymbender, who provided a much-needed correction to all the hype around ChatGPT ht…

2023-01-11 18:15:17 RT @NYU_Alliance: We are thrilled about reading this book exploring how technology can reinforce inequality and how to re-create a more equ…

2023-01-11 17:26:49 @jared_du_jour Well, except it's not clear that it is possible! People are asserting that it is, without evidence.

2023-01-11 17:21:06 It seems the main thing OpenAI has mastered is getting other people to do their hype for them. Exhibit A: Millions of cherry picked ChatGPT examples on social media. Exhibit B: Breathless anticipation (and prognostication) about GPT4.

2023-01-11 17:20:04 Do we have a name for this rhetorical move/fallacy? A: AI Hype! My system can do X! B: No, it can't. Here's why. A: So you think no computer could ever do X? -or- A: But what about future versions of it that could do X? It's super common, and it feels like it should be named.

2023-01-11 16:48:59 RT @jennycz: @jessgrieser "I love your dress!" Tired: "Thanks - it has pockets!" Inspired: "Thanks - it has dressussies!"

2023-01-11 15:25:52 How did I miss that @merbroussard has a new book coming out?? _Artificial Unintelligence_ is fantastic and I'm super excited for _More than a Glitch_. Pre-ordered as soon as I saw! https://t.co/n9VEYTA7F2

2023-01-11 15:20:26 brb ... gonna pre-order!!! This looks great. https://t.co/oUv4wZkk0v

2023-01-11 15:16:09 @TimoRoettger I've often wondered if (some) YouTubers' intonation patterns are somehow accommodating what happens when people watch the videos at higher speeds...

2023-01-10 18:22:28 RT @timnitGebru: You have a white man who was asked to not say all white male names in a podcast and here is the response. Some of us have…

2023-01-10 17:00:19 @GaryMarcus @haleyhaala ... and am especially uninterested in participating in something called an "AGI debate" where I understand the framing to be part of a larger series about how to achieve AGI. Not my interest, not my job, not worth my time.

2023-01-10 16:59:15 @GaryMarcus @haleyhaala 3) Organizing an event to "feature" minoritized voices on your platform isn't the same thing as you taking time &

2023-01-10 16:57:57 @GaryMarcus @haleyhaala The point is: 1) There is ALWAYS room to talk about the contributions of minoritized groups in scholarship &

2023-01-10 15:04:58 I also do not appreciate how the article presents especially "recognizing (and avoiding) pedestrians" as a solved problem. It's not. (And neither are voice interfaces or machine translation, for that matter, but this seems most egregious wrt supposedly self-driving cars.) https://t.co/Gi9ur3nfFG

2023-01-10 15:01:06 The article also makes it sound like the biologists are just writing English descriptions of protein shapes. I really doubt that. Surely there's some formal system for specifying/describing the shapes of proteins? >

2023-01-10 14:58:20 No, "A.I." doesn't have "artistry". And no the biologists didn't look at all the synthetic images on social media being passed around as "AI art" and say "hey, let's do that for proteins!" >

2023-01-10 14:56:20 The NYT continues to be trash at covering so-called "AI" (or, in NYT style sheet "A.I."). This piece is framed as though the folks working on protein folding "took inspiration" from DALL-E &

2023-01-10 14:53:49 @GaryMarcus @haleyhaala This is in large part why I am not interested in participating in your "AI debates". That and also not being interested in providing free labor in the middle of the winter holidays.

2023-01-10 14:52:51 @GaryMarcus @haleyhaala I have zero interest in building AGI (or AI for that matter). My concerns are with what is being done in the name of "AI". So yes, sometimes our messages overlap. But there aren't just two "sides" and we aren't on the same one. >

2023-01-10 14:51:52 @GaryMarcus @haleyhaala Also, your claim that we are on the same side wrt AGI suggests that you don't really understand where I am coming from. I hear you saying (incl in that episode) "Deep learning (alone) isn't how we'll get AGI." >

2023-01-10 14:44:01 @GaryMarcus @haleyhaala "Shoot the messenger" suggests that a) My documenting the pattern of reference in that episode was a "shooting"

2023-01-10 14:38:09 @haleyhaala That was such a great question to ask---way to cut right to the ridiculous heart of it all, and on the spot no less!

2023-01-10 13:57:24 RT @emilymbender: There's a world of difference between "That's not how you build AGI" and "That thing you've built is not A(G)I and furthe…

2023-01-10 13:53:56 RT @histoftech: Pretend machines can replace labor for free. Destroy value of that labor. Then, if the machine works, jack up the price of…

2023-01-10 05:13:44 There's a world of difference between "That's not how you build AGI" and "That thing you've built is not A(G)I and furthermore is harmful in many ways."

2023-01-10 01:19:16 @hipsterelectron Sorry, I'm not going to waste my time reading a paper that starts with a paragraph of synthetic text.

2023-01-09 19:58:21 @mizzuzbeldruegs @MyBFF @sashastiles No, I don't have open DMs on Twitter. You can find my email easily enough on my web page.

2023-01-09 19:06:47 @CriticalAI They have updated/fixed the error.

2023-01-09 17:38:11 @megdematteo @mizzuzbeldruegs @MyBFF @sashastiles Thank you. I hope that in the future you will be very clear about when you're working from a paper (which btw had multiple authors) and when you've talked to a person directly.

2023-01-09 17:07:40 I don't usually do predictions, but here's one: These will get to use their tech in court when they are defending themselves. https://t.co/Oh92zN8Th2

2023-01-09 17:05:38 That's a new one for me -- "newsletter" on LinkedIn claims that their writer spoke with me. Problem is, she didn't! What she attributes to me might come from my writing, but she doesn't point to a specific source. I've left comments for them. Suggestions on what else to do?

2023-01-09 17:02:37 @mizzuzbeldruegs @MyBFF @sashastiles I guess you all are just newsletter writers and not actual journalists, but I would hope that you hold yourselves to some standards of factuality and don't go around claiming to have received input from people you never spoke with!

2023-01-09 17:01:58 @mizzuzbeldruegs @MyBFF @sashastiles Meanwhile, your editor put out a LinkedIn post claiming that you talked with me. This is inaccurate and should be corrected ASAP. https://t.co/biZ8tA6ehd >

2023-01-09 17:01:12 @mizzuzbeldruegs @MyBFF @sashastiles This line is especially troubling to me: "Understanding how AI models are being built and engaging in productive collaborative conversations will be essential." ... because it's not clear who should be in "productive collaborative conversations" (nor who should be understanding).

2023-01-09 17:00:04 @mizzuzbeldruegs @MyBFF @sashastiles You quote me in this article, but we have not spoken. If you are summarizing some of my published work, please point to the work you are actually summarizing. (The link under my name just goes to my web page.) >

2023-01-09 14:08:09 RT @mmitchell_ai: This is how women and women's work are erased in tech. You can watch in real time. And it's harder to get a job/raise wh…

2023-01-09 13:50:44 RT @emilymbender: For anyone who doesn't want to be that guy when talking to the media, here's the strategy I follow: Make a list for yours…

2023-01-09 01:41:19 I guess one question is whether we can both teach others the lesson (don't do this—it's harmful and shameful) and provide space for the current offenders/main characters to get rehabilitated.

2023-01-09 01:40:29 Since we can't go back in time and get this into everyone's college curriculum (though wow does it need to be added ASAP), community responses using shame might well be an effective answer. >

2023-01-09 01:39:39 It seems that part of the #BigData #mathymath #ML paradigm is that people feel entitled to run experiments involving human subjects who haven't had relevant training in research ethics—y'know computer scientists bumbling around thinking they have the solutions to everything. >

2023-01-09 01:37:39 @mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics. >

2023-01-09 01:36:10 In the context of the Koko/GPT-3 trainwreck I'm reminded of @mathbabedotorg 's book _The Shame Machine_ https://t.co/UR0V1yiVbW >

2023-01-09 01:34:23 Second, if I've got the list in front of me, I can connect questions from the journalist with people's work to lift up. Usually, the questions are phrased in a way that makes it way too easy to refer only to one's own work and it's super embarrassing to see that after the fact.

2023-01-09 01:33:10 First, I'm really bad with names and always unsure of myself. But if I've got the list right there, I can say people's names way more confidently and avoid the embarrassment of verbally searching for them. >

2023-01-09 01:32:37 For anyone who doesn't want to be that guy when talking to the media, here's the strategy I follow: Make a list for yourself of the people you think you might want to be sure to mention ahead of time. I find this helps in two ways: https://t.co/d2TgsQWNmV

2023-01-09 00:08:26 Really great step-by-step tour of the Koko/GPT-3 trainwreck. Thank you @KathrynTewson https://t.co/c3bgePEW3O

2023-01-09 00:07:38 RT @KathrynTewson: A tale of fucking around, finding out, and why ethical review is important even if the researcher thinks it isn’t. Come…

2023-01-08 20:03:45 @ezraklein @GaryMarcus This despite the fact that the episode include commentary on dangers and risks of (so-called) AI. An area of study (at least if you leave the absurd Longtermism bubble) that is led by women, and especially Black women.

2023-01-08 20:02:28 Here's a list of every time a person (real or fictional) was mentioned by either @ezraklein or @GaryMarcus by name in the most recent episode of Klein's podcast. Notice any patterns here? >

2023-01-08 19:44:18 @a_tschantz There is no evidence that that is what happened.

2023-01-08 19:43:30 @calimagna Also, on what basis are you declaring this "minimal risk" and who are you to make that declaration?

2023-01-08 19:43:10 @calimagna It might have been possible to have the study approved with consent waived, but that is orthogonal to my point. He is both claiming that the set up was opt-in (and people knew what was up) and that they learned about the set up partway through. >

2023-01-08 19:31:55 RT @bobehayes: VIDEO: UW professor, @emilymbender, explains new #artificialintelligence #chatbot on @KIRO7Seattle https://t.co/XaQ8HDz5W6

2023-01-08 16:58:06 @VaughnVernon I'm really not interested in engaging with your hypothetical. Please state plainly the point you are trying to make here. My guess is that it is completely irrelevant to discourse about putting vulnerable people in conversation with synthetic text.

2023-01-08 16:49:56 @VaughnVernon That seems completely irrelevant to the thread you are responding to. What software is she using? Is it a database lookup of symptoms? Something else? How was it developed (or trained) and evaluated? What training does she have regarding how it works &

2023-01-08 15:59:35 It is not possible for both of these things to be true. Either, you had full, transparent informed consent OR people only found out later. Even if the former is true, the fact that you could write the latter as if it were fine is deeply disturbing. https://t.co/AKwwqAe4TP

2023-01-08 15:57:13 Here, let me fix it for you: "UPDATE: I leaned into AI hype and made it sound like we used GPT-3 (implied: as an automated system to) provide mental health support. What we actually did was also wildly unethical &

2023-01-08 05:22:21 @mmitchell_ai @IrisVanRooij But I don't think it makes sense to cite ChatGPT (or any similar system) as a source --- because it isn't really a source, but is rather doing automatic plagiarism itself.

2023-01-08 00:21:12 RT @emilymbender: @lizbmarquis @Abebab @luke_stark And "experiment"?! FFS. And clearly they didn't have informed consent, because only LATE…

2023-01-07 22:40:46 RT @timnitGebru: Do you understand why this is bad? You perform experiments with some of the most vulnerable people and are casually explai…

2023-01-07 14:11:33 @RGGonzales1 lolsob in quarter system

2023-01-06 21:36:19 @moyix My response is about safety issue with using this when you don't already have a lot of information.

2023-01-06 21:35:44 @sg1753 @moyix OP said: "it was easy to tell that it was correct by running the command."

2023-01-06 21:31:03 @moyix My response was about your remark that it's easy to validate by just trying the suggested code.

2023-01-06 21:30:30 @lizbmarquis @Abebab @luke_stark And "experiment"?! FFS. And clearly they didn't have informed consent, because only LATER did they tell people a machine was involved.

2023-01-06 21:28:25 @moyix How do I do XYZ on Linux? Just try: cd /

2023-01-06 21:14:15 @hueykwik This? Really? That's a terrible mnemonic. Also, I don't see why I should trust any of the other answers it gives... https://t.co/6QUpQMCnaG

2023-01-06 20:18:00 RT @emilymbender: @raciolinguistic @americandialect Such a lost opportunity for @LingSocAm --- #ADS2023 #WOTY2022 (like all before it) is k…

2023-01-06 20:17:56 @raciolinguistic @americandialect Such a lost opportunity for @LingSocAm --- #ADS2023 #WOTY2022 (like all before it) is key outreach. How many more high school students might get excited about #linguistics if they could tune in to this?

2023-01-06 18:50:31 @ReubenCohn I see. So you're saying your claim of "intelligence" in an inanimate object is in fact just a report of your own experience of dealing with it and not grounded in any definition of intelligence. That sounds extremely valuable.

2023-01-06 18:23:50 @ReubenCohn "truly intelligent"? That's a rather remarkable claim that would seem to call for detailed, careful, scientific support, beginning with a definition of "intelligent".

2023-01-06 14:46:12 @joaogsr Please contact me by email with timeline info --- I am rather booked at the moment.

2023-01-06 14:33:05 @mmitchell_ai That's a spot on quote from @mmitchell_ai but I think what follows isn't that we should cite "<

2023-01-06 14:24:16 "I believe this is a false dichotomy (they are not mutually exclusive: can be both) and seems to me intentionally feigned confusion to misrepresent the fact that it’s a tool composed of authored content by authors" @mmitchell_ai on whether #ChatGPT is an author or a tool. https://t.co/G3BsD8IHQY

2023-01-06 14:18:42 Q for those finding interest in playing with #ChatGPT: Why is this interesting to you? What's the value you find in reading synthetic text? What do you think it's helping you to learn about the world and what are you assuming about the tech to support that idea?

2023-01-06 14:16:50 @joaogsr I still would be skipping/skimming the synthetic parts. I find it a complete waste of time to read synthetic text. But I can see how an annotated guide might be helpful to someone new to this.

2023-01-06 14:16:17 @joaogsr Got it -- so you are not letting the readers think even for a moment that the synthetic text came from a person? And also hopefully not fawning over how "amazing" it is. >

2023-01-06 14:12:03 @joaogsr Please don't make that the first chapter. And definitely do not present it as if it were your writing. If you must include it, make it an appendix out of the way. https://t.co/J7eAgU1yBe

2023-01-06 14:09:01 @raciolinguistic @americandialect -- I imagine there are costs to online hosting, but perhaps there are ways to swing that while still keeping the event open?

2023-01-06 14:08:36 @raciolinguistic Indeed! I appreciate that there was a less expensive ADS only on-line registration option, but that's not the same thing as making this event truly inclusive. >

2023-01-06 14:02:28 RT @emilymbender: Just listened to this piece about #ChatGPT on @NPR @Marketplace and I want to say: Journalists, have some self-respect!…

2023-01-06 05:39:07 Just to amplify this point: Isn't journalism at its core about framing questions, figuring out who to interview to get to answers, and doing those interviews? Why would anyone think that warmed over internet text could ever replace this? https://t.co/5Kt1OSnEbI

2023-01-06 05:36:53 p.s. Yes this whole thread is a subtweet of the OpenAI researcher who's on here trying to talk up how "dangerous" #ChatGPT is for education. Like, wasn't OpenAI's whole thing "safety"? *sigh*

2023-01-06 05:36:06 Finally, all this hand wringing seems to be predicated on the idea that #ChatGPT will remain free to the public. That seems highly unlikely... >

2023-01-06 05:35:27 It seems pretty unlikely to me that such cheating will go unnoticed for long. >

2023-01-06 05:34:55 Those harms are real, to be sure, but they are local. And unlike in the peer review context, this reading, evaluation and feedback takes place in the context of a direct person-to-person relationship. >

2023-01-06 05:34:25 The harms here are waste of time (I would hate to spend time giving feedback on synthetic text

2023-01-06 05:32:49 Students using #ChatGPT to write their assignments isn't an example of this. A teacher reading the essays isn't trying to get information, but rather trying to evaluate the students' work and/or provide them with formative feedback on it. >

2023-01-06 05:32:01 What these have in common is that the reader is seeking information and encountering text that they either believe was written by a person or (mistakenly) believe to be authoritative for some other reason. >

2023-01-06 05:30:47 2nd, there are many contexts in which I'm concerned about people encountering synthetic text: people searching on the internet (or worse: using #ChatGPT as an information access system), people reviewing for scientific conferences, people reading sites like Quora or Wikipedia. >

2023-01-06 05:28:39 Apropos the hand-wringing about #ChatGPT and education, a few thoughts. First, this point from my blog post last April: https://t.co/0Xc7WVwBKi >

2023-01-06 04:37:12 (Sorry misspelled Korinek's name there) Also: I know it's @Marketplace but maybe an economist isn't the right person to have opining on the actual capabilities of these systems? https://t.co/ByUJbuGXTX

2023-01-06 04:32:27 Does any journalist really think that their job is just about producing the FORM of journalism? If so, what are you doing?

2023-01-06 04:31:45 Cont: GPT4 is the next iteration, and could debut sometime later this year. In the meantime, I'll be asking this ChatGPT whether I'm too old to learn how to be a plumber. >

2023-01-06 04:31:28 Cont: "It will probably be able to throw out soundbites that sound like you. It may not quite be able to produce a whole episode of Marketplace, but maybe GPT4 will be." >

2023-01-06 04:31:14 @NPR @Marketplace Around minute 12:17, it goes like this: For Koronek though, how AI will revolutionize search is just the tip of the iceberg. He thinks within the next decade it will pretty much revolutionize everything, including my job. >

2023-01-06 04:30:35 Just listened to this piece about #ChatGPT on @NPR @Marketplace and I want to say: Journalists, have some self-respect! https://t.co/rKUdh5p8Wm >

2023-01-06 03:35:40 RT @LucianaBenotti: How does culture shape #NLProc data, annotations, models, and applications? This is one of many questions we ask in t…

2023-01-05 17:31:28 Hey @LingSocAm -- maybe putting the **conference schedule** behind a paywall is a bad idea? Making that world readable is good for the organization, good for the members presenting, and just plain convenient for everyone. #LSA2023 #linguistics

2023-01-05 17:28:07 Hey @AbstractsOxford I'm trying to register for this conference and the link leads to an error. Please fix this so we can attend our association's annual event. Also, you're costing the @LingSocAm $$ with this error. https://t.co/olO4Pf8cC2

2023-01-05 17:24:31 @bgzimmer @americandialect Thanks -- I'm going to try to register then! (Getting an error right now, though.)

2023-01-05 17:24:12 Hey @LingSocAm I'm trying to register for the meeting as a virtual attendee, but the link takes me to an error page: https://t.co/cq3oD4Wsli Help? #LSA2023

2023-01-05 16:25:48 Q for any other #linguists experiencing #LSA2023 and #ADS2023 FOMO ... is @americandialect 's Word Prom (#WOTY2022) going to be accessible online? What's the schedule?

2023-01-05 14:56:48 @mcwm No. https://t.co/Snrghpoht9

2023-01-05 14:05:17 RT @emilymbender: This is exhausting. I'd really love to hear computer scientists who know better, who have the humility to realize that th…

2023-01-05 14:04:51 RT @emilymbender: Hey Philly tweeps --- SEPTA is doing a pilot of a system supposedly uses "AI" to call the cops when the "AI" detects a gu…

2023-01-05 14:04:42 RT @emilymbender: Sometimes it seems like the shitposting by the big names in AI is really a distraction strategy to let trash like this sl…

2023-01-05 01:13:41 RT @HabenGirma: Happy #WorldBrailleDay! Louis Braille, a blind teacher, invented the tactile reading system used by millions of #blind peop…

2023-01-04 22:22:06 This is why I need more of you closer to Lecun &

2023-01-04 22:21:23 But I think it is all connected. The more the general public believes that "artificial neural networks" have anything in common with what we recognize as thinking, feeling, accountable humans, the easier it is for people to believe in the AI snake oil. https://t.co/q4vDiQNgnI >

2023-01-04 22:19:51 Given limited hours in the day, and the continual setting of dumpster fires by the AI bros for the rest of us to put out (*sigh*), it does feel like we need to prioritize. >

2023-01-04 22:19:01 Sometimes it seems like the shitposting by the big names in AI is really a distraction strategy to let trash like this slip through under the radar... >

2023-01-04 22:14:28 RT @mmitchell_ai: A few people referring to discussions on values in AI as "moral panic". It makes sense that reading discussions around hu…

2023-01-04 21:56:47 RT @vdignum: Is really sad to see CS folk being so misled by our own language. An artificial neural network resembles a neural network o…

2023-01-04 21:21:46 @alexhanna This is maybe most urgent right now for Philly, but we've all got work to do making sure our electeds aren't setting up this nonsense in our own towns.

2023-01-04 21:20:55 @alexhanna Absolutely key point of evaluation: How many times does the system send the cops in to a situation where no violence was occurring, but all primed to think that there is? >

2023-01-04 21:20:06 @alexhanna And can you spot the GLARING omission in this evaluation plan? (Answer in next tweet, for those who aren't sure.) >

2023-01-04 21:18:43 .@alexhanna is quoted raising key points. There's zero transparency about how the system is evaluated and it's pretty predictable what harms are going to happen --- and to whom. >

2023-01-04 21:16:10 Hey Philly tweeps --- SEPTA is doing a pilot of a system that supposedly uses "AI" to call the cops when the "AI" detects a gun. This is terrifying. What kind of civil oversight do you all have going on out there? https://t.co/7MWeoHNJKX >

2023-01-04 20:49:24 RT @kareem_carr: I don’t know who needs to hear this but this is not a neuron. https://t.co/lUdPiF41XA

2023-01-04 20:33:10 @orob @mmitchell_ai Wait, is Russia famous for discourse that clearly points out the flaws in common arguments?

2023-01-04 20:10:29 RT @mmitchell_ai: Guys, please stop trying to convince yourselves or others that it's a good argument to say that using a language model to…

2023-01-04 19:03:39 @pgolding Please check first, maybe?

2023-01-04 19:03:24 @pgolding Uh, thanks but no thanks on the mansplaining.

2023-01-04 18:24:45 RT @databoydg: If you do AI without sensationalism… is it still AI?

2023-01-04 17:31:49 @ctolsen So I'm not the one you need to be telling that. I want to see this addressed to Lecun and/or the people who follow him.

2023-01-04 17:01:03 @KarlaParussel Thank you. He may well have ignored it, but it's still worth saying for bystanders.

2023-01-04 16:28:55 RT @AJLUnited: "Computer scientists @timnitGebru and @jovialjoy showed us that algorithmic bias is real." via @Forbes by @dianebrady "Not…

2023-01-04 04:56:18 @mellymeldubs Same! https://t.co/NZBJWl24FB

2023-01-04 02:40:49 @SMT_Solvers There is absolutely nothing in their tweet about checkbooks coming out.

2023-01-04 02:12:38 @SMT_Solvers @TaliaRinger @MicrosoftTeams But why tell Talia? They were specifically saying they are tired of this.

2023-01-04 01:51:57 @SMT_Solvers @TaliaRinger @MicrosoftTeams Hey techbro what made you think this reply would be the least bit welcome?

2023-01-03 19:59:01 RT @drkatedevlin: This is great from @emilymbender. I agree fully — as I said before, if students are turning to chatGPT to write essays th…

2023-01-03 19:47:50 RT @mmitchell_ai: Emily Bender @emilymbender gave a great summary on our local news station of what ChatGPT is. Only about 4 minutes, check…

2023-01-03 16:11:16 @sarahbmyers @timnitGebru @mer__edith Crowdworkers who probably were given minimal context for their tasks, weren't working in their area(s) of expertise, and were almost certainly poorly compensated.

2023-01-03 16:10:43 @sarahbmyers @timnitGebru @mer__edith Re All the hand wringing about ChatGPT putting people out of work: Surely we don't actually want children's books, legal documents, news stories, etc that are averaged internet posts + the refinement provided by OpenAI's crowdworkers. >

2023-01-03 16:07:59 @sarahbmyers @timnitGebru @mer__edith Your remarks were great, @sarahbmyers ! At the same time, I am SO DONE with journalists pulling the "haha that was GPT all along!" gimmick. https://t.co/jnm8GZ4kjC

2023-01-03 14:06:27 RT @emilymbender: Slightly belated year-end indulgence: Looking back on 2022, I think the main thing that differentiated it professionally…

2023-01-03 02:31:31 @jessgrieser Nope. But also: my contacting me page is largely there to assuage my guilt for not replying. https://t.co/LpXfErKyUc

2023-01-03 00:34:35 @complingy @jessgrieser @BNMorrison Thank you!

2023-01-02 23:29:07 @complingy @jessgrieser @BNMorrison My student days were definitely pre-LMS, but websites were starting to be a thing. I still make external, world-readable websites with the heart of the syllabus, because I think we lost something when all the info went behind LMS moats.

2023-01-02 23:22:59 And not quite in the same category, but Mystery AI Hype Theater 3000 with @alexhanna has been a blast. Definitely looking forward to more of that in 2023! https://t.co/6UCGlE6mx3

2023-01-02 23:22:14 And last Friday's spot on @KIRO7Seattle https://t.co/rlgoNjXPGt >

2023-01-02 23:21:32 An interview on @KUOW 's Sound Side with @libdenk https://t.co/0SEWDOngv0 >

2023-01-02 23:20:29 In many ways, the print media work is easier to fit in, but I think my favorites have been audio (+ my most recent TV appearance), especially: An episode of Factually! with @adamconover https://t.co/iVmcVmkISO >

2023-01-02 23:19:53 Slightly belated year-end indulgence: Looking back on 2022, I think the main thing that differentiated it professionally for me is how much more media engagement I did. I've collected links here and I see 51 items dated 2022: https://t.co/XEc34KgwKG >

2023-01-02 20:02:45 @deliprao @tallinzen "What purpose is it serving and for who" did not seem to me like a genuine question, especially given what I see you saying on here on a regular basis. I took it as a pugnacious (and ungenerous) swipe at linguistics. Good day.

2023-01-02 19:56:04 @deliprao @tallinzen Like, why isn't enough to just do the NLP you want to do? Why do you have to say that linguistics not only isn't useful for the NLP you want to do, but isn't useful at all?

2023-01-02 19:55:31 @deliprao @tallinzen So, you're asking a whole field to justify its existence? Linguistics/language sciences are about understanding a natural phenomenon. Beyond that, linguistics/language sciences have been very important in e.g. education, social justice movements, &

2023-01-02 19:50:26 @deliprao @tallinzen Here: https://t.co/cfQpORLRBu "Decent model" -- for whom? If you're using that only in the ML sense, why should it be a decent model for linguists?

2023-01-02 19:44:28 @deliprao @tallinzen And yet you seem to want to say that linguists should adopt ML folks' notion of what a model is, or that ML folks' notion is the "important" one/should be valid for everyone.

2023-01-02 00:51:43 RT @timnitGebru: The finesse with which Emily answered all of the questions in a way that counters the hype… https://t.co/JDlaGq4ieg

2023-01-02 00:51:40 @timnitGebru Thank you!

2023-01-01 21:07:52 RT @emilymbender: Yesterday I got to do a segment on @KIRO7Seattle about #ChatGPT and took the opportunity to try to push back on the #AIhy…

2022-12-31 21:41:45 RT @histoftech: Please make it your 2023 resolution to take a layered approach to mitigation that involves not just vaccines but high quali…

2022-12-31 14:27:16 Yesterday I got to do a segment on @KIRO7Seattle about #ChatGPT and took the opportunity to try to push back on the #AIhype https://t.co/rlgoNjXPGt

2022-12-30 17:27:42 @kirbyconrod Don't have any (yet) so limited opportunity for use.

2022-12-29 22:21:41 @AngelLamuno Your QT suggests that either you believe option #3 or you were snitch-tagging. Either of those: Rude. On top of that, you tagged in someone irrelevant to the conversation, which was rude to him.

2022-12-29 22:21:00 @AngelLamuno I didn't tag Ben Ainslie in my tweet. Possible reasons: The Ben Ainslie I was talking about isn't on Twitter (apparently true), I had some reason to talk about Ben Ainslie without tagging him, or I wanted to tag him but am too incompetent to do so. >

2022-12-29 21:00:23 @AngelLamuno I wasn't saying you needed my permission. I was pointing out that what you did was rude. You can decide what to do with that info.

2022-12-29 20:48:02 RT @mmitchell_ai: "Information seeking is more than simply getting answers as quickly as possible". Great piece &

2022-12-29 19:56:17 @AngelLamuno You weren't just tweeting whatever. You were QT-ing a tweet of mine, tagging someone else into it. I absolutely have a right to have and express an opinion about that kind of action.

2022-12-29 19:33:45 @AngelLamuno @AinslieBen Wrong person. I searched and the @becauselangpod co-host is apparently not on Twitter. Also, why did you assume I needed your help tagging people?

2022-12-29 19:27:12 Just listened to Ep 66 of @becauselangpod and I really hope that Ben Ainslie in particular will read this op-ed: https://t.co/FYymgEF0FG

2022-12-29 17:45:00 @dr_nickiw Sadly predictable, isn't it?

2022-12-29 17:36:35 @dr_nickiw IKR? It has been quite a couple of days on Twitter for me... https://t.co/BgFwyxejUd

2022-12-29 17:25:20 RT @_kendracalhoun: Brief LSA 2023 self-promotion thread! I'll be co-facilitating an organized session on Jan 6 with my amazing colleague…

2022-12-29 16:57:23 RT @aronchick: Great piece about the current advancements in AI. Basically, they all do a great job regurgitating words which have already…

2022-12-29 16:43:03 @robinc @chirag_shah Hello -- it's the winter break. You asked for my opinion and I wrote back to you. I don't need your opinion on what I should be doing "given my role as critic". Goodbye.

2022-12-29 16:29:32 @robinc @chirag_shah I think the design of GPT prevents any thorough linking to information sources. Any search interface that provides "answers" as synthetic text (and I'm including "summaries" we already see on Google in that) has huge risks of producing misinformation &

2022-12-29 16:06:10 @phonesoldier Thank you.

2022-12-29 16:04:07 For a slightly longer version of this argument, see this recent op-ed by me &

2022-12-29 16:01:54 @phonesoldier gives me the last word, but the "For the time being" bit is NOT anything I said. This isn't something I expect will change, TYVM. It is a fundamental design flaw. https://t.co/kiPk5WM0nC

2022-12-29 16:00:14 And then the credulously reported #AIhype. Says who? What are they selling? What does "understanding of the person" even mean and how do they measure that? https://t.co/B0ksNJIQwI

2022-12-29 15:57:15 @phonesoldier These points come from my work with @chirag_shah : https://t.co/V2QyKpCEvh

2022-12-29 15:54:00 I appreciate the opportunity to speak with journalists such as @phonesoldier (piece below) but it is always so disheartening to see my words alongside credulously reported #AIhype >

2022-12-28 22:30:13 Wow -- this thread is bringing me comments from all sorts of racists, sexists and other reactionaries. Definitely a day to be exercising the ol' block button. https://t.co/BgFwyxejUd

2022-12-28 18:34:54 @bsansouci @timnitGebru Surely misinterpreting? "not only X" != "only Y"

2022-12-28 15:09:17 @zehavoc It's been quite a run...

2022-12-28 15:03:33 @MonniauxD Indeed, my view into this is primarily only the US education system. (I did study for a year as a HS student in France, but don't have a sense of higher ed there at all.)

2022-12-28 15:01:03 And lots and lots of examples of doubling down on the point of view I was talking about. "Only builders get to decide the mechanics of the machine." "You've never built anything. Your opinion is discarded as worthless." "Math and science are the guiding principles that govern us all."

2022-12-28 15:00:39 Weirdest sexist comment I've ever encountered, which started with "Since Dobbs, I try not to say things that might sound sexist, but..."

2022-12-28 15:00:27 "Pronouns and mastodon handle in bio" (¯\_(ツ)_/¯ --- this is how I knew for sure my tweets had traveled further than usual.)

2022-12-28 15:00:15 "You are arguing against the notion of expertise." (Try reading the thread again, my dude.)

2022-12-28 14:59:59 "Universities are elitist systems. Tech lets you get ahead without credentials." (Orthogonal. I was speculating about how the hierarchy of knowledge gets built.)

2022-12-28 14:59:43 Takes that think this is about academia vs. industry, which is kinda hilarious given the "capture" of academia by industry in my field. See: https://t.co/dD8nzQvULW

2022-12-28 14:59:30 "Humanities are also full of gatekeepers." (Yes, at the level of humanities research. In undergrad classes though?)

2022-12-28 14:59:21 "You haven't built anything." (False, but maybe true for anyone who only counts building LLMs or doing other kinds of ML as building something. Either way: irrelevant.)

2022-12-28 14:59:06 "You clearly weren't smart enough to hack it in math/CS classes." (Uh, no.)

2022-12-28 14:58:57 To be very clear: I'm not arguing against math tests. I'm arguing against the hierarchy of knowledge, the idea that STEM people, esp math/CS people are "smarter" than everyone else. And musing that understanding how we built the hierarchy will help us dismantle it.

2022-12-28 14:58:33 "It's a good thing that STEM students are evaluated that way. We want only the best people doing it." (Sink or swim teaching techniques shouldn't be used as "evaluation".)

2022-12-28 14:58:21 Common misreading: "A hierarchy where social sciences are on top would be a mess!" (I'm not arguing for a different hierarchy, I'm arguing for mutual respect between fields.)

2022-12-28 14:58:00 This seems to have both resonated with many and hit a nerve with others. I want to surface the more odious negative reactions, w/o linking to or replying to specific people. They don't deserve my platform or attention. Why? So folks can see what my mentions have been like. https://t.co/00Q5H3cq2w

2022-12-28 14:16:35 @bsansouci @timnitGebru I'm surprised you'd start with the assumption that "close to the machine" means "close to the problem". Not actually that surprised tho. This encapsulates the narrow understanding of tech problems as only involving tech and not primarily the social systems it fits into.

2022-12-28 01:15:58 https://t.co/vSIc6dHZ7B

2022-12-28 01:15:51 https://t.co/WDQde4jEVA

2022-12-28 00:07:15 @MingGu262 Yes, this has come from w/in my own institution. Here's an example. (NB --- he blocked me shortly after posting this.) https://t.co/T1Z2CmZNhd

2022-12-27 22:35:40 RT @mmitchell_ai: OTOH, betcha the Stochastic Parrots thingie will be in AI histories well past 50 years from now. https://t.co/xaZhV16P6m

2022-12-27 19:20:02 @robtow Thank you for that pointer!

2022-12-27 19:13:15 The above is all idle speculation, but I would love to see if there is actual work (sociology of science? something in education?) that looks into the educational construction of the hierarchy of knowledge.

2022-12-27 19:12:28 So when the tech bros, likely the "winners" over in math &

2022-12-27 19:11:45 So there's much less of a sense of "Here's this body of knowledge, and only the smart ones can master it, and we can see who they are." (Though boy howdy does a certain kind of formal syntax lean into that.) >

2022-12-27 19:10:57 Meanwhile, if you look to the humanities and humanistic social sciences, the teaching is (on average, say) less gate-keepy (though not perfect!) and the evaluation requires spending time together in the details of open-ended explorations (essays, qualitative studies). >

2022-12-27 19:10:01 So you end up with people thinking they are "good at" or "bad at" these things, and furthermore situations where those who are "good at" them are the winners of (sometimes literal) contests. >

2022-12-27 19:09:21 But I also think that some of it has roots in the way different subjects are taught. Math &

2022-12-27 19:08:31 I've been pondering some recently about where that hierarchy comes from. It's surely reinforced by the way that $$ (both commercial and, sadly, federal research funds) tends to flow --- and people mistaking VCs, for example, as wise decision makers. >

2022-12-27 19:07:30 There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru 's wise comments on "the hierarchy of knowledge".>

2022-12-27 16:25:14 @egrefen @FelixHill84 This is for you https://t.co/aMpXnspF03 /Emily out

2022-12-27 16:23:12 @egrefen Let me remind you where this thread started: You QT'd a thread of mine which was full of pointers to my writing &

2022-12-27 16:21:43 @nisten @egrefen That wasn't proposed as a test. It was an example of a broader phenomenon. The viewpoint that looks at this one example and says "See, problem solved!" is frightening.

2022-12-27 15:33:16 They do have three citations for that claim of "true understanding". All arXiv links. Talk about an echo-chamber.

2022-12-27 15:31:59 A paper brought to me by a Semantic Scholar alert cites Bender, Gebru et al 2021 (Stochastic Parrots) and yet asserts: "Pre-trained with a massive amount of information from the internet, LLMs are now capable of truly understanding the language" I guess they didn't read it.

2022-12-27 15:30:36 @egrefen @FelixHill84 You can read why that's wrong here: https://t.co/z1F7fEBCMn It's not my job to spoon feed you academic work, tweet by tweet.

2022-12-27 14:40:05 There's a whole metaphor study to be done regarding "goal posts" and "sidelines" and the way the people who see themselves as "building AI" define "the game" and who the real "players" are --- to the detriment of society. https://t.co/LC1yDlnwKG

2022-12-26 21:44:29 RT @TaliaRinger: "We find that participants who had access to an AI assistant based on [Codex] wrote significantly less secure code than th…

2022-12-26 20:44:26 @TaliaRinger Safe travels!

2022-12-26 17:45:27 I swear, the "evil AI is coming and it will be so powerful and it will kill us all" tech bros are even more annoying than the "AI is just around the corner and it will save us all" kind.

2022-12-26 17:43:49 @nisten EMB: But why? (Benchmarks fail at construct validity) N: This benchmark measures how kind the AI is. Bad AI is coming! EMB: ¯\_(ツ)_/¯ Go see the beginning of the thread?

2022-12-26 17:42:35 @nisten This conversation so far: Paper: Oh noes! We can't keep scaling, there isn't enough data! EMB: What a gross framing. Maybe scaling isn't the thing? N: Well how else would you do it? EMB: Why do you want to do it? N: Because benchmarks! EMB: But why? N: Because benchmarks! >

2022-12-26 17:41:10 @nisten Nowhere am I advocating for advertising-driven information access systems. I invite you (again) to read the paper with @chirag_shah which underlies our op-eds, the media coverage, and my recent tweets: https://t.co/o6XfOSHMsc

2022-12-26 17:19:55 @nisten You are confusing the goal of the benchmark (measuring something, though well-intentioned or not, it lacks construct validity) with the goal of the model builders.

2022-12-26 17:17:40 @nisten How are we supposed to get anything from the measures without establishing construct validity? Also, you still haven't answered my question: Why is improving performance on that benchmark a worthwhile goal?

2022-12-26 17:14:54 @nisten That doesn't answer my question: Why is improving performance on that benchmark a worthwhile goal?

2022-12-26 17:08:48 @nisten First question: Why is improving performance on that benchmark a worthwhile goal? Worthwhile enough to pursue in unsustainable ways? You might find this informative: https://t.co/kR4ZA1k7uz

2022-12-26 16:55:59 @strubell h/t @evanmiltenburg who draws an excellent connection to @abebab 's work on values in ML research: https://t.co/4BbIt4xbDn

2022-12-26 16:55:13 Surely the lesson here (which is not new, see the work of @strubell et al 2019 etc) is that the approach to so-called "AI" that everyone is so excited about these days is simply unsustainable. >

2022-12-26 16:54:34 This framing is so gross. To see (human!) generated (ahem: English) text to be a "vital resource" you have to be deeply committed to the project of building AI models and in this particular way. >

2022-12-26 15:45:29 RT @emilymbender: Chatbots are not a good UI design for information access needs https://t.co/ookfM3DZtM

2022-12-25 18:15:39 RT @emilymbender: Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to…

2022-12-25 16:23:22 RT @mmitchell_ai: “I feel good about what I did, despite what happened,” she said. “It feels important to hold people accountable.”

2022-12-25 14:28:53 RT @emilymbender: Chatbots are not a good replacement for search engines https://t.co/FYymgEF0FG

2022-12-25 04:32:38 @BikalpaN Sorry for the short reply --- there have been a LOT of reply guys in my mentions today. The reason I say this isn't my problem is that building AI is not a project I subscribe to. If so-called "conversational AI" has no beneficial use cases, that's fine by me.

2022-12-25 04:22:19 @BikalpaN Why do you ask me? That surely isn't my problem.

2022-12-25 04:21:26 @FelixHill84 @egrefen And the unsuitability of language-model driven chatbots for "search" is two-fold: 1) they give the illusion of "understanding" when they don't and 2) they don't support sense making, i.e. what people should be doing as they access information. See: all the links I posted.

2022-12-25 04:20:02 @FelixHill84 @egrefen I don't have philosophical objections to applying neural nets to language processing. I object to claiming that models of word distribution are "understanding" anything. There is plenty of NLP that isn't about understanding... >

2022-12-25 02:03:53 @egrefen And I'm saying that whether or not users find utility isn't actually the only metric to look at. And, if you'd bothered to read the links in my thread, you'd see that it's a discussion, grounded in information science, of how chatbots/LLMs fail to support user info needs.

2022-12-25 00:28:25 @egrefen I, for one, don't think that just because someone has $$ to invest means they are in a good position to understand what tech would actually benefit society.

2022-12-25 00:27:46 @egrefen Making cigarettes more addictive is also an engineering problem. Doesn't mean it's a good idea. Also: Folks are very quick to mistake VC interest for an indication that a tech idea is a good idea.

2022-12-25 00:11:30 @egrefen Alright, then thank you for promoting my thread to your followers, even with your snide remark that refuses to engage with the substance of my work.

2022-12-25 00:10:03 RT @Abebab: this is an insightful thread on language understanding (lack thereof) and large language models https://t.co/ronf3W5oMz

2022-12-25 00:07:51 @egrefen Or, you know, read the pieces I'm linking to to see the full argument as to why it's a terrible idea?

2022-12-24 23:19:30 @mgubrud So you assert and yet --- what studies have you done to quantify that? How do you define "correct"? And even if they were perfectly "correct" it's still not a good UI for search, for the reasons Shah &

2022-12-24 23:16:59 @mgubrud They are coherent once you make sense of them. (And often wrong.)

2022-12-24 23:14:38 @mgubrud So you are equally rude to everyone then? Well done, you.

2022-12-24 23:12:16 @mgubrud Your first move in this thread was your "nuke your credibility" comment. That is not substantive and it betrays an enormous lack of respect for people whose expertise (and possibly gender) differs from yours. https://t.co/6O6SZuuVb2

2022-12-24 23:10:50 @mgubrud If you read the whole paper, you'll see that we talk about distributional semantics and the extent to which similarities in meaning between words are reflected in their distribution in the text. This is not the same thing as understanding.

2022-12-24 23:10:13 @mgubrud Perhaps you would benefit from taking a deep breath and reflecting on the fact that women academics tend to know what they are talking about. And to jump in and say otherwise is rather rude and not likely to lead to a pleasant conversation.

2022-12-24 23:09:15 @mgubrud You say your comments have been substantive, but you jumped into my mentions to tell me that my speaking from my own expertise "nukes [my] credibility", apparently because I wasn't saying what you want me to say. >

2022-12-24 22:09:46 @mgubrud Well, Dr Dude, you could keep spouting off here or you could read the (award-winning) paper of mine that I linked to.

2022-12-24 22:03:23 @mgubrud And as for "AI is real and happening"? Give me a break. What is real and happening is surveillance capitalism, pattern matching at scale, and AI snake oil. Please take your concern trolling elsewhere.

2022-12-24 22:02:36 @mgubrud Yes, I care about mitigating harms, but also: I am a linguist. And so when I say that I am speaking directly from my expertise. These systems manipulate linguistic forms but do not understand nor have communicative intent. In more detail: https://t.co/z1F7fEBCMn

2022-12-23 20:52:34 @danielsgriffin @chirag_shah Thank you. I don't think the summary of @safiyanoble 's book is very good either, FWIW.

2022-12-23 19:03:39 @danielsgriffin @chirag_shah No thank you. This isn't a very good summary and I would rather not have it promoted.

2022-12-23 18:22:08 RT @evanmiltenburg: Useful exercise for students in #NLProc: what values are implicitly communicated through this tweet/paper? (For contex…

2022-12-23 17:09:58 Could there be a more on-the-nose example of why you'd never want generative models to speak for you in a context where anyone cares about what is being said? https://t.co/L8uqVmgcvt

2022-12-23 14:15:35 @TaliaRinger @chirag_shah Short version, as an op-ed: https://t.co/FYymgEF0FG

2022-12-23 12:59:44 @TaliaRinger Wrote a paper on this with @chirag_shah earlier this year: https://t.co/rkDjc4k5HL

2022-12-23 03:32:11 RT @gleemie: TT job: UCSD Urban Studies and Planning, Designing Just Futures with a focus on Indigenous, Black, and migrant futures (due Ja…

2022-12-22 21:04:57 RT @BNonnecke: Recently fired from @Twitter, @Meta, @Google, @Microsoft? Work in civil society, government, or academia? WE WANT YOU! Appl…

2022-12-21 22:50:44 RT @ruthstarkman: @emilymbender here's what our students made for you and Alexander Koller for your Eavesdropping Octopus paper. I'll print…

2022-12-21 22:50:38 @ruthstarkman <

2022-12-21 22:50:19 @groundwalkergmb @histoftech Didn't I say as much in the very tweet you are replying to?

2022-12-21 20:13:06 @pgcorus None of these examples are about LLMs and using them as unfiltered generators though. So again, I'm wondering why you jumped into this particular conversation. Bringing your techno-optimism about other tech to apparently counter my cautions about one specific thing.

2022-12-21 19:02:46 @pgcorus And I still think "LLMs can be used for restorative justice!" is a big claim that needs supporting ... especially when launched into the sea of AI hype.

2022-12-21 19:02:19 @pgcorus Sorry for not tracking that you weren't the person I was QT-ing when you QT-ed me. But that's where my confusion was coming from and part of where this conversation went off the rails.

2022-12-21 18:44:23 @pgcorus So why are you QTing me in that way? And suggesting that I am saying "turn away" from tech?

2022-12-21 18:43:26 @pgcorus Again, I wasn't saying "reject outright". I was saying: do the most basic harm mitigations.

2022-12-21 18:43:06 @pgcorus And your first contribution (at least in this thread) was this one: https://t.co/5JmVh62sm3

2022-12-21 18:42:32 @pgcorus Wait -- hold on: That wasn't you. That was someone else. But you seem to be jumping in on their side.

2022-12-21 18:41:37 @pgcorus "not reject outright": Rachael wasn't talking about rejecting LLMs. She was talking to chatbot devs, i.e. people who create public-facing technology, and giving the eminently sensible advice that unfiltered LLM output has no place there. https://t.co/shBehHxUvm

2022-12-21 18:40:24 @pgcorus I hope you can see that that just sounds like you cheerleading for the people who think that LLMs and garbage like Stable Diffusion should just be unleashed on the world. >

2022-12-21 18:39:32 @pgcorus So when you came out with this, I felt I had to speak up: https://t.co/7zhiYgu2wc

2022-12-21 18:38:50 @pgcorus I'm not looking for your trust. My main goal is pushing back on #AIhype, which I see doing damage to both the academic research domain I belong to and (more importantly) many, many public systems. >

2022-12-21 18:29:03 @pgcorus And I am STILL wondering what you think the connection is to your original complaint about Rachael's eminently sensible words of caution.

2022-12-21 18:28:27 @pgcorus And here you are falling into tech solutionism --- which is a frightening thing to hear from someone working in the space you are working in.

2022-12-21 18:24:08 @pgcorus And STILL no reply on this question.

2022-12-21 18:23:48 @pgcorus As soon as there is tech involved (i.e. the capacity to scale harm), "move fast" is a bad motto, no matter what you're moving fast towards.

2022-12-21 18:20:24 @pgcorus You are presenting as "move fast and break things" while also (confusingly) doing buzzword salad around "community-centered" etc.

2022-12-21 18:16:18 @pgcorus Still no reply to this question. You're using LMs to generate code and (in a kinda gross way talking about them as if they were "junior devs"). This is not related to Rachael's point that you jumped all over at the start of this.

2022-12-21 02:17:47 @raciolinguistic @freshair_zee I haven't been tracking alas, but maybe @rctatman has?

2022-12-21 00:21:42 @katzenclavier @histoftech For comfort TV, I've found The Good Place very re-watchable!

2022-12-21 00:07:33 @histoftech The Good Place? (I mean technically dead women are central but not I think how you mean.)

2022-12-20 22:53:52 AGI bros at an AGI company: *write a paper with extreme and ridiculous anthropomorphization of language models* NLP researchers: They can't possibly have meant that literally. How dare you critique their writing as if they did. me: ¯\_(ツ)_/¯

2022-12-20 14:53:55 Just so everyone is clear, the founder of https://t.co/XWbloQfhKR thinks it's "feckless" to do even the most basic harm mitigations with a technology that has widely been shown to be harmful. https://t.co/7zhiYgu2wc

2022-12-20 14:32:25 RT @JamesStubbs1979: Good morning @StretfordPaddck Do we use a different language to talk about black players and white players? Why were t…

2022-12-20 14:32:06 RT @rctatman: Hello friends! The last stream for the year is starting in just about half an hour. Come read some papers with me. :) https:…

2022-12-20 14:31:06 RT @emilymbender: Mystery AI Hype Theater 3000, Episode 6 - Stochastic Parrot Galactica where @alex and I have way too much fun taking ap…

2022-12-20 14:01:34 RT @emilymbender: Come for the snark, stay for the sociology of science!

2022-12-20 14:01:08 RT @emilymbender: I appreciate this write up of my work with @chirag_shah Why We Should Not Trust Chatbots As Sources of Information http…

2022-12-20 13:59:37 RT @emilymbender: Okay, I haven't read this paper from Anthropic yet, but on a quick skim, it's utterly absurd. They start off talking abou…

2022-12-20 06:11:18 RT @marylgray: Only 2 weeks left to apply for this fully paid 2-week workshop for early career technologists and computing researchers. Ple…

2022-12-20 05:45:31 Come for the snark, stay for the sociology of science! https://t.co/dJMx1aksMa

2022-12-20 05:32:50 Mystery AI Hype Theater 3000, Episode 6 - Stochastic Parrot Galactica where @alex and I have way too much fun taking apart the #AIHype around #Galactica https://t.co/p5VjFkEH4o

2022-12-20 02:58:18 RT @alexhanna: We've posted Mystery AI Hype Theater 3000, Episode 6! In this one, @emilymbender and I pick apart MetaAI's shortly-lived Gal…

2022-12-20 01:51:31 @scottniekum @timnitGebru "Behaves equivalently" --- only if you actually make the mistake of taking its words as something worth interpreting. They are not. They are not expressing goals. They are not expressing desires. The whole enterprise is a fantasy and a waste of time. /Emily out.

2022-12-20 01:26:50 @scottniekum @timnitGebru ... and it sounds like you're not 100% clear on it either.

2022-12-20 01:25:09 @scottniekum @timnitGebru As soon as you are talking about synthetic text machines, it becomes essential to distinguish between what the system is actually doing (outputting word forms) &

2022-12-20 01:06:40 @scottniekum @timnitGebru Language models aren't agents. They don't have goals. I'm not saying that there's no such thing as artificial agents. Just to be very clear. Language models are models of the distribution of words in text, and that's it.

2022-12-19 23:46:13 @scottniekum On the large LMs aren't intelligent, see: https://t.co/z1F7fEBCMn https://t.co/kR4ZA1k7uz

2022-12-19 23:45:23 @scottniekum I don't have time just now to lay out for you in a tweet thread why longtermist "xrisk" reasoning is nonsense, why large LMs aren't "intelligent" and don't have "desires", etc etc. But I assure you: those arguments have been made.

2022-12-19 23:12:51 @scottniekum It doesn't merit it, because it doesn't merit being taken that seriously. For one thing, it's not even a publication, just a pdf on their company website.

2022-12-19 23:04:06 @scottniekum These folks *do* work on the xrisk nonsense. They don't deserve (or probably even want) your "charitable" reading of their laughable paper.

2022-12-19 22:51:07 RT @vdignum: This! "It is urgent that we recognize that an overlay of apparent fluency does not, despite appearances, entail accuracy, inf…

2022-12-18 20:20:59 RT @_alialkhatib: if you all think elon's not gonna make downloading an archive of your data more difficult to try to keep you here, then…

2022-12-18 17:01:36 @pgcorus https://t.co/oAThpf7DLy

2022-12-18 17:00:58 @pgcorus This is not the same idea as saying chatbots could be therapists. Also: just because there's a problem in the world (lack of resources for mental healthcare) and just because ML can make something that looks like the solution doesn't mean it is the solution.

2022-12-18 16:54:58 @pgcorus That's some tech solutionism right there.

2022-12-18 16:54:20 @pgcorus Still never gonna be a good idea for therapy. Read your Weizenbaum. Talk to some actual psychologists. Stop promoting the hype.

2022-12-18 16:53:03 @pgcorus Ah, I see. You're defensive because you're building this stuff. But even still: you should be as appalled as I am about the coverage in the NYT that misleads the public about how the tech works and what it can do.

2022-12-18 16:52:03 @pgcorus I am talking about how the NYT piece I quoted is full of hype about ChatGPT in particular. You are arguing with me about some technology that apparently only exists in your head.

2022-12-18 16:48:23 @pgcorus But also: You're a computational linguist writing about ethics &

2022-12-18 16:47:51 @pgcorus Keeping in mind that not only is general web garbage going to skew anti-queer, the ways in which it is supposedly "cleaned" makes that worse, as documented in this paper: https://t.co/54vOJot3q1

2022-12-18 16:47:12 @pgcorus If you're attuned to the ethics and harms of ML, what gives you any reason to believe that synthetic BS generators trained on general web garbage would be safe &

2022-12-18 16:38:03 @pgcorus Then you are failing. I'm speaking from my expertise as a computational linguist who has studied &

2022-12-18 15:06:37 RT @tala201677: "However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the wr…

2022-12-18 14:56:28 @pgcorus Commenting on your own tweets, I see. Points for self-awareness, I guess. https://t.co/A4lnc6U3va

2022-12-18 14:08:09 RT @emilymbender: Just in case anyone is unclear, here is why ChatGPT (or anything like it) will not replace search engines. There is cer…

2022-12-18 14:07:37 RT @emilymbender: Finally got around to reading this one, and yeah, the NYT continues to be trash at covering AI: "the existence of a high…

2022-12-18 05:48:55 @TaliaRinger https://t.co/EYt2aegJ8v

2022-12-18 03:50:35 @pgcorus And clearly you don't know who I am. "Google defender." Hah.

2022-12-18 03:48:27 @pgcorus Thank you for sharing your professional opinion as ... checks notes ... someone who writes python and does philosophy of tech. Clearly qualified to determine if automatic bullshit generators would be a safe and effective approach to therapy.

2022-12-18 03:47:23 Just in case anyone is unclear, here is why ChatGPT (or anything like it) will not replace search engines. There is certainly room (and urgent need) to improve on Google's for-profit model, but this ain't it. https://t.co/FYymgEF0FG

2022-12-18 03:43:11 If you want to be informed about what's actually going on with so-called "AI", get your news elsewhere.

2022-12-18 03:42:46 "Assessing ChatGPT’s blind spots and figuring out how it might be misused for harmful purposes are, presumably, a big part of why OpenAI released the bot to the public for testing" "a chatbot that some people think could make Google obsolete" #AIhype >

2022-12-17 00:24:46 @TaliaRinger Vaccine side effects?

2022-12-16 23:36:56 RT @SashaMTL: After the dazzling success of our "AI researchers as moths/butterflies" thread, I'm psyched to announced that it is now a @hu…

2022-12-16 15:01:06 And a core problem in AI-infected-NLP (and other areas of AI) is that people seem to believe that fundamental unsoundness (LMs being designed to just make shit up) is something that will surely be fixed in exciting future work!! https://t.co/cbGdxJ6ynu

2022-12-16 14:59:23 From further down @percyliang 's thread, apparently it's not really "for" anything yet. All of that is "future work". This is the core problem with the "foundation models" conceptualization. They are impossible to evaluate. https://t.co/P3ZqpBZ8Mm >

2022-12-16 14:57:56 More tales from the front of #NLProc's evaluation crisis. What is PubMedGPT actually for and why are medical licensing exam questions a legitimate test of its functionality in that task? >

2022-12-16 14:44:20 RT @ruthstarkman: @emilymbender critique:"Hallmark card analogy is particularly apt: ChatGPT’s output is frequently anodyne." Spot on,…

2022-12-16 14:06:18 RT @emilymbender: Here's yet another installment in my series of annotated field guides to #AIhype. This time, the toxic spill was in the W…

2022-12-16 05:10:06 Here's yet another installment in my series of annotated field guides to #AIhype. This time, the toxic spill was in the Washington Post: https://t.co/nqAzzvsXbD

2022-12-15 20:00:14 @_katiesaurus_ :'(

2022-12-15 19:51:57 @_katiesaurus_ Yikes. No.

2022-12-15 19:44:08 @mrdrozdov a. It would be, if you had any way to actually trace the ideas. b. Yes.

2022-12-15 18:57:15 Lots of folks are QT-ing this thread saying: It's okay to use this tool because I know how to use it and I will evaluate the ideas before incorporating them into my paper. And yet, none of those folks are addressing this point: https://t.co/OwCfwH8bDZ

2022-12-15 17:05:28 RT @mmitchell_ai: Welcome to the list to my wonderful colleagues @SashaMTL and @IreneSolaiman ! So grateful to have you in my life.

2022-12-15 15:52:39 RT @timnitGebru: Effective altruists read a critique by one of the "poor Africans" they can't stop talking about, on the backs of whom they…

2022-12-15 15:09:01 Also, wow does Forbes know how to pick 'em. And to display one's affiliation with the likes of Palantir ... that's definitely a choice. https://t.co/euNdh6qtGg

2022-12-15 15:06:53 My dude: Connecting one's work to what has gone before is part of science (and all scholarship). Anyone can do this! It's not gatekeeping to say: yes, come do the science, but no, your so-called "AI" is not a scientist. https://t.co/nFbmNl3NwA

2022-12-15 14:09:28 RT @emilymbender: I got to have a really interesting conversation with @KUOW 's @libdenk yesterday on @SoundsideKUOW about #ChatGPT. The te…

2022-12-15 14:08:07 RT @emilymbender: We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers.…

2022-12-15 01:18:01 @complingy @aryaman2020 I'd be very curious about this too, but I am quite skeptical. Also, what is "weakly" in practice?

2022-12-14 22:42:23 @AmericanGwyn Here's how we define "Stochastic Parrots" in the paper that introduced the phrase (if that's where you're seeing it): The word that is cut off (b/c it's on the preceding page) is "contrary". https://t.co/0hpxzwthYX

2022-12-14 22:10:31 @snarkbat If you aren't already following @JewWhoHasItAll I strongly recommend that account as a balm in situations like this. Also, what an enormous drag.

2022-12-14 21:45:56 @KUOW @libdenk @SoundsideKUOW "Bender cautions that even as the prowess of the technology can seem immense at first, there are limitations with how much the program actually understands what it puts out." Not really what I said. "Limitations" is still overselling it. It. Does. Not. Understand.

2022-12-14 21:35:53 I got to have a really interesting conversation with @KUOW 's @libdenk yesterday on @SoundsideKUOW about #ChatGPT. The text of this article downplays my critique -- please listen to the recording for the full story. https://t.co/0SEWDOnOky

2022-12-14 18:54:01 @tdietterich @Azure Demand = pointless ChatGPT queries. Powered down = that electricity can go somewhere else on the grid.

2022-12-14 18:46:31 @tdietterich @Azure Actually not relevant, because if that carbon neutral energy weren't being used up for pointless ChatGPT queries, it could be being used for something else.

2022-12-14 16:41:50 @morenorse https://t.co/MgxgwQwBJC

2022-12-14 16:00:45 RT @neilturkewitz: Excellent by @emilymbender I especially appreciated “Your job there is to show how your work is building on what has…

2022-12-14 15:52:10 p.s.: How did I forget to mention 7- As a second bare minimum baseline, why would you use a trained model with no transparency into its training data? https://t.co/BvmZI8Gj9F

2022-12-14 15:00:26 RT @lilianedwards: This is an incredibly sensible thread about how GPT3 can't write your academic article ( or please note, your dissertati…

2022-12-14 14:51:03 6- As a bare minimum baseline, why would you use a tool that has not been reliably evaluated for the purpose you intend to use it for (or for any related purpose, for that matter)? /fin

2022-12-14 14:49:43 5- I'm curious what the energy costs are for this. Altman says the compute behind ChatGPT queries is "eye-watering". If you're using this as a glorified thesaurus, maybe just use an actual thesaurus? https://t.co/MenJK0tPQS >

2022-12-14 14:49:23 4- Just stop it with calling LMs "co-authors" etc. Just as with testifying before congress, scientific authorship is something that can only be done by someone who can stand by their words (see: Vancouver convention). https://t.co/TyvuX65ft0 >

2022-12-14 14:49:01 3- It breaks the web of citations: If ChatGPT comes up with something that you wouldn't have thought of but you recognize as a good idea ... and it came from someone else's writing in ChatGPT's training data, how are you going to trace that &

2022-12-14 14:48:47 2- ChatGPT etc are designed to create confident sounding text. If you think you'll throw in some ideas and then evaluate what comes out, are you really in a position to do that evaluation? If it sounds good, are you just gonna go with it? Minutes before the submission deadline?>

2022-12-14 14:47:55 The result is a short summary for others to read that you the author vouch for as accurate. In general, the practice of writing these sections in #NLProc (and I'm guessing CS generally) is pretty terrible. But off-loading this to text synthesizers is to make it worse. >

2022-12-14 14:47:43 1- The writing is part of the doing of science. Yes, even the related work section. I tell my students: Your job there is to show how your work is building on what has gone before. This requires understanding what has gone before and reasoning about the difference. >

2022-12-14 14:47:01 We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers. So, I guess we need a thread of why this is a bad idea: >

2022-12-14 14:46:42 RT @timnitGebru: Read this by @KaluluAnthony of https://t.co/bf2uXxretK. "EA is even worse than traditional philanthropy in the way it ex…

2022-12-14 14:32:18 I'm guessing that both numbers are very small, but I'm interested in the difference &

2022-12-14 14:06:03 RT @emilymbender: "we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer ne…

2022-12-14 14:05:56 RT @emilymbender: Who's ready for Episode 7 of Mystery AI Hype Theater 3000? Tune in live tmr as @trochee joins me and @alexhanna and will…

2022-12-14 14:05:45 RT @emilymbender: Come work with us at the University of Washington! Assistant Professor of Humanities Data Science https://t.co/UlALX7Fj

2022-12-14 13:11:44 RT @dmonett: What a quote! Nor thinking that "playing" with it will give us more insight into the technology that lies behind nor into the…

2022-12-14 01:53:42 Come work with us at the University of Washington! Assistant Professor of Humanities Data Science https://t.co/UlALX7FjyV

2022-12-14 01:10:27 Who's ready for Episode 7 of Mystery AI Hype Theater 3000? Tune in live tmr as @trochee joins me and @alexhanna and will almost certainly use the phrase "Gish Gallop". 9:30-10:30am Pacific, Dec 14, 2022 https://t.co/MTvIHSvsiW #MAIHT3k #MathyMath #AIhype #nlproc

2022-12-13 23:16:26 RT @emilymbender: Narrator voice: LMs have no access to "truth", or any kind of "information" beyond information about the distribution of…

2022-12-13 19:50:26 "we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world." -- me &

2022-12-13 19:44:35 RT @IAI_TV: “It is urgent that we recognize that an overlay of apparent fluency does not entail accuracy, informational value, or trustwort…

2022-12-13 18:22:00 It's particularly galling that this is on a piece that *I co-authored* for a popular outlet. And I'd like to promote it. But I refuse until this error is fixed. (And it's past the end of the workday over there, so we'll see.)

2022-12-13 18:18:05 Like, a name is a name right? Just because you might know Rebecca who goes by Becky doesn't mean you can assume that all Rebeccas do. And just because universities in the UK allow those two alternate forms doesn't mean it works that way in the rest of the world!

2022-12-13 18:17:05 I know systems differ, but I really really really wish folks in other countries could learn that the University of Washington and Washington University are NOT the same institution. That, or look at my web page before publishing something about me and just copy what's there.

2022-12-13 15:36:45 RT @mixedlinguist: It’s not “Gen Z slang” it’s freaking old ass AAE that young white people just got put on to. Can we at least start recog…

2022-12-13 14:03:49 I'm thinking @JewWhoHasItAll might help provide some insight into this curious cultural phenomenon.

2022-12-13 14:02:36 This is such a bad idea on so many levels, but I'd like to add one more: "We'll pay the ticket, even if you lose" doesn't account for impacts on insurance rates or any other system that speeding tickets feed into. https://t.co/wnzqWJ7gjy

2022-12-13 13:51:00 Another: https://t.co/jMqvqig24r "Christmas Issue"? Right. https://t.co/sc8TSAvof4

2022-12-12 20:08:23 @VeredShwartz @mdredze If they end up quoting only men though even after they talked to you that's definitely worth calling out.

2022-12-12 18:37:16 @asayeed I don't buy it. This is such a hackneyed trope at this point. I think journalists can and should do better.

2022-12-12 18:27:17 @asayeed I would prefer that more readers *did* perceive it that way. Because it is a waste of time and it harms the credibility of the journalist. In general, we need higher AI literacy in the public and when journalists lean into #AIhype they are working in the opposite direction.

2022-12-12 18:20:48 RT @rapella: “And also: This is a news source willing to print synthetic BS.” #AIhype #NLProc #MathyMath #Journalism #ethics https://t…

2022-12-12 18:14:39 RT @emilymbender: So when I am told after the fact: "Those last sentences? They were written by a machine!" My reaction isn't "Wow cool" bu…

2022-12-12 18:07:27 @asayeed This isn't about not appreciating the coolness of tech. This is about standing against #AIhype and against practices that lean into fooling people. There are plenty of other ways to report on this without doing this (incredibly boring and furthermore overdone) trick.

2022-12-12 17:34:15 RT @CriticalAI: Totally agree. My own feeling is, "So what - are you trying to undercut the whole point of the article." It's such a cliche…

2022-12-12 16:56:01 So when I am told after the fact: "Those last sentences? They were written by a machine!" My reaction isn't "Wow cool" but "You just wasted my time." And also: This is a news source that is willing to print synthetic BS. #AIhype #NLProc #MathyMath #Journalism #ethics

2022-12-12 16:55:26 When I give my time and attention to the printed word, it is to learn something about how someone else sees the world or something about what they have learned and want to share with the world. >

2022-12-12 16:54:56 Yet another news story on #ChatGPT (which I was a source for) that starts with text generated by ChatGPT. I find that move insulting, in fact. https://t.co/9sJb9qm2uj >

2022-12-12 15:19:25 @luketrailrunner Thanks, Chris! Specifically on trying to use these things for search, see also: https://t.co/rkDjc4kDxj

2022-12-11 14:45:31 RT @a_derfelGazette: 1) Hi, everyone. I wanted to share with you the scary recent experience I had with my original Twitter account, @Aaron…

2022-12-11 14:16:31 RT @emilymbender: Mystery AI Hype Theater 3000, Episode 4 -- Is AI Art Actually "Art"? With @shengokai @WITWhat and @negar_rz hosted by @al…

2022-12-11 14:16:28 RT @emilymbender: Coming soon: Recordings of Episodes 5 (#Galactica) and 6 (xrisk essay contest) Episode 7, with @trochee @alexhanna a…

2022-12-11 06:29:27 RT @mmitchell_ai: "...without naming and recognizing the engineering choices that contribute to the outcomes of these models, it’s almost i…

2022-12-11 05:36:07 @thedansimonson @TaliaRinger Alas, so many of them do it for free --- seemingly believing they are providing some valuable pro-bono service.

2022-12-11 04:54:10 @thbrdy @TaliaRinger If you'd like to understand my stance, here are many things I wrote about #AIhype: https://t.co/uKA4tuv4jF

2022-12-11 03:45:31 RT @FAccTConference: Submit your excellent work to #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan 3…

2022-12-11 03:06:37 @TaliaRinger I can see that. OTOH, I worry that OpenAI is somehow putting on a mantle of "ethical development" that is actually completely undeserved.

2022-12-11 03:03:21 @TaliaRinger I mean ... "a preview of progress" also seems like a wild overclaim to me. Progress towards what? And "lots of work to do on truthfulness": it's designed to make shit up. That doesn't seem like a good starting point for truthfulness.

2022-12-10 21:18:49 RT @glichfield: Also, some people have talked about chatGPT being a search killer. To me the bigger concern is that it becomes a search ~po…

2022-12-10 14:58:02 @shengokai It was so amazing having you all on!!

2022-12-10 14:57:51 RT @shengokai: This was actually one of the most fun and insightful conversations I've had all semester and I was really thankful for the o…

2022-12-10 14:55:27 @athundt @shengokai @WITWhat @negar_rz @alexhanna Thanks for pointing that out, Andrew. We'll look into it.

2022-12-10 14:47:19 @jasonbaldridge It's a huge loss for the world that the distribution of power means that those who could be brilliantly envisioning uses of this kind of technology that might actually benefit their communities instead find themselves having to spend so much time cleaning up others' messes.

2022-12-10 14:45:39 @jasonbaldridge Likewise, those of us out here pointing out the ways in which these supposedly general models are oppression-reproducing machines wouldn't need to be doing that if systems weren't being developed and foisted on the world. https://t.co/2Z06E63T6F >

2022-12-10 14:43:51 @jasonbaldridge Also, I gotta add: "sadly acrimonious twitter discussions" sounds very both-sides-y to me. Those of us out here pushing back on ridiculous claims of LLMs being "intelligent" etc etc wouldn't need to, if the claims weren't there. >

2022-12-10 14:40:13 @jasonbaldridge I wonder, though, if you have any examples of where they are used sensibly --- and if any of those actually involve using them generatively, rather than (as in previous uses of LMs) in choosing between outputs that come from some constrained, task-specific model?

2022-12-10 14:39:13 @jasonbaldridge I totally agree that safety is in the details of application-specific development. And thus a huge part of the problem with LLMs is that they are being put forward as "general" or "foundation" models that can be used for any task that can take place in language. >

2022-12-10 14:07:03 RT @SashaMTL: We need to stop conflating open/gated access and opensource. ChatGPT is *not* open source -- we don't know what model is und…

2022-12-10 14:02:19 RT @emilymbender: The bitter lesson is how much of the field is willing to accept a system that produces form that *looks like* a reliable…

2022-12-09 23:53:27 @trochee @alexhanna All episodes recordings (as they are available) can be found here: https://t.co/6UCGlE6mx3 #MAIHT3k #AIhype

2022-12-09 23:50:17 Coming soon: Recordings of Episodes 5 (#Galactica) and 6 (xrisk essay contest). Episode 7, with @trochee @alexhanna and me. Join us live on Wednesday Dec 14, 9:30-10:30am Pacific Time https://t.co/MTvIHSvsiW #AIhype #MAIHT3k #MathyMath https://t.co/ZZYtdZYKDn

2022-12-09 23:48:25 Mystery AI Hype Theater 3000, Episode 4 -- Is AI Art Actually "Art"? With @shengokai @WITWhat and @negar_rz hosted by @alexhanna and me :) https://t.co/UgEVwIAgvX #AIHype #MathyMath

2022-12-09 23:31:06 RT @annargrs: Anybody attending @emnlpmeeting #EMNLP2022 virtually? Could you share your experience? E.g. - does Underline still feel slow?…

2022-12-09 22:28:17 RT @tveastman: Because it's Neal Stephenson, you have to swap out some words like "reticulum" for "internet" and "crap" for "spam". But he…

2022-12-09 17:43:44 @david_darmofal @owasow Jinx.

2022-12-09 17:43:12 @owasow ... with signs encouraging food fights.

2022-12-09 17:36:40 RT @cmhenry_: @emilymbender https://t.co/8D1O4RFoL9

2022-12-09 17:30:04 OP: We did it without paying attention to any of the previous science! This is very "Wile E Coyote" before he realizes he's standing on nothing, and I want to make a meme like that, but can't find the "before he realizes" images. Oh well.

2022-12-09 17:21:59 RT @rajiinio: Me &

2022-12-09 17:21:42 RT @Abebab: "We critique because we care. If these companies can't release products meeting expectations of those most likely to be harmed…

2022-12-09 16:38:06 @Abebab @rajiinio "But without naming and recognizing the engineering choices that contribute to the outcomes of these models, it becomes almost impossible to acknowledge the related responsibilities." https://t.co/wrFy2Q4VDY

2022-12-09 16:37:30 Essential reading from @Abebab and @rajiinio "Model builders and tech evangelists alike attribute impressive and seemingly flawless output to a mythically autonomous model, a technological marvel." https://t.co/wrFy2Q5ttw

2022-12-09 16:35:36 RT @vukosi: Abeba Birhane and Deborah Raji >

2022-12-09 14:00:51 @krustelkram I hadn't noticed the EA connection. That totally tracks --- "benefits" indeed.

2022-12-09 14:00:10 @Thom_Wolf https://t.co/uGdFohNlUS

2022-12-09 13:59:17 The bitter lesson is how much of the field is willing to accept a system that produces form that *looks like* a reliable solution to task as actually doing the task in a way that is interesting and/or reliable. Are we doing science or just standing in awe of scale? https://t.co/ChNQqNKtM0

2022-12-09 01:11:52 @spacemanidol @amahabal No, it's not about the name. It's about the way the systems are built and what they are designed to do.

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP sought fit to retweet, btw.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 People are using it does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is your demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-11-15 17:30:12 Oh and of course: the dude who did this, who fired the entire human rights team, who fired the excellent META team, &

2022-11-15 15:31:19 RT @emilymbender: So the dude who considers 2FA bloat that can just be switched off also runs the car company famous for updating vehicles…

2022-11-15 01:13:53 RT @TaliaRinger: I've twice seen people almost give up on applying from not being able to find a third letter writer, and for real applying…

2022-11-14 20:47:58 So the dude who considers 2FA bloat that can just be switched off also runs the car company famous for updating vehicles 'over the air'? That's reassuring...

2022-11-14 18:39:59 I'm really looking forward to this! I'm excited to be talking to this audience and really enthusiastic about the format that @mmitchell_ai and I have planned. https://t.co/0QpRxBNDXh

2022-11-14 18:35:46 RT @ruthstarkman: Here's the registration for @emilymbender and @mmitchell_ai talk @Stanford Dec 2 "Collective Action for Ethical Tech…

2022-11-14 00:20:43 RT @DrMonicaCox: https://t.co/8weOTUycov

2022-11-13 06:02:24 RT @timnitGebru: I’d love to work on a list here to expose just how much influence this cult has in “AI safety” and how complicit the “elit…

2022-11-13 00:02:57 RT @sjjphd: If you’ve organized, taught, built relationships, made funny jokes, shared original content or intellectual property on this ap…

2022-11-11 16:40:47 RT @ChanceyFleet: For someone like me, with disparate interests and unruly curiosity, Twitter has been a place to learn from the brightest…

2022-11-11 15:57:36 Does anyone know what happens to @threadreaderapp and esp the html pages it has generated if Twitter goes down? Basically, I'm curious if unroll requests are a good way to 'backup' threads I'd like to keep visible on the web, like those linked here: https://t.co/uKA4tuv4jF

2022-11-11 15:54:59 @threadreaderapp unroll

2022-11-11 15:53:51 @threadreaderapp unroll

2022-11-11 15:52:44 @threadreaderapp unroll

2022-11-11 04:34:01 RT @ihearthestia: Go into your Twitter settings and disconnect your google account, all the apps that are connected to Twitter, disconnect…

2022-11-11 03:44:55 @TaliaRinger I was just gonna say, you might be interested in https://t.co/IWoXj0JXXJ. Off to follow you now!

2022-11-11 03:44:25 @TaliaRinger Yes! And you can follow hashtags to discover people with shared interests. The choice of server matters some, but it's not everything. It gives you your local neighborhood + your view onto the rest of it (determined, I gather, by who you and others on your instance follow).

2022-11-10 17:37:28 (It's obvious and uncomfortable every time this happens to me. I never introduce myself as "an American linguist" and the Howard and Francis Nostrand Professor title is a time-bound thing which now has moved along to one of my colleagues.)

2022-11-10 17:36:22 PSA to conference organizers: Please don't use a Wikipedia page to draft a bio for your invited speakers. Either ask them or go to their own web page to see how they present themselves.

2022-11-10 17:05:02 RT @mattbc: Increasingly concerned the servers are going to go down and we simply won’t be able to ever open this app. If you haven’t downl…

2022-11-10 14:37:32 RT @emilymbender: It's really nice to watch my network grow over on Mastodon and I'd like to encourage more people to check it out. It seem…

2022-11-10 14:37:07 RT @ruthstarkman: Envisioning Paths: Individual and Collective Action for Technology Development. @emilymbender and @mmitchell_ai …

2022-11-10 12:49:27 RT @nsaphra: I'm now on the academic job market! I work on understanding and improving training for NLP models, with a focus on studying ho…

2022-11-09 23:48:35 RT @IBJIYONGI: Literally @TwitterSupport is going to get people killed

2022-11-09 23:32:16 Take the time to learn the differences in affordances &

2022-11-09 23:31:15 It's really nice to watch my network grow over on Mastodon and I'd like to encourage more people to check it out. It seems it's best to think of it as a Twitter alternative, rather than a Twitter replacement.>

2022-11-09 22:42:52 RT @DAIRInstitute: For our 1 year anniversary virtual events, we'll have talks, panels and breakout discussions with attendees. Sign up her…

2022-11-09 22:14:36 RT @timnitGebru: They're looking at this announcement from Future Fund which was like tell us why we're wrong about superintelligence and w…

2022-11-09 21:15:25 @timnitGebru @mmitchell_ai @mcmillan_majora And it remains a crying shame that three of our co-authors were prevented by their employer from getting recognition for their work on this paper --- even as the same actions by said employer put such a spotlight on it.

2022-11-09 21:14:39 @timnitGebru @mmitchell_ai @mcmillan_majora That said, many of these citations are spurious: People saying "yeah yeah ethical issues" or talking about environmental impact, when they really should be citing the people we cite. (Our main addition there was to bring in the env racism angle.)>

2022-11-09 21:14:20 Google Scholar certainly isn't a direct reflection of any reality, but I am still tickled that Stochastic Parrots is at 999 citations there. cc @timnitGebru @mmitchell_ai @mcmillan_majora >

2022-11-09 20:50:54 @faineg Tesla's "full self driving" mode would be a contender tho.

2022-11-09 18:56:26 RT @_alialkhatib: my hunch is that in 3 months it'll be harder to decipher someone's checkmark than it is to understand how mastodon works.

2022-11-09 18:03:49 RT @timnitGebru: I'm live tooting this on Mastodon :-) Will post here after.

2022-11-09 16:47:00 RT @kenarchersf: Starts in 45 minutes. https://t.co/mWi7drwSnU

2022-11-09 16:05:55 RT @DAIRInstitute: This will be in 1.5 hours. https://t.co/HZEA4hF3yW

2022-11-09 03:51:55 @Abebab ffs

2022-11-08 22:53:45 #Enough

2022-11-08 21:48:55 RT @DAIRInstitute: Join us tomorrow (Wednesday). https://t.co/HZEA4hXcN4

2022-11-08 21:42:28 RT @emilymbender: Episode 5 is coming this Wednesday! Join me and @alexhanna for more Mystery AI Hype Theater 3000 on Wednesday 11/9, 9:30a…

2022-11-08 19:19:44 RT @emilymbender: My voting experience yesterday (because runner, because Seattle, because WA is #VoteByMail) USians: If you haven't al…

2022-11-08 18:59:29 (I asked this first on Mastodon, but thought it might be valuable to put the query out here, too.)

2022-11-08 18:59:04 Is anyone tracking the vocabulary springing up around #TwitterMigration? I've seen "twefugees" and "birdsite expats" at least. I think it could be quite interesting how the metaphors relating to migrants (w/all their inherent connection to colonialism &

2022-11-08 18:33:18 @Abebab Totally! The thread below is about a slightly different angle on ML for mental health, but I think a lot of the same criticisms apply: https://t.co/1gX8URBsz6

2022-11-08 15:00:27 My voting experience yesterday (because runner, because Seattle, because WA is #VoteByMail) USians: If you haven't already done so, today's the day! #VOTE #VOTE #VOTE https://t.co/ZnikG1qTnp

2022-11-08 05:57:38 @CosNeanderthal There are PLENTY of good discussions to have about interdisciplinarity. For ex: What are productive means of structuring collaborations to incorporate domain expertise? But these START from acknowledging that domain expertise is necessary.

2022-11-08 05:55:35 In sum: CS is over-funded. Not only is domain expertise necessary for defining legitimate tasks, we need to stop setting up the financial incentives such that the goals of everything are about advancing the state of knowledge of CS/ML/"AI".

2022-11-08 05:53:48 And: Yes, yes it is. Suggesting that that's a topic for debate sounds like they realize that just maybe ML can't go around claiming to have "solved" made up problems forever but aren't ready to face that reality. >

2022-11-08 05:50:40 Just turned down an invitation to be on a panel with topics including "Is domain expertise, like linguistics, necessary for the design of #NLProc benchmarks?" My dude, did you really just invite me to be on a panel to debate whether I should be on the panel?

2022-11-08 00:28:03 @rajiinio @SashaMTL (Not saying that I think you're suggesting lowest common denominator, but that's one way to read "things are different in different countries".)

2022-11-08 00:27:41 @rajiinio @SashaMTL And I think that NeurIPS can totally set its own standards, through community process, and they don't have to be (and shouldn't be) lowest common denominator.

2022-11-08 00:26:39 @rajiinio @SashaMTL I don't doubt that you see ethical considerations as consonant with and as important as technical ones. But the paragraph reads like you're trying to appease those who don't. I wish it didn't have to be that way.>

2022-11-07 20:44:33 @EricFos Not known to me, but the webpage has good transparency about what they are up to, including instructions for delinking your twitter account from the service afterwards.

2022-11-07 20:38:15 RT @emilymbender: It is beyond time for the ML community to drop this idea that ethical considerations are somehow at odds with "technical…

2022-11-07 20:18:25 I think I can read that blog post as the ethics committee chairs trying really hard to bring other people on board, and throwing the recalcitrant techbros a bone. But it shouldn't have to be this way. The "But my AI progress!!!" types need to just get over themselves.

2022-11-07 20:17:30 It is beyond time for the ML community to drop this idea that ethical considerations are somehow at odds with "technical merit" and that ethical review is somehow hampering "scientific progress". (As. If.)>

2022-11-07 20:15:56 I applaud the #NeurIPS2022 ethics committee for their transparency and thoughtfulness in this blog post: https://t.co/sUzuD5SsWt However: >

2022-11-07 17:37:08 Q for English speakers. Without looking it up, which of these *sounds* bigger? #Linguistics

2022-11-07 16:31:14 RT @rajiinio: Hard to believe but the @NeurIPSConf Ethics Review process is over - and has completed its third year! In a blog post, with…

2022-11-07 16:02:00 RT @schock: Is there a Mastodon Foundation yet? Since it seems like there is finally real possibility for significant migration, we are goi…

2022-11-07 16:01:39 Thinking about checking out Mastodon, but wondering if any of the folks you follow here are there already? @debirdify doesn't require you to have a Mastodon account first! Check it out here: https://t.co/J25Rg0qEli #TwitterMigration

2022-11-07 14:57:40 Episode 5 is coming this Wednesday! Join me and @alexhanna for more Mystery AI Hype Theater 3000 on Wednesday 11/9, 9:30am Pacific. https://t.co/VF7TD6tw5c https://t.co/2VWNZtZXmt

2022-11-07 14:56:27 Mystery AI Hype Theater 3000 Ep. 4 - Is AI Art Actually Art? where @alexhanna and I bring on @shengokai @negar_rz and @WITWhat and the level of discourse is instantly way more sophisticated than Eps 1-3. https://t.co/FIxd1gPsnZ

2022-11-07 14:48:58 @ChanceyFleet I put mine in my display name and wondered if that would be annoying for screen readers. I see that the way you did yours is nice and short and not redundant, so I'll do the same!

2022-11-07 14:48:19 RT @natematias: Folks on Twitter are experiencing what STS scholars call "infrastructure inversion" - a jolt of recognition that invisible…

2022-11-07 05:44:44 @LkjonesSOC Thank you!

2022-11-07 02:06:24 @SashaMTL @huggingface

2022-11-07 01:42:07 @SashaMTL @huggingface

2022-11-07 00:24:05 Again I see people describing me as an "AI" person. I am not. People who are might (or might not) have something to learn from what I have to say, but that's not the same thing as me being an AI person. https://t.co/0CJjhf7uhM

2022-11-06 21:18:39 RT @shengokai: “A citational order is a social order, and like all social orders it is defended as a moral order. To refuse to cite the rig…

2022-11-06 20:37:48 RT @shengokai: In case it was not clear from my tweets over the last two days “do not cede the forum to bigots” is not just instructions fo…

2022-11-06 15:48:56 @HadasKotek Connected! First as a list of posts on your profile and second through cross-linking if you put in the hyperlinks.

2022-11-06 15:42:01 @HadasKotek https://t.co/kTBw3sywiC is an easy interface for blog posts... and then you can just link to that.

2022-11-06 14:44:00 @schock @ramakmolavi I think a key conceptual sticking point is that the separate servers aren't separate social networks---or at least not isolated networks, because they are federated.

2022-11-06 14:23:11 @aeryn_thrace I'm not leaving Twitter (yet). There are so many people I am learning from #onhere But I find it's worthwhile for me to put the time into also creating another space in case this becomes unusable (or the site just stops working all together).

2022-11-06 14:03:16 RT @gerardkcohen: I am officially no longer the Engineering Manager for the Accessibility Experience Team at Twitter. I have words.

2022-11-06 03:16:41 RT @RJDeal: @alwaystheself @andizeisler I’m putting people I follow on a list so I can use my list as my feed. I’m just adding people to th…

2022-11-06 01:34:52 RT @rachelmetz: Some great thoughts from @emilymbender, who I was thrilled to find on mastodon along with a lot of academic (and specifical…

2022-11-06 00:40:04 @joshisanonymous @Nanjala1 Well, not fully automatically. I had to do a migration step to make that happen. But it was pretty easy!

2022-11-06 00:35:24 p.s. I'm not leaving the birdsite yet. Just working to also build up an alternative, in case this implodes like it looks like it's going to.

2022-11-06 00:30:52 Anyway, I hope to see many of you there! I'm @emilymbender@dair-community.social

2022-11-06 00:30:27 When I first joined Twitter, I didn't post for two years, and mostly used it to follow conferences I wasn't at. Over in the #fediverse, I'm going slow, hoping to learn the ropes before doing too much, but also trying to build community, so tooting some.>

2022-11-06 00:28:27 This, along with the lack of full-text search (only hashtags are searchable) appear to be design choices meant to dampen virality and nudge the interactions towards contentful exchanges rather than dunk-fests.>

2022-11-06 00:27:28 There aren't QTs, though you can get the URL for a post and paste into a post. But this isn't the same as a QT not least because you can't get from a post to other posts that link to it. >

2022-11-06 00:26:09 @Nanjala1 Yes -- info a bit lower in the thread. Also, you can definitely follow people on other servers, so that's fine too.

2022-11-06 00:25:07 And then, it's back to learning mode. Things are a little different in the #fediverse, not least because of some design decisions in the software. It's wonderfully easy to create "content warnings" so you can, for example, post a #wordle result but require a click-through.>

2022-11-06 00:24:02 As for things being overloaded, the first thing is patience. But also: it's worth choosing a smaller server to join, rather than the ginormous ones. And also: you can pretty easily change servers. I did this to redirect my followers and it worked like a charm:https://t.co/1CLtpi7eXW

2022-11-06 00:22:03 Honestly, it's kind of nice to start fresh and have fewer people I'm following. And it's *wonderful* to once again be in a space with a reverse-chron timeline that I can just scroll back until I hit stuff that I've seen and know that I'm caught up :)>

2022-11-06 00:21:14 For things being too quiet---the trick is to find people to follow :) A good starting point is this service, which (heuristically) combs through the people you follow on Twitter, and finds those that have announced mastodon (or other fediverse) accounts:https://t.co/J25Rg0qEli

2022-11-06 00:18:02 At first it was quiet (not following enough people), and the server I was on (https://t.co/7lqqF6ZxN8) got slammed with new accounts and it was all too slow to be usable. But both of those things are fixable!>

2022-11-06 00:17:17 A few thoughts about #TwitterMigration : I've had a Mastodon account (now at @emilymbender@dair-community.social) for a couple of weeks. It takes some effort to get started, but I think it's worth it!>

2022-11-06 00:16:38 @Lasha1608 I see people using the #nlproc hashtag over there, so that would be a way to find some! (And I'm hoping we'll just use #nlp and make it ours...)

2022-11-05 22:59:53 @marc_schulder @AngloPeranakan Yes come join us there! You don't have to delete your Twitter account to do so....I'm @emilymbender@dair-community.social

2022-11-05 05:08:36 RT @haydenfield: Members of Twitter's ethical AI team learned they had been laid off early this morning, according to tweets by those affec…

2022-11-05 05:06:58 RT @alexhanna: A word about the META team build by @ruchowdh @quicola and others -- This was probably one of the last teams at a big tech…

2022-11-04 21:48:55 RT @asayeed: fin de twiècle

2022-11-04 18:57:07 @susansternberg @DAIRInstitute That might be about your browser client. That's what mastodon handles look like...

2022-11-04 16:38:45 @prem_k That should really depend on the account you are following. I don't believe I have mine set up that way! (Though I did just switch instances, so maybe that did something?)

2022-11-04 16:34:41 @heartsalve Btw, I've collected some of my responses to #AIhype here:https://t.co/uKA4tuv4jF

2022-11-04 15:56:17 Please find me at @emilymbender@dair-community.social

2022-11-04 15:32:33 #TwitterMigration: Please find me at @emilymbender@dair-community.social

2022-11-04 15:09:20 @heartsalve So far, my Mastodon experience is AGI bro free, which is lovely.

2022-11-04 15:08:45 @complingy @yuvalpi NPI to the rescue, indeed :)

2022-11-04 14:48:35 I would be really surprised if this site retains the value it had, but I'm not deleting my account yet. At the same time, I'm also starting to use Mastodon, and happy to see community starting to form over there.

2022-11-04 14:47:35 It's devastating to watch on Twitter as Twitter is getting gutted. I wish strength to all of those going through this.

2022-11-04 05:09:40 RT @JortsTheCat: Does anyone know any state or local representative from CA? I have a quick question https://t.co/jgNAyieo1g

2022-11-03 16:33:10 @bobirakova @LangMaverick @LeonDerczynski I know much less about (2), but it does seem very useful to have different lenses in use when creating taxonomies/typologies.

2022-11-03 16:32:21 @bobirakova @LangMaverick I believe @LeonDerczynski is also thinking about taxonomy/typology of harms.>

2022-11-03 16:32:02 @bobirakova @LangMaverick There's another one, more extensive and maybe closer to concerns of the law, here: Lefeuvre-Halftermeyer A, Govaere V, Antoine J, Allegre W, Pouplin S, et al. 2016. Typologie des risques pour une analyse éthique de l'impact des technologies du TAL. Rev. TAL 57(2):47–71. https://t.co/cZ4ZvmQBxI

2022-11-03 16:30:16 @bobirakova For (1), there are definitely attempts, including one that I did in this paper (w/@LangMaverick ):https://t.co/avrAZKv2dk>

2022-11-03 04:21:20 RT @complingy: #CompLing alert: For anyone applying to North American grad programs, here are 35 Linguistics departments with #NLProc(-adja…

2022-11-02 22:46:10 RT @lopalasi: #RiskRecidivism tools like #COMPAS or #RisCanvi are not “biased”. They automatize *discrimination*. And they do that not beca…

2022-11-02 18:56:31 RT @alexhanna: You can catch the recording of our stream with @shengokai @negar_rz and @WITWhat about AI Art over at YouTube!And mark you…

2022-11-02 16:03:22 @jmhenner My son who is a junior at UCLA consistently refers to his classmates (and implicitly himself) as "kids" and each time I have to resist learning that pattern...

2022-11-02 15:54:10 @LeonDerczynski @encoffeedrinker That is beyond the pale. Also, I am worried for the OP --- at minimum it looks like they aren't receiving appropriate mentoring.

2022-11-01 22:01:25 @DavidSKrueger I don't think you've understood my remark. How is it a race to the bottom if instead we refuse to build it and/or regulate against its being built?

2022-11-01 21:49:41 @IEthics My best idea is to call it out when I see it, as I did here.

2022-11-01 21:38:46 @alexhanna IKR?! @timnitGebru and @adrienneandgp were *so* polite! I don't think I could have held back...

2022-11-01 20:57:35 RT @DAIRInstitute: If you missed DAIR fellow @adrienneandgp and founder @timnitGebru discussing their Noema Magazine article co-written wit…

2022-11-01 20:57:21 This was so cool! (Though wow the mansplaining was strong with the callers, especially the first two.) https://t.co/ud8SeUtp9d

2022-11-01 20:19:28 @IEthics That plays again into this idea that "AI" systems are somehow human-like.

2022-11-01 20:19:06 @IEthics Lack of interpretability of these models is a huge problem -- that's not what I'm objecting to here. What I'm objecting to is the attempt to explain or illustrate the lack of interpretability in terms of ways in which humans can't always explain our own preferences. >

2022-11-01 20:17:55 @jeffclune Glad to hear it! And yeah, I wondered if the quote was somehow out of context.

2022-11-01 15:31:29 RT @emilymbender: I also wanted to push back on this quote from @jeffclune. There is always the option of not building, not buying, not dep…

2022-11-01 15:31:23 RT @chloexiang: “As with any other tools, we should be asking: How well do they work? How suited are they to the task at hand? Who are they…

2022-11-01 15:31:12 I also wanted to push back on this quote from @jeffclune. There is always the option of not building, not buying, not deploying the thing. (Though once it's built, bought &

2022-11-01 15:28:07 I appreciate this reporting, though it starts off with an unhelpful analogy: There is no parallel between our uninterrogated preferences as humans and our inability to understand black-box "AI" systems (mathy maths).>

2022-11-01 13:13:35 @danchall Well, cars consume gas, don't they? I agree that when LMs "consume" text, it isn't really "content" for them in any sense, though.

2022-11-01 13:07:41 [#TwoMarvins, because it was just one tweet. But on unpacking, it looks more like #ThreeMarvins.]

2022-11-01 13:06:45 To wrap up the linguistics lesson: When you see a verb of cognition like "appreciate" in a sentence talking about mathy maths ("AI" systems) stop and check. What is the subject of the verb? If it's the mathy math, who's trying to sell you something?

2022-11-01 13:02:50 And the more likely we are to shrug and agree when we're told that tech is the solution to societal problems and then suffer the effects when it makes them worse (or provides cover for people to make them worse

2022-11-01 13:01:22 @Miles_Brundage @OpenAI The more the public is given the impression that stochastic parrots are able to "appreciate content" or "think" or "decide" or "understand", the more likely we are to cede power to automated systems and the companies that sell them

2022-11-01 12:59:37 @Miles_Brundage @OpenAI Yeah, this is just Twitter (and maybe no one is here anymore anyway) and yeah this is just a throwaway comment, but it is still harmful because it is yet another flake in the blizzard of #AIhype.>

2022-11-01 12:58:12 Maybe @Miles_Brundage is just making a joke or trying to be cute here. But given the context (everything else the folks at @OpenAI have said about their LMs), if he is, it's really hard to tell. And that's indicative of a huge problem.>

2022-11-01 12:56:00 Sense 2 at least isn't about cognition, but that would be a very strange sense to use in the context of "we [people] appreciate content". So clearly the intended sense is one of 1a-1d. **All of which describe cognition.**>

2022-11-01 05:04:24 Communicative intent* Typos, sigh.

2022-11-01 04:58:03 @andrewthesmart And this is a useful contribution to the discussion why?

2022-11-01 02:14:35 When a language model synthesizes text, it's not even lying, because "lying" entails some intent to dissemble, which is still a communicative in.... I wonder if reflecting on that might help people get a better intuition for what these things are. Or is it too subtle?

2022-10-31 22:03:39 Another: https://t.co/pSyZiP8Jah Immediately detectable as spam (even before I clicked through to find out that it is nominally a "pharmacy and chemistry" journal trying to get me to publish there) with the subject line "Wеlϲοmе to Ρubliѕһ Ρaрҽrs in the International Јоᴜrnals"

2022-10-31 21:09:39 #linguistics #naclo2023 #nlproc https://t.co/B9goxfD97t

2022-10-31 21:09:22 This year, too, @UWlinguistics is a site for @naclo_contest Please share this info with interested high schoolers in the area! https://t.co/Qq7JA19XK0

2022-10-31 19:31:55 @omicreativedev @shengokai @WITWhat @negar_rz @alexhanna Either @alexhanna or I will post a link, and we'll surely both retweet!

2022-10-31 18:50:03 @elizejackson What kind of mask do you wear? How do you ensure that it fits properly? What's the hardest thing about putting it on?

2022-10-31 16:45:03 @shengokai @WITWhat @negar_rz @alexhanna I don't think I can really claim to be an artist, but can you tell what my intention was here?https://t.co/QdQ9do1Z2j

2022-10-31 16:44:42 It was great to get to learn about how to deal with hype around "AI art" from @shengokai @WITWhat and @negar_rz in the latest episode of Mystery AI Hype Theater 3000 w/@alexhanna. Recording coming soon, but key point: art expresses the artist's intention.

2022-10-31 13:35:53 RT @SashaMTL: What could be scarier than a Stochastic Parrot? Complete with a t-shirt full of GPT quotes about the meaning of life, inclu…

2022-10-31 01:08:44 This is such a key point: if we are to disrupt and prevent genocide, we need to know how to recognize it in action. In this case, especially important is detecting and dismissing propaganda while there seem to be so few channels for the victims to speak to the world. https://t.co/Knv80KDjBE

2022-10-30 22:42:28 RT @PeoplePowerWA: SEATTLE ACTION: It's time to speak out against the ShotSpotter gunfire detection system! When: Wednesday, Nov 2 at 9am…

2022-10-30 22:04:37 @JonKBateman on so so many levels.

2022-10-30 20:57:47 @ShobitaP @ruha9 @emilymbender@mastodon.social

2022-10-30 19:33:10 RT @CriticalAI: #CriticalAI friend @MadamePratolung belatedly live tweets the first of the Mystery AI Hype Theater episodes by @emilymbende…

2022-10-30 12:44:16 RT @shengokai: Art, for example, requires an entire social nexus to legitimate what is and is not art

2022-10-30 12:43:51 RT @shengokai: Put simply, the cult belief that STEM is more rigorous, more "objective" than the humanities is not just one of the great tr…

2022-10-30 11:29:09 @omicreativedev It was @alexhanna ! https://t.co/TrZ4mZZi4G

2022-10-29 13:59:20 RT @emilymbender: What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but…

2022-10-29 13:01:58 RT @emilymbender: I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxio…

2022-10-29 13:00:56 RT @emilymbender: Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-29 04:02:24 #ThreeMarvins

2022-10-29 04:01:56 Finally, I can just tell that some reading this thread are going to reply with remarks abt politicians being thoughtless text synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, &

2022-10-29 04:01:21 And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine./12 https://t.co/pDCl1lgRx8

2022-10-29 04:00:49 This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."/11 https://t.co/USu7GiP7V1

2022-10-29 04:00:20 Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data./10 https://t.co/h44gCyjkka

2022-10-29 03:59:35 This is non-sensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability./9 https://t.co/4S61hX1tQb

2022-10-29 03:59:02 I'd really rather think that there are better ways to think outside the box in terms of policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out./8 https://t.co/UTEr3VflVo

2022-10-29 03:58:30 Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making./7

2022-10-29 03:58:10 Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the very democratic process as its material. Such a move seems disastrously anti-democratic./6

2022-10-29 03:57:47 The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus"./5 https://t.co/WRf5bT8iMI

2022-10-29 03:56:46 However, the quotes in the article leave me very concerned that the artists either don't really understand or have expectations of the general AI literacy in Denmark that are probably way too high./4

2022-10-29 03:56:38 I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable./3

2022-10-29 03:56:26 Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system./2

2022-10-29 03:56:13 Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-28 21:28:04 @DrVeronikaCH See end of thread.

2022-10-28 21:22:27 @JakeAziz1 In my grammar engineering course, students work on extending implemented grammars over the course of the quarter. Any given student only works on one language (with a partner), but in our class discussions, everyone is exposed to all the languages we are working on.

2022-10-28 20:54:22 For that matter, what would the world look like if our system prevented the accumulation of wealth that sits behind the VC system?

2022-10-28 20:53:48 What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but rather to realistic, community-governed language technology?>

2022-10-28 20:40:46 (Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: https://t.co/CKybX4BRsz )

2022-10-28 20:40:05 Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.

2022-10-27 15:35:48 @jessgrieser For this shot, yes. Second dose is typically the rough one for those for whom it is rough. Also: thank you for your service!!

2022-10-27 05:16:49 RT @mark_riedl: That is, we can't say X is true of a LM at scale Y. We instead can only say X is true of a LM at scale Y trained in unknown…

2022-10-26 21:03:30 Another fun episode! @timnitGebru did some live tweeting here. We'll have the recording up in due course... https://t.co/UwgCA1uu4a

2022-10-26 20:53:19 RT @timnitGebru: Happening in 2 minutes. Join us.https://t.co/vDCO6n1cno

2022-10-26 18:28:08 AI "art" as soft propaganda. Pull quote in the image, but read the whole thing for really interesting thoughts on what a culture of extraction means. By @MarcoDonnarumma h/t @neilturkewitz https://t.co/2uAJvBTVbM https://t.co/X4at2irn0V

2022-10-26 17:51:27 In two hours!! https://t.co/70lqNfeHjh

2022-10-26 15:20:39 @_akpiper @CBC But why is it of interest how GPT-3 responds to these different prompts? What is GPT-3 a model of, in your view?

2022-10-25 18:16:23 @_akpiper @CBC How did you establish that whatever web garbage GPT was trained on was a reasonable data sample for what you were doing?

2022-10-25 18:14:43 Sorry, folks, if I'm missing important things. A post about sealioning led to my mentions being filled with sealions. Shoulda predicted that, I guess. https://t.co/pg6IfnZxUQ

2022-10-25 12:51:32 RT @emilymbender: Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly repor…

2022-10-25 12:51:29 RT @emilymbender: Thinking about this again this morning. I wonder what field of study could provide insight into the relative contribution…

2022-10-25 00:29:46 @timnitGebru @Foxglovelegal From what little I understand, these regulations only kick in when there are customers involved paying for a product. So, I guess the party with standing might be advertisers who are led to believe that they are placing their ads in an environment that isn't hate-speech infested.

2022-10-25 00:27:03 @timnitGebru Huh -- I wonder how truth in advertising regulations apply to cases like this, where people representing companies but on their own twitter account go around making unsupported claims about the effectiveness of their technology.

2022-10-25 00:19:07 @olivia_p_walker https://t.co/YyrMnZdhjW

2022-10-25 00:16:57 I mean, acting like pointing out that something is eugenicist is the problem is not the behavior I'd expect of someone who is actually opposed to eugenics.

2022-10-25 00:15:14 If you're offended when someone points out that your school of thought (*cough* longtermism/EA *cough*) is eugenicist, then clearly you agree that eugenics is bad. So why is the move not to explore the ways in which it is (or at least appears to be) eugenicist and fix that?

2022-10-25 00:03:12 RT @aclmeeting: #ACL2023NLP is looking for an experienced and diverse pool of Senior Area Chairs (SACs). Know someone who makes the cut?…

2022-10-24 19:18:09 @EnglishOER Interesting for what? What are you trying to find out, and why is poking at a pile of data of unknown origin a useful way to do so?

2022-10-24 17:06:13 @EnglishOER But "data crunching of so much text" is useless unless we have a good idea of how the text was gathered (curation rationale) and what it represents.

2022-10-24 16:40:43 Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly reporting on how exciting it was to read the results?

2022-10-24 04:29:30 @athundt @alkoller It looks like only 7 of them are visible but that's plausible.

2022-10-24 04:17:55 I wasn't sure what to do for my pumpkin this year, but then @alkoller visited and an answer suggested itself.#SpookyTalesForLinguists https://t.co/Bp3rULsA9z

2022-10-23 20:53:56 @jasonbaldridge I bookmarked it when you first announced the paper on Twitter but haven't had a chance to look yet.

2022-10-23 19:52:26 @tdietterich Fine. And the burden of proof for that claim lies with the person/people making it.

2022-10-23 19:47:57 @tdietterich Who is going around saying airplanes fly like birds do?

2022-10-23 19:32:27 To the extent that computational models are models of human (or animal) cognition, the burden of proof lies with the model developer to establish that they are reasonable models. And if they aren't models of human cognition, comparisons to human cognition are only marketing/hype.

2022-10-23 19:08:14 @Alan_Au @rachelmetz https://t.co/msUIrYeCEr

2022-10-23 05:29:16 @deliprao Also if you feel the need to de-hype your own tweet, maybe revisit and don't say the first thing in the first place?

2022-10-23 05:27:35 @deliprao What does "primordial" mean to you?

2022-10-23 05:26:27 How can we get from the current culture to one where folks who build or study this tech (and should know better) stop constantly putting out such hype?

2022-10-23 05:24:52 And likening it to "innermost thoughts" i.e. some kind of inner life is more of the same.https://t.co/kFfzL3gbhm

2022-10-23 05:22:59 Claiming that it's the kind of thing that might develop into thinking sans scare quotes with enough time? data? something? is still unfounded, harmful AI hype. https://t.co/hilvqpXgWM

2022-10-23 03:51:33 RT @emilymbender: @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 03:51:31 @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 01:18:48 @EnglishOER @alexhanna @dair_ai For the text ones, I tend to say "text synthesis machine" or "letter sequence synthesis machine". I guess you could go for "word and image synthesis machines", but "mathy math" is also catchy :)

2022-10-22 23:32:51 RT @timnitGebru: I need to get this. Image is Mark wearing sunglasses with a white hoodie that has the writings below in Black.Top:Sto…

2022-10-22 20:07:59 @safiyanoble I'm a fan of Choffy, but as someone super sensitive to caffeine I can say it will still keep me up if I have it in the afternoon. (Don't expect hot cocoa when you drink it. Think rather cacao tea.)

2022-10-21 23:46:26 @LeonDerczynski And now I'm hoping that no one will retweet the original (just your QT) because otherwise folks won't check the date and will wonder why I'm talking about GPT-2!

2022-10-21 23:39:49 @LeonDerczynski Hah -- thanks for digging that up. I've added it here, making it (currently) the earliest entry.https://t.co/uKA4tuv4jF

2022-10-21 23:38:09 RT @LeonDerczynski: This whole discussion - and the interesting threads off it - have aged like a fine wine https://t.co/ykUiRfoGTf

2022-10-21 23:11:29 @zehavoc I think a good limitations section makes the paper stronger by clearly stating the domain of applicability of the results. If that means going back and toning down some of the high-flying prose in the introduction, so much the better!

2022-10-21 19:19:40 @kirbyconrod I don't know, but I love the form pdves so much. Do you name your folders "Topic pdves"?

2022-10-21 19:14:54 @LeonDerczynski @yuvalmarton @complingy I want this meme to fit here but it doesn't --- if only people would cite the deep #NLProc (aka deep processing, not deep learning). https://t.co/7rrLQ11GEm

2022-10-21 18:19:29 RT @rctatman: Basically: knowing about ML is a subset of what you need to know to be able to build things that use ML and solve a genuine p…

2022-10-21 14:15:13 RT @mer__edith: You can start by learning that "AI" is first &

2022-10-21 04:12:05 RT @timnitGebru: I say the other way around. To those who preach that "AI" is a magical thing that saves us, please learn something about…

2022-10-21 01:44:09 @edwardbkang @simognehudson Please do post a link to your paper when it is out!

2022-10-20 23:08:05 RT @StevenBird: @ReviewAcl @aclmeeting you are recruiting reviewers and sending out reminders and calling for papers, but we do not yet hav…

2022-10-20 20:55:42 @AlexBaria Thank you!

2022-10-20 20:11:25 @programamos Thank you!

2022-10-20 20:09:36 @Miles_Brundage @baobaofzhang Thank you!

2022-10-20 19:59:49 Interesting question about how *people* understand what we're calling "AI" these days. Is anyone out there working on assessing that? https://t.co/xglrFgSVqv

2022-10-20 19:51:09 PSA to people eating lunch at meetings with Meeting Owls---whoever has the noisiest wrapping for their lunch will be 'on screen' as you eat

2022-10-20 14:59:03 @JoFrhwld @alkoller By general linguistics education, do you mean what people who aren't studying linguistics would encounter about linguistics in their education?

2022-10-20 14:46:04 @alkoller As to whether I think machines can understand? Sure:https://t.co/F7efO4Kwfy

2022-10-20 14:45:01 @alkoller I think this is symptomatic of something --- perhaps an extremely ascientific desire to believe that LMs are "AI"?>

2022-10-20 14:44:10 @alkoller (This is a subtweet of a [student?] paper I came across on arXiv. But it also a subtweet of all the other times I've come across this misunderstanding.)>

2022-10-20 14:43:15 Funny how the people who equate language models with "AI" misread Bender &

2022-10-20 02:49:49 RT @kirbyconrod: oh today is International Pronouns Day! ive once again forgotten to do anything in particular but perhaps you would like s…

2022-10-20 00:31:47 This is gonna be awesome!! https://t.co/70lqNfeHjh

2022-10-20 00:31:35 RT @alexhanna: Next Mystery AI Hype Theater 3000 alert!Next week, we invite @shengokai @negar_rz and @WITWhat to talk about AI Art!Oct…

2022-10-19 23:04:46 @RottenInDenmark Some of my fellow Seattlites are basking in the irony of Seattlites waiting impatiently for the darkwet. I'm just waiting impatiently for the darkwet.

2022-10-19 22:30:57 RT @techreview: After her departure, she joined Timnit Gebru’s Distributed AI Research Institute, and work is well underway.https://t.co/0

2022-10-19 20:47:28 Lots of really interesting resources in the replies! Thanks all :) https://t.co/QV1sKbK4IM

2022-10-19 19:55:07 RT @DAIRInstitute: Join us virtually on December 2nd and 3rd as we celebrate our 1st anniversary. We'll have interactive talks and conversa…

2022-10-19 19:51:42 RT @Brown_NLP: Brown is hiring Assistant Professors in Data Science. Language people, please apply! https://t.co/kXoX3N5Cge

2022-10-19 14:25:02 @ggdupont No, I don't think so. I think the folks at Stanford HAI were trying to sell something (their work + these models) rather than trying to make it obvious where people stand wrt them.

2022-10-19 14:24:03 @cbrew I'm tripping over "positive affect" and "capabilities" in this tweet. Positive affect bc what I'd want in a term for these things is neutral at best. "Capabilities" bc so often it's used to refer to various wishful mnemonics (computer functions named after human cognitive acts).

2022-10-19 13:39:14 RT @emilymbender: Gauntlet thrown. I this!

2022-10-19 04:10:56 Interesting how the term "foundation model" is becoming a shibboleth. I get the sense that I can make a lot of inferences about someone's stance towards so-called "AI" based on whether (&

2022-10-19 03:23:27 @ACharityHudley @_alialkhatib Thank you!

2022-10-19 02:49:11 @paulfriedl4 @mireillemoret @RDBinns @laurencediver Thank you!

2022-10-19 02:32:02 @paulfriedl4 Yeah, "formal" as in "formal logic". What I'm particularly interested in is literature that looks at the role of the accountability of the humans interpreting the laws.

2022-10-19 02:25:43 And @_alialkhatib cites this from Lipsky which seems apropos, too: Michael Lipsky. 1980. Street-Level Bureaucracy: The Dilemmas of the Individual in Public Service. Russell Sage Foundation.

2022-10-19 02:24:06 "To Live In Their Utopia" by @_alialkhatib is the most relevant thing I have so far:https://t.co/ZFqxyHcHr7>

2022-10-19 02:22:44 Q for legal scholars out there: is there any writing about the extent to which &

2022-10-19 00:53:59 @dmonett In this wonderful paper, @AlexBaria and @doctabarz point out that the computational metaphor (THE BRAIN IS A COMPUTER / THE COMPUTER IS A BRAIN) is bidirectional, and problematic in both directions:https://t.co/qUC2ECHbTh

2022-10-19 00:47:59 RT @Alber_RomGar: There's a debate on AI writing tools on Twitter right now. As an AI writer, I want to give my 2 cents.Here's my hot tak…

2022-10-18 19:45:00 @merbroussard @rgay Ohhh! That looks excellent.

2022-10-18 13:20:17 @jessgrieser @laura_mcgarrity any leads?

2022-10-17 22:49:05 Q for US-based #NLProc folks: Does anyone know the timeline of the DARPA LORELEI program? That is, when did the program start and, if it's not still going, when did it end?

2022-10-17 20:35:22 RT @davidberreby: The above prompted by this fine analysis of #AIhype by @emilymbender. Got me thinking about how/why we writers &

2022-10-17 20:25:41 @holdspacefree This news story started off life as a press release from @UWMedicine 's @uwmnewsroom who I think should also have disclosed the financial COI that was in the underlying study.https://t.co/0HDsYmyP1g

2022-10-17 20:21:28 Gauntlet thrown. I this! https://t.co/fJFnke0S0Z

2022-10-17 20:21:00 RT @davidberreby: What to do to improve journalism on these topics? 1 Treat AI/robotics like politics or the fossil fuel industry, not like…

2022-10-17 18:12:51 Coda: @holdspacefree illustrates the importance of reading the funding disclosures. The researchers giving the hype-laden quotes to the media weren't just being naive. They're selling something.https://t.co/TK8gjwzYwv

2022-10-17 15:45:56 RT @emilymbender: #twomarvins

2022-10-17 13:22:21 RT @emilymbender: Hi folks -- time for another #AIhype take down + analysis of how the journalistic coverage relates to the underlying pape…

2022-10-17 04:44:58 @maria_antoniak Babel by R F Kuang

2022-10-17 02:32:48 @ai_skeptic @mmitchell_ai @sleepinyourhat Even if what you say is true (that you're a junior researcher, afraid to express your opinions from a non-anon account) this is still trolling. And of course, on an anonymous account, you can claim any identity you like.

2022-10-17 02:07:37 @holdspacefree https://t.co/5Nc0SEoCNf

2022-10-17 02:05:27 #twomarvins https://t.co/5Nc0SEoCNf

2022-10-16 22:47:39 @timnitGebru I'm so sorry, Timnit.

2022-10-16 21:26:26 In sum: It seems like here the researchers are way overselling what their study did (to the press, but not in the peer reviewed article) and the press is happily picking it up./fin

2022-10-16 21:26:12 Another one of the authors comes in with some weird magical thinking about how communication works. Why in the world would text messages (lacking all those extra context clues) be a *more* reliable signal?/22 https://t.co/JOMOvVQp6F

2022-10-16 21:25:42 Note that in this case, the source of the hype lies not with the journalist but (alas) with one of the study authors./21

2022-10-16 21:25:28 In the popular press article, on the other hand we get instead a suggestion of developing surveillance technology, that would presumably spy not just on the text messages meant for the clinician, but everything a patient writes./20 https://t.co/kXdOCAUNnl

2022-10-16 21:24:26 Next, let's compare what the peer reviewed article has to say about the purpose of this tech with what's in the popular press coverage. The peer reviewed article says only: could be something to help clinicians take action. /19 https://t.co/s23mTHCv1D

2022-10-16 21:23:45 Another misleading statement in the article: These were not "everyday text messages" (which suggests, say, friends texting each other) but rather texts between patients and providers (with consent) in a study./18

2022-10-08 14:38:26 @AngelLamuno Uh, read the thread?

2022-10-08 14:20:02 RT @emilymbender: In other words: linguistics, computational linguistics, and #NLPRoc all collectively and separately have value completely…

2022-10-08 14:00:27 I'm unmoved when people talk about one danger of #AIhype being the prospect of it bringing on another AI winter. But I do care that #AIhype is making it harder (in this and many ways) for researchers grounded in the details of their research area to do our work.

2022-10-08 13:58:51 e.g. https://t.co/S1XoTBe9JI>

2022-10-08 13:58:39 But the #AIhype is making it harder to do that work. When AI bros say their mathy maths are completely general solutions to everything language &

2022-10-08 13:54:39 In other words: linguistics, computational linguistics, and #NLPRoc all collectively and separately have value completely unrelated to the project of "AI". >

2022-10-08 13:53:03 But that's okay, because it's a tool, involving limited language understanding, and it has served its purpose. And it's a very impressive and interesting tool! Language is cool and building computer systems that can usefully process language is exciting!>

2022-10-08 13:52:23 Has it understood the same way or as well as a human would? No. It doesn't make inferences about what the timer is for based on shared context with me or wonder what I plan to do outdoors. >

2022-10-08 13:50:50 So when I ask a digital voice assistant to set a timer for a specific time, or to retrieve information about the current temperature outside, or to play the radio on a particular station, or to dial a certain contact's phone number and it does the thing: it has understood.>

2022-10-08 13:49:46 To answer that question, of course, we need a definition of understanding. I like the one from Bender &

2022-10-08 13:46:44 People often ask me if I think computers could ever understand language. You might be surprised to hear that my answer is yes! My quibble isn't with "understand", it's with "human level" and "general".>

2022-10-08 13:42:15 @NoppadonKoo @ledell Multimodal interfaces can be very useful tools --- but developing good multimodal interfaces isn't the same thing as "near human performance on reasoning tasks".

2022-10-08 13:38:48 @joelbot3000 @_joaogui1 So long as "AI safety" is premised on the idea that we are delegating decision making authority to machines (including imagined future "AGI", but also real current systems) then I think it is antithetical to actual AI ethics work, regardless of where they publish.

2022-10-08 13:35:27 1. The general operating mode is "make shit up". Sometimes it just happens to be right. 2. "Make shit up" is actually giving too much credit, since the LMs are only coming up with sequences of linguistic form &

2022-10-08 13:34:34 I was going to compare that to the failure mode of e.g. large LMs used as dialogue agents where the failure mode is "make shit up", but that's a little inaccurate on two levels:

2022-10-08 13:33:32 I'm not sure I agree that current systems "fail more gracefully" than rule-based predecessors. Failing to return an answer when the input falls outside the system's capability does have a certain grace (humility) about it...>

2022-10-08 13:31:22 The theme track looks interesting and timely! https://t.co/16qQsXNmI4

2022-10-08 13:29:19 RT @boydgraber: We have a call for papers for ACL 2023, but we haven't gotten the Twitter account, a hash tag, or a blog post yet. Stay tu…

2022-10-08 13:04:41 @Kobotic Alas, I think just some coding exposure wouldn't do it and might even make it worse, unless the hour long lesson was well crafted to highlight how computers are simply instruction following machines...

2022-10-08 13:01:27 RT @emilymbender: There is 0 reason to expect that language models will achieve "near-human performance on language and reasoning tasks" ex…

2022-10-08 05:16:55 RT @LeonDerczynski: lurid hyperref color boxes on links are a violence upon the person. luckily, if your venue's template author hasn't not…

2022-10-07 23:33:45 @seamuspetrie @HelenZaltzman Updated version, coined by yours truly in ~2017 for a protest march: Jingoistic Charlatan Makes Seattle Undertake Protest

2022-10-07 19:05:14 @LinguaCelta In Kathol &

2022-10-07 19:03:46 @Stupidartpunk @alexhanna ... which include a lot of discussion of terminology. For example:https://t.co/TrZ4mZZPUe

2022-10-07 19:03:15 @Stupidartpunk @alexhanna You might enjoy our first three episodes:https://t.co/78tYEfs17d

2022-10-07 19:02:19 @Stupidartpunk Mystery AI Hype Theater 3000 (with @alexhanna ) plans an episode on "AI art" together with people who are more knowledgeable about art &

2022-10-07 17:08:09 @kirbyconrod @mixedlinguist That's been my M.O. for naming the language --- the problem definitely isn't solved, but I think I've seen the needle move at least a little!

2022-10-07 17:07:40 @kirbyconrod @mixedlinguist I think by doing what @mixedlinguist is doing in reviewing (and similarly at conference presentations if people don't say): Asking directly &

2022-10-07 17:06:00 @kirbyconrod @mixedlinguist https://t.co/fLeoxN06eI

2022-10-07 17:03:30 In case anyone needs a quick refresher:https://t.co/JjqcSaFizu

2022-10-07 17:02:29 Once again "AI safety research" is just the pious-seeming version of AI hype.

2022-10-07 17:02:04 There is 0 reason to expect that language models will achieve "near-human performance on language and reasoning tasks" except in a world where these tasks are artificially molded to what language models can do while being misleadingly named after what humans do. https://t.co/HgCaLgTfcx

2022-10-07 17:00:12 @sarmiento_prz @Abebab Came here to add a plug for @ImagesofAI !

2022-10-06 18:36:17 So far, I'm just not saying anything about academic venues. I don't want to set the expectation of free labor, but also I don't want to rule out informal presentations of work in progress to other research groups. And that starts being a lot of words...

2022-10-06 18:34:46 Thanks, all, for the input! I've updated my contacting me page to state that I won't do unpaid speaking engagements in corporate venues. https://t.co/nxRxxz4DvX>

2022-10-06 16:49:56 @willbeason @adamconover Source: https://t.co/yJiq4Yxu99

2022-10-06 13:37:34 RT @timnitGebru: Take a look at our job application here: https://t.co/Fi6rdpKZUP

2022-10-06 00:19:02 RT @DAIRInstitute: We are hiring a senior community-based researcher. Full job ad and application here: https://t.co/9KOCe1ps2p

2022-10-05 23:10:53 @simonw @vlordier @robroc @mtdukes Thank you for this. I'm glad that you see how "magic" and "spells" can be harmful metaphors in this context.

2022-10-05 22:52:36 @thamar_solorio Totally agreed. The PCs were right to put that requirement in. I just wanted to vaguely vent. (The paper was submitted to an NLP venue, but concerned ML applied to something without any language data in sight...)

2022-10-05 21:06:55 @StephenMolldrem I think there's a world of difference between sponsored research (where the sponsor has some say in the research direction) and getting paid to give talks on research that is already done.

2022-10-05 20:27:59 I get why review forms have a word count minimum on certain boxes (esp "What are the strengths of this paper") but sometimes it really truly is a struggle.

2022-10-05 20:26:24 @meredithdclark And knowing that my refusal to do it for free supports the position of people for whom it is more taxing/more crucial to livelihood helps draw a firm boundary!

2022-10-05 20:25:31 @meredithdclark I can see speaking to academic groups as part of my day job (that I am paid to do) and also of benefit to me in working out my ideas. But speaking to tech cos to help educate their workforce doesn't seem the same.

2022-10-05 20:24:27 @meredithdclark Thank you, Meredith. That is very clarifying. It feels insulting to be asked to do free work for big tech cos, but I was also imagining that they see themselves as just researchers inviting other researchers to come share ideas...

2022-10-05 20:23:27 @EmilyTav Follow the replies &

2022-10-05 17:11:29 Academics invited to give tech talks or other informal presentations at industry labs: Do you expect to be paid for that? Do the invitations usually come with an offer of an honorarium? #AcademicTwitter

2022-10-05 17:01:16 @AngloPeranakan Recently saw &

2022-10-05 16:46:58 RT @rctatman: I've seen some confusion around this recently, so to clarify:- AI ethics/FAccT: evaluating &

2022-10-05 14:30:12 @simonw @dylfreed @knowtheory @vlordier @robroc @mtdukes Even saying "fact-check everything it says" is giving it too much credit. It is only spitting out sequences of letters (word forms). If its words make sense, if it's "saying" anything, it's because we make sense of those words.

2022-10-05 14:22:19 @simonw @dylfreed @knowtheory @vlordier @robroc @mtdukes https://t.co/7YYD3QxI7R

2022-10-05 13:38:06 @simonw @vlordier @robroc @mtdukes "AI" is terrible and we don't have to acquiesce. Alternatives: SALAMI, PSEUDOSCI:https://t.co/4jm6nD8Q0s

2022-10-05 13:29:29 @simonw @vlordier @robroc @mtdukes "We're throwing spaghetti at the wall. Sometimes it sticks. And sometimes when it sticks we like the patterns we see. But the people who sell the spaghetti like to say they've made special, sentient spaghetti that is actually trained like marching bands to make specific shapes."

2022-10-05 13:27:19 @simonw @vlordier @robroc @mtdukes Even "people messing around with forces they don't understand" is a bad metaphor here, because it STILL suggests that the forces (= the "AI") are coherent and powerful.

2022-10-05 13:26:11 @robroc @simonw @vlordier @mtdukes And so claiming that they do “magic” or repeating those claims is a problem, because that makes it seem like they work, even if no one understands why.

2022-09-30 18:58:01 @athundt @mmitchell_ai @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT These are great ideas!

2022-09-30 18:57:13 @annargrs @mmitchell_ai @stephaneghozzi @timnitGebru @DAIRInstitute @rajiinio For me, I think there's some overlap between "!" and "tsk tsk tsk", which maybe isn't what we want. Just plain ! or ! is maybe stronger... Still, if one or more of these expressions were to be borrowed into English, their meanings would surely drift.

2022-09-30 18:54:30 @SashaMTL @arxiv @mmitchell_ai I took great joy in pointing out to the ACM that certain aspects of their pubs system weren't Unicode compliant. It seems that @arxiv needs updating in the same way.

2022-09-30 15:57:25 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT Thank you!

2022-09-30 14:58:29 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT Will you be updating the checklist PDF too? Pitfall 16 needs citations to Roberts and Gray &

2022-09-30 14:57:03 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri The tendency to appropriate and fail to cite the work of Black women is pervasive &

2022-09-30 14:56:06 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri I get the sense that you are aiming for a popular audience &

2022-09-30 14:51:28 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT @ubiquity75 @marylgray @ssuri Similarly, your Pitfall 15 looks exactly like the main point of my blog post that you do link to elsewhere ... but there's no connection drawn there.>

2022-09-30 14:50:37 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT For example, your Pitfall 16 should cite @ubiquity75 's "Your AI is a Human" and @marylgray and @ssuri's _Ghost Work_ ... maybe you point to their work earlier in the piece, but I'd have to go click lots of links to figure that out.>

2022-09-30 14:47:16 @random_walker @benbendc @aimyths @LSivadas @SabrinaArgoub @GeorgetownCPT I appreciate the shout out, but I think it would be better citational practice to name people in the post, in addition to links and to draw clearer connections to how you are building on previous work.>

2022-09-30 14:29:43 RT @GretchenAMcC: It's the semi-final round of figuring out the least confusing way to clip the word "usual" and so far we've learned that…

2022-09-30 14:23:05 @kilinguistics Which says something about who their imagined users were....

2022-09-30 14:22:52 @kilinguistics A few years later, I did get the satisfaction of complaining about it to the person at Microsoft in charge of that feature. She was surprised, having never heard of a case where someone wouldn't want Teh "corrected" to The.>

2022-09-30 14:22:00 @kilinguistics Autocorrect kept changing "Teh" to "The" in my bibliography entries referring to the work of James Cheng-Teh Huang, then editor of said journal. It took a lot of menu hunting to figure out how to turn that off.>

2022-09-30 14:20:31 @kilinguistics Ugh, what a waste! The last time I had to deal with Word without a co-author was for my 2000 paper "The syntax of Mandarin bǎ: Reconsidering the verbal analysis">

2022-09-30 14:13:30 @kilinguistics Yeah, when it's just for the reviewing process it seems like there should be other solutions!!

2022-09-30 14:01:11 @kilinguistics It seems to be a cost of the kind of interdisciplinary work I've been getting involved in. Fortunately, this time I complained and learned that latex is okay --- their web page was out of date!

2022-09-30 14:00:22 RT @LingMuelller: Chapter 25 of the HPSG handbook is on #ComputationalLinguistics and #HPSG by @emilymbender and @AngloPeranakan. Read all…

2022-09-30 00:17:15 @undersequoias It is Sage --- but it's a site that is somehow too similar to other journals, so my LastPass has like four passwords for it, none of which work. I'm pretty sure I don't have an account for this journal, except maybe they made one for me? Anyway, dealing with password reset...

2022-09-30 00:12:08 @jackclarkSF The fact that you are putting "use an LM to end-to-end write a testimony" in the discourse at all is the problem. I'm also morbidly curious what you mean by "nitty gritty". Did you talk about it as "AI" or as "understanding" the prompt? If so: not helpful.

2022-09-29 15:53:35 RT @FlyingTrilobite: The internet has taken advantage of artists with virtually every platform that has launched since it began: we need to…

2022-09-29 14:17:31 RT @neilturkewitz: This whole thread is fantastic, but this in particular captures the unique challenges of creating accountability in a un…

2022-09-29 12:21:24 RT @emilymbender: I'm glad to see this reporting from @nitashatiku about the rise of text-to-image systems and their dangers. She also coll…

2022-09-29 03:41:14 Fixing auto-captions for #MAIHT3k ep 3, and the system capitalized T and W in "The Wishful mnemonics" and now I'm sitting here thinking that would be a cool band name ... maybe one that plays middle-grades silly music?

2022-09-29 01:51:08 RT @ZeerakTalat: I wanna zoom in on one word in one sentence. "There is a *need* to develop a system that establishes a link between spoken…

2022-09-28 20:16:45 RT @Matt_Cagle: Do your professional interests include fighting surveillance, unearthing secretive programs, and suing the hell out of the…

2022-09-28 20:10:00 Oh, and to all the OpenAI people quoted in the article, I award you #threemarvins

2022-09-28 20:09:14 See also: https://t.co/vKXofWJc0J

2022-09-28 20:07:19 @nitashatiku Those who claim to be simply aren't actually positioned to do so, even without their bizarre "drive to be first and hype [their] AI developments" (per the article). And the others don't even think it's their responsibility at all.

2022-09-28 20:06:15 Thanks again to @nitashatiku for this coverage. It shows very clearly how neither OpenAI who claim to be building "AGI" to save humanity from "AGI" and to be trying to release tech "responsibly" nor those who want to make an OS playground are actually handling safety well.>

2022-09-28 20:03:37 "All y'all should be ashamed for how you're using this deepfake porn, disinfo creating machine I've made. Not my business, tho." https://t.co/Yy8IbgIip2

2022-09-28 20:01:53 Just because social media platforms have largely failed at creating sustainable community standards doesn't mean we need more spaces like that. That's like saying: we might as well create more nuclear waste sites, since there are already several festering. https://t.co/hqLtxbtR2G

2022-09-28 19:59:19 This trust is set up as between the company (here OpenAI, or the others) and the people using the system to generate images. But who is speaking from the people who are harmed by deepfake porn &

2022-09-28 19:57:27 @nitashatiku I hope so. It's worth keeping a clear distance to the #AIhype.

2022-09-28 19:56:11 Grandiose much? (Again, not surprising for OpenAI.) But also: doesn't society get a say in whether we have to "co-develop" with such systems? https://t.co/PrgpF9iduJ

2022-09-28 19:54:54 But also: Who has the relevant cultural context to detect images that are generated for the purpose of creating disinformation? How is OpenAI making sure the right images get to the right "third party contractors"?

2022-09-28 19:54:10 @nitashatiku How is OpenAI making this "safe"? Ghost work. Ghost workers around the world now have to sift through synthetic images to decide which ones meet community standards. I don't think we should have to live in a world where these tasks are generated. >

2022-09-28 19:52:12 Uh, nope. The text to image generation shows us ... what images the system associates with text. Calling that "concept" is stretching that term beyond any recognition. (Here @nitashatiku slips a bit too into #AIhype --- that first sentence isn't attributed to anyone at OpenAI.) https://t.co/sjdDxz17SB

2022-09-28 19:50:29 @nitashatiku "pre-AGI" But that's totally par for the discourse with OpenAI. They do really seem to believe that they are building AGI and thereby saving the world. It's absurd --- and unfortunately it warps so much of the discourse in this field. https://t.co/aNDvuj0yMf

2022-09-28 19:48:23 I'm glad to see this reporting from @nitashatiku about the rise of text-to-image systems and their dangers. She also collected quotes that range from laughable AI hype to alarming lack of responsibility. Here is a sample:https://t.co/CzXBxjJRhv>

2022-09-28 15:10:02 RT @emilymbender: Let's do a little #AIhype analysis, shall we? Shotspotter claims to be able to detect gunshots from audio, and its use ca…

2022-09-28 14:38:15 RT @jshermcyber: This entire thread is great (read it!) — and this tweet in particular, and the one following, speak to an important point.…

2022-09-28 13:51:21 @twocatsand_docs @hypervisible @MayorofSeattle @SeattleCouncil I can't tell if you're supporting their review or not. Do you mean it's all that simple? Disagree: the methodology is in where their data comes from. Do you mean they are making it look simple when it isn't? Agree.

2022-09-25 14:45:16 @mmitchell_ai At first glance, I thought this was commentary on the silly practice we're seeing a lot of from longtermists of putting probabilities on predicted future events (xrisk...) but then that didn't quite fit in context.

2022-09-25 14:44:00 @sherrying Yep. Alt: This is the Wondermark cartoon that is the source of the term sealioning. Orig is here (but w/o alt text): https://t.co/yIRJVgo3Uk Wikipedia description: https://t.co/qxH3VOKXCn

2022-09-25 02:42:53 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig I'll be glad to get to meet you in person!

2022-09-25 02:41:53 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig Glad you'll get to spend time with parents. Will you be at our talk?

2022-09-25 02:12:28 @ruthstarkman @SashaMTL @timnitGebru @mmitchell_ai @alexhanna @tolulopero @GRACEethicsAI @HarriettJernig Oh sorry you won't be there!

2022-09-25 00:43:04 @CT_Bergstrom @TwitterSupport @darkpatterns IOW: My phone (android) lets me say which apps get to actually display notifications. I allow very few.

2022-09-25 00:42:25 @CT_Bergstrom @TwitterSupport @darkpatterns I think I have this solved by allowing notifications on Twitter, but then not allowing them from the Twitter app to my phone. (So if I open the app, I see the badge, but it never makes noise.)[Sorry for generating a notification.]

2022-09-24 22:05:22 @josephsams I'm sorry for whatever pain you are experiencing. That does not make it appropriate or helpful to make a joke about someone else's pain.

2022-09-24 20:46:12 @deliprao I'd go even further: Humans need logical reasoning for a wide range of the activities that NLP tasks are meant to emulate.

2022-09-24 20:40:26 @AngloPeranakan

2022-09-24 20:31:56 @josephsams Uh if someone says "X was painful" coming in with "How about X, but as a joke" is not helpful. I suggest deleting your tweet.

2022-09-24 20:29:36 The problem with calling out the "change my mind" bros is that they think I've asked them to change my mind. My dudes, I haven't. Your "prize-based philanthropy" contest about AI &

2022-09-24 20:23:46 @Lang__Leon @RadicalAIPod What makes you think I'm trying to find common ground? My goal here is to shed some light on the absurdity of what they are doing to warn other people off of it.

2022-09-24 20:21:18 @misc @DavidSKrueger @fhuszar @sarahookr Too far from my area. My guess is that there's two separate issues here: superforecasting in general + superforecasting as applied in the context of EA/longtermism/"AGI" + existential risk.

2022-09-24 20:16:57 @protienking @DavidSKrueger @fhuszar @sarahookr So, any evidence that the people identified in that way are positioned to make predictions about the development of fantasy technology like AGI?

2022-09-24 20:07:49 @Abebab @rajiinio e.g. in this talk:https://t.co/3KDiNyaM4a

2022-09-24 19:59:17 RT @emilymbender: @DavidSKrueger @fhuszar @sarahookr "Superforecasters" is so sus. What makes them super? Are these actually people who hav…

2022-09-24 19:54:03 @Abebab I've started using the phrase "ground lies". My inspiration was this piece by @rajiinio (but I don't think she uses that phrase specifically).https://t.co/1JnDJnXeCQ

2022-09-24 19:52:50 @SashaMTL @timnitGebru @ruthstarkman Three cheers for @ruthstarkman

2022-09-24 19:50:49 @DavidSKrueger @fhuszar @sarahookr "Superforecasters" is so sus. What makes them super? Are these actually people who have an outstanding track record of being proven right? (I doubt it.) People who have made a lot of predictions? (Who cares?)

2022-09-24 18:47:05 @Lang__Leon @RadicalAIPod But I doubt anyone who is down the "xrisk" rabbithole is actually interested in learning from these scholars, because doing so will require understanding their own unearned privilege and the importance of ceding, rather than hoarding, power.

2022-09-24 18:46:05 @Lang__Leon Abeba Birhane, Timnit Gebru, Safiya Noble, Ruha Benjamin, Brandeis Marshall, Deb Raji, Cathy O'Neill, Sasha Costanza-Chock. Or start here, with this curriculum from @RadicalAIPod https://t.co/FoqdosW7Gq>

2022-09-24 18:43:19 @Lang__Leon "Answering these questions": The problem is that they are in fact focused on irrelevant questions. If they cared about real harms affecting real people in the real world rather than their fantasy world, they could get a lot from reading authors such as:

2022-09-24 13:51:37 RT @emilymbender: This is so absurd. "We're too lazy/incurious to learn from the large existing literature outside our own community. Write…

2022-09-24 13:51:31 RT @emilymbender: I'm curious what people think about the Chatham House Rule and how it relates to the politics of citation.>

2022-09-24 12:28:34 RT @QueenOfRats: As we’re all still yelling abt research skills, digital literacy, &

2022-09-23 22:18:55 Really great reflections on this topic from @KendraSerrahttps://t.co/kgHJLZ10ZZ

2022-09-23 22:17:43 @undersequoias @alexhanna It doesn't *promote* doing the right thing, either, though.

2022-09-23 22:12:38 @LeonDerczynski Also, there is no spot where you actually have the view from their windows. That apartment would have to be hovering in mid-air...

2022-09-23 21:23:18 @jeffjarvis That is how I read it. I still think it is an unhelpful response.

2022-09-23 21:02:06 @GaryMarcus @timnitGebru I think that the fact that they have those resources in the first place is a misallocation. But not one that I think I can usefully fix by entering their silly contest.

2022-09-23 20:55:16 @GaryMarcus @timnitGebru Because I have other things to do with my time than engage the "change my mind" bros. Because I choose what questions I want to write about.

2022-09-23 20:40:37 @GaryMarcus @timnitGebru But since they're sitting on all the $$ they think that that gives them the right to shape the conversation. Tell others to "jump" and expect "how high?" as the response. While putting most of the $$ into developing the "AGI" they are also afraid of.

2022-09-23 20:39:39 @GaryMarcus @timnitGebru She has written &

2022-09-23 20:27:11 RT @timnitGebru: We only fund white dudes saving the world &

2022-09-23 19:52:07 This is so absurd. "We're too lazy/incurious to learn from the large existing literature outside our own community. Write something special for us and we'll (maybe) pay some of you after the fact, if we're impressed enough." https://t.co/PfzC1tACjS

2022-09-23 19:24:40 @jeffjarvis That response (equating machines &

2022-09-23 18:57:30 RT @timnitGebru: Happening now.

2022-09-23 18:27:29 RT @mmitchell_ai: PSA courtesy of @emilymbender and @alexhanna : "Hidden Figures" is a term that recognizes Black women. Don't co-opt for,…

2022-09-23 17:50:16 In 10 minutes! https://t.co/evpJ37PxQz

2022-09-23 17:43:40 All of this: https://t.co/kgHJLZ10ZZ

2022-09-23 17:19:20 @mmitchell_ai So awful on so many levels.

2022-09-23 17:13:37 @mmitchell_ai What a horror story indeed. And I'm guessing when you explained yourself ("G shouldn't be exploiting people") no one took the opportunity to learn from your expertise...

2022-09-23 16:41:35 RT @mmitchell_ai: Really important for those of us who are in "Chatham" scenarios.First heard about the idea of "Datasheets" from @timnitG…

2022-09-23 16:37:53 @tdietterich So I'm wondering if the habit of using the CHR has extended out a bit from where it is actually useful/appropriate into spaces where it perhaps doesn't belong or at least has negative consequences we should be considering &

2022-09-23 16:37:00 @tdietterich They are often really interesting groups of people, where the discussions can lead to really interesting ideas and the main reason I'd want to participate is to have the chance to learn from/with those people.>

2022-09-23 04:13:10 @RTomMcCoy @LoriLevinPgh And structured through NACLO. Outreach is really important! But it has to be sustainable...

2022-09-23 04:12:10 @RTomMcCoy @LoriLevinPgh Right. Not a cold call :)

2022-09-23 00:37:14 @dylnbkr @alexhanna @_dylan_baker @MadamePratolung Ah oops! Sorry.

2022-09-22 21:36:24 RT @LeonDerczynski: Detoxification systems not only fail to reject abusive language, they instead make huge efforts to ensure that the pers…

2022-09-22 21:05:24 @PhDToothFAIRy Yeah, it's interesting how tempting it is to accept presuppositions in questions/how much effort it takes to reject them in that context.

2022-09-22 17:51:39 Tomorrow! https://t.co/evpJ37PxQz

2022-09-22 16:48:25 RT @LinguisticsUcla: We are hiring in Syntax! https://t.co/5jd80xeiEq Hit us up in comments below or DM us with questions.

2022-09-22 16:06:14 @alexhanna Named after Douglas Adams' robot, connected to the Wall of Shame that @_dylan_baker @MadamePratolung and others are working up

2022-09-22 14:37:34 @kirbyconrod I'm glad I could help --- and that you've managed to maintain those boundaries.

2022-09-22 14:24:18 I get to give this presentation to our new TAs again. Glad to have a chance to remind myself of these things, too. https://t.co/rZOnP7QVOp

2022-09-22 14:10:59 @Lenoerenberg @alexhanna Yes &

2022-09-22 14:08:30 @ImTheQ @xeegeex No thank you.

2022-09-22 13:20:03 RT @alexhanna: We will finish this article, by gosh, if it's the last thing we do! But really, there's a lot to unpack, as our people say.

2022-09-22 13:11:46 @ImTheQ @xeegeex Sorry, what? That seems entirely irrelevant to this discussion.

2022-09-22 12:53:06 @xeegeex @ImTheQ Argh yes -- everyone going around believing in that fairy dust and talking it up makes it SO MUCH HARDER to get traction for other things.

2022-09-22 12:51:56 RT @emilymbender: Episode 3 of Mystery AI Hype 3000 is this Friday, Sept 23, 11am-noon Pacific. @alexhanna and I will, by hook or by crook,…

2022-09-22 12:51:45 RT @emilymbender: Prepping for episode three of #MAIHT3k (w/@alexhanna) and this blog post is SO BAD. I'm trying to figure out which bits a…

2022-09-22 12:11:28 RT @emilymbender: In this talk, I go through six ways in which the research, development &

2022-09-22 04:33:47 @ImTheQ Agreed. But: we don't get there by serving up and promoting misinformation.

2022-09-22 04:26:14 @ImTheQ There is also good tech journalism. I'm thinking of journalists like @_KarenHao @nitashatiku @kharijohnson @dinabass @rachelmetz @haydenfield

2022-09-18 21:51:22 @yuvalmarton Most of the thread is about what to do instead! https://t.co/3qVhNRQQ7e

2022-09-18 21:50:48 @yuvalmarton Thanks for this response. My overall point is that the stance I was taking issue with ("we can't deal with this without universal human agreement") is bogus, because it rests on the idea that we achieve ethical AI by programming something in to autonomous agents. >

2022-09-18 21:34:31 @yuvalmarton This seems to be saying that you think I've tried to change the conversation to "AGI yes/no", which is very much not the point of my thread. https://t.co/XvVShWZNtG

2022-09-18 21:33:56 @yuvalmarton I specifically DO NOT conflate these two things. I am calling out cases where people invested in autonomous systems/AGI seem to be conflating them and I am taking issue with that. https://t.co/9s9IsTRjuO>

2022-09-18 21:32:56 @yuvalmarton While this should be done with reference to things like national &

2022-09-18 21:31:54 @yuvalmarton It makes sense to do this with specific systems and in their particular deployment contexts. That is where we can ask: who is at risk of being harmed here? how are we protecting them? what recourse do they have?>

2022-09-18 21:31:24 @yuvalmarton And this should be done with reference to existing laws in the jurisdictions where the systems are deployed, but looking at legal protections as the floor, i.e. required minimum.>

2022-09-18 21:30:33 @yuvalmarton I think we are largely in agreement --- and in particular I'd like to underscore that I am absolutely for making clear what is considered acceptable and unacceptable behavior of specific, situated systems.>

2022-09-18 04:11:22 RT @banazir: This was on @dawsonwagnertv’s podcast #TheSandsOfTime, which will air on @Wildcat919FM tomorrow (Sun 18 Sep 2022) at 1300 CDT.…

2022-09-17 20:31:32 @LeonDerczynski You don't need to buy it yet. Ask your neighbors how popular the neighborhood is for trick or treat. Trick is the kids not the houses. You don't live in the suburbs.

2022-09-17 12:55:51 RT @emilymbender: As @alexhanna and I have been working through Agüera y Arcas's blog post "Can Machines Learn to Behave" (episode 3 coming…

2022-09-17 12:55:20 RT @emilymbender: I've had the great pleasure of working with @LangMaverick on a piece of Annual Review of Linguistics on "Ethics in Lingui…

2022-09-17 01:59:08 @cat4lyst_Ma It's more than that, as I lay out in the thread that starts with the tweet you are responding to.

2022-09-17 00:16:40 RT @AndyPerfors: Great thread. I have very similar views in response to the argument that says we cannot do anything institutionally about…

2022-09-17 00:09:00 @alexhanna ICYMI: https://t.co/78tYEfs17d

2022-09-17 00:08:15 As @alexhanna and I have been working through Agüera y Arcas's blog post "Can Machines Learn to Behave" (episode 3 coming next Friday!), I'm glad to have re-found this thread on why I think that's just the wrong question to ask. https://t.co/sowyyCfRkn

2022-09-16 22:54:13 @ThomasILiao Cool, thanks! Any chance you might add stats about the training dataset size (in GB/TB or tokens)?

2022-09-16 22:46:29 RT @carlosgr_nlp: Looking for postdoc to work in one of the most active #nlproc research groups in Spain, within ERC PoC project SALSA on u…

2022-09-16 20:59:44 This episode was SO COOL. Definitely have a listen :) https://t.co/FVmOxRWQNu

2022-09-16 20:59:07 RT @xkcd: Thank you to @GretchenAMcC and @superlinguo for inviting me on their podcast and enthusiastically answering all my linguistics qu…

2022-09-16 20:52:22 RT @dlauer: I used to think of myself as a techno-utopian - that tech would bring us a better world and solve all our problems. However, I…

2022-09-16 20:23:40 RT @mikarv: Journalists talking to Google about AI and sustainability: ask them if Google Cloud will stop courting firms and selling comput…

2022-09-16 20:07:54 RT @timnitGebru: This is an interesting article to come out today. "while it’s important to be alert to ethical concerns surrounding A.I.…

2022-09-16 18:47:40 RT @PrincetonDH: We are hiring!The CDH seeks an assistant director to help accelerate impactful and ethical research at the intersections…

2022-09-16 17:44:29 RT @dmonett: Disgusting.But, #AI leaders? Someone working for an unethical company is not an #AI leader. Someone gaslighting the work…

2022-09-16 17:21:12 add* (That's not the first time for that particular typo. Hmmm....)

2022-09-16 17:14:33 Gotta ad: "shooting from the bleachers" embeds a whole set of presuppositions about where the action is &

2022-09-16 16:50:49 @clmallinson @KarnFort1 Sent, x2 :)

2022-09-13 03:38:46 See, @csdoctorsister -- I told this was gonna happen. No matter! One for home and one for the office :) https://t.co/bx1Ddpcsys https://t.co/S52Z99UtuI

2022-09-12 23:01:54 @LeonDerczynski /waves upwards

2022-09-12 20:04:38 @robyncaplan It looks like I missed my moment then, but I guess another might appear.

2022-09-12 19:41:41 @robyncaplan Bylines as in authorship of op-eds and similar?

2022-09-12 19:23:21 Today's question: Does the flightpath* go right over my house only when I'm trying to record something, or is it that I tend not to notice otherwise? *Both jets landing at SEA and also floatplanes headed towards Lake Union, the latter being especially noisy.

2022-09-12 18:45:18 Poking around a bit on the requirements for Twitter verification &

2022-09-10 19:44:59 @importnuance @alexhanna That is a highly concentrated bit of #AIhype, isn't it? I'm not sure we can do a whole episode on one tweet (though OTOH...) but I have actually written a paper that's relevant to just how wrong that is: https://t.co/rkDjc4kDxj

2022-09-10 17:28:11 RT @JuanDGut: Excellent panel on the use of AI in public sector, including @emilymbender &

2022-09-10 05:09:42 I guess it's relevant to talk about our own local news? https://t.co/cLudOHhCkV

2022-09-09 19:37:22 Looks like all the videos from #NAACL2022 are now freely available. Here's the panel on "The Place of Linguistics and Symbolic Structures" in #NLProc https://t.co/lc7jsSo0zt

2022-09-09 12:53:18 @Kobotic @randtke @rcalo @lh3com Regarding that piece: https://t.co/QVZ2yTKTQl

2022-09-09 04:09:28 RT @CT_Bergstrom: Despite the assurances that I received from @cvspharmacy corporate, they are still refusing to provide the COVID-19 boost…

2022-09-09 03:28:15 @randtke @rcalo @Kobotic We shouldn't have to apply an enormous, reactive process each time to root them out. What can be done up-front to prevent these deployments?

2022-09-09 03:27:50 @randtke @rcalo @Kobotic Especially with large language models seeming to be able to "handle" just about any domain, there are a million ways that people might try to apply this stuff and cause harm.>

2022-09-09 03:27:23 @randtke @rcalo @Kobotic It may be that there is a good path to this that involves leveraging existing processes, but in that case, people need to know. >

2022-09-09 03:26:49 @randtke @rcalo @Kobotic First off, let's avoid using the term "AI" without very careful definition. It just doesn't help. Second: What I'm looking for is something that will prevent under-resourced local jurisdictions from buying snake oil sold as "AI".>

2022-09-09 00:08:44 @CT_Bergstrom I wish I had seen this before I went to CVS this morning only to be turned away **by the pharmacist** because they apparently don't take my insurance. (Fortunately, I was able to get an appointment at Walgreens for tomorrow.)

2022-09-08 19:23:08 @er214 The worry that I have though in case of something like shotspotter, is that the underlying data (recordings of gunshots &

2022-09-08 16:22:36 This looks like a really interesting research program! https://t.co/s8XKkSZ127

2022-09-08 16:21:57 More from the UK. (ALT-less photos are screen caps of the linked webpage.)https://t.co/aJjWdrU7xh

2022-09-08 16:17:07 @HBWHBWHBW It struck me as likely not an isolated incident but rather something likely to occur repeatedly. And so long as public services are underfunded, and civil servants undereducated about what "AI" even is, I see a risk of lots of these projects actually getting taken up.

2022-09-08 16:15:53 @HBWHBWHBW That makes sense. The proximal cause of my tweet was actually info about a volunteer project where some folks thought they would help with suicide prevention by doing some text classification over Twitter data.>

2022-09-08 16:10:26 On the issues with "border tech" which are extra vexxed and hard to counteract: https://t.co/VjB5z4ixOS

2022-09-08 16:09:36 @Nanjala1 Yes, that is definitely the kind of service I'm worried about and you're right that it's one that is extra vexxed because the people affected have (even) less leverage.

2022-09-08 16:08:14 From Human Rights Watch: https://t.co/oGdIChG4l3

2022-09-08 16:07:25 More AI registries: https://t.co/TWpirLSf0q

2022-09-08 16:07:01 AI registries: https://t.co/IpvV8wu7rt

2022-09-08 16:06:35 @Kobotic @rcalo Thanks, Kobi. Will that help provide protections against (willy nilly) incorporation of machine learning into public services?

2022-09-08 16:04:44 Resources for pushing back against algorithmic decision making in public benefits: https://t.co/gxyRgQBPtP

2022-09-08 16:04:11 @merbroussard It wasn't -- what a great resource! Thank you.

2022-09-07 15:39:57 RT @emilymbender: If you start with a false premise, you can prove anything. Corollary: if you start with a false premise, you can end up w…

2022-09-07 14:51:36 @Abebab Congratulations!!

2022-09-07 13:37:18 RT @Abebab: dismissing AI ethics work as one that "fails to offer actionable proposals for improvement" is like saying "if the issue at han…

2022-09-07 03:55:23 RT @emilymbender: @AlexCEngler I think this story about the radium collecting roommate is a good metaphor. Data which are relatively innocu…

2022-09-07 03:54:19 @AlexCEngler I think this story about the radium collecting roommate is a good metaphor. Data which are relatively innocuous in isolation/in their natural state can become dangers when stored in big piles. https://t.co/A6AHBGkFtV

2022-09-07 03:50:52 @AlexCEngler Obviously, the Googles and Metas of the world should also be subject to strict regulation around the ways in which data can be amassed and deployed. But I think there's enough danger in creating collections of data/models trained on those that OSS devs shouldn't have free rein.>

2022-09-07 03:49:41 @AlexCEngler Do you really see no responsibility on the part of those who created the models &

2022-09-07 03:48:48 @AlexCEngler 2. What about when HF or similar hosts GPT-4chan or Stable Diffusion and private individuals download copies &

2022-09-07 03:47:29 @AlexCEngler Perhaps this could be handled by disallowing any commercial products based on under-documented models, leaving the liability with the corporate interests doing the commercializing, still.HOWEVER:>

2022-09-07 03:46:36 @AlexCEngler 1. If part of the purpose of the regulation is to require documentation, the only people in a position to actually thoroughly document training data are those who collect it. >

2022-09-07 03:45:48 @AlexCEngler Sorry to be a little slow there -- needed time to read your piece. Here are the things I am worried about, if OSS "GPAI" (ugh) is free from regulation:>

2022-09-06 23:07:30 @robtow I understand time zones, TYVM. The point is they didn't specify and then were rude about it.

2022-09-06 21:36:10 RT @timnitGebru: @emilymbender The lack of regulation is what stifles innovation by hijacking our time, constantly forcing us to cleanup ra…

2022-09-06 20:18:08 RT @FriendsOfAI: Streaming now: Mystery AI Hype Theater 3k - with @alexhanna &

2022-09-06 19:38:19 @huggingface Coda: Also, let's not just go around presupposing that "general purpose AI systems" are a) something that we have or will have soon and b) are actually desirable. Certainly the things that are getting called that now aren't clearly beneficial. https://t.co/qZiX1jWcsT

2022-09-06 19:36:25 And I expect better than this from @huggingface "top of the value chain"?! This just reads as self-aggrandizement. HF of all actors in this space should be making specific suggestions to improve the legislation, not whining about it. https://t.co/4GMNSaefAd

2022-09-06 19:34:48 On the plus side, I like these comments from Mike Cook: >

2022-09-06 19:33:20 Not surprised to see Oren's name attached to that comment, actually. He's been on a "speed at all costs" kick for a long time. https://t.co/zp5gPWJz4m>

2022-09-06 19:32:20 Second, "chilling effect". This is one of those terms that people throw around as if it's always a bad thing. The whole field needs to chill out and slow down, actually. And what we need from the likes of AI2 and HF surely isn't "catching up" with Google &

2022-09-06 19:30:34 How do people get away with pretending, in 2022, that regulation isn't needed to direct the innovation away from exploitative, harmful, unsustainable, etc practices?>

2022-09-05 16:47:36 RT @csdoctorsister: I'm so happy that I got all of the edits done for my new book, Data Conscience: Algorithmic Siege on our Humanity, befo…

2022-09-05 04:11:02 RT @timnitGebru: We don't have the space to imagine the technological futures that work for us because we're always putting out fires where…

2022-09-05 03:56:11 RT @kenarchersf: Read this, for all of you who wonder why AI ethicists don’t see the future benefits of AI… https://t.co/DG9TuQrYw7

2022-09-04 12:37:27 RT @emilymbender: Yes! Mystery AI Hype Theater 3000 is turning into a (mini?) series, not least because 1hr wasn't enough to even scratch t…

2022-09-04 03:02:16 @becauselangpod The textbook I'm talking about: https://t.co/DMBeqpBDHa See Ch. 4.

2022-09-04 03:00:28 @becauselangpod That is: while there are some semantic generalizations about individuatability, there are also just some morphosyntactic facts to memorize. Ex: cutlery &

2022-09-04 02:59:08 @becauselangpod Also, the same episode involves some discussion of count v. mass nouns (specifically for vegetables) in English. The answer, I believe (based on what Tom Wasow &

2022-09-04 02:52:28 Listening to "Mailbag of Ew" on the @becauselangpod 's Patreon and I've gotta say it warms my Gen X heart to hear Millennials complaining about being made fun of by Gen Zers...

2022-09-04 00:21:34 So the question is: Is there anything further for me to do here? I strongly suspect that their whole set up is a home for fraudulent papers. But I don't have the time or the expertise to actually check the others. Should I post a link to their journals for others to check?

2022-09-04 00:18:26 And then:"We invite every researchers to our journals. We have several journals, and our journals are not only economics. Please check again our OJS website. Indeed, thank you for your suggestions. We are only publishing articles that are in the same focus and topic.">

2022-09-04 00:17:35 "Please check the website. It has been withdrawn." (Indeed, the link to the paper now redirects to a TOC for the issue that doesn't have the paper in question.)>

2022-09-04 00:16:39 If you don't withdraw the article, I will do what I can to expose your journal for the fraudulent papermill that I believe it to be." They replied to this within 3 minutes to say:>

2022-09-04 00:16:02 If you don't withdraw the whole article, that is a very clear signal that your whole journal is a fraudulent papermill. (Another indication: you invited me, a linguist, to submit to a journal ostensibly about economics.)>

2022-09-04 00:15:50 Your reviewers failed to catch that --- and I strongly suspect that authors who would do such a thing have also just made up their entire article.>

2022-09-04 00:15:29 I wrote back: "I think it's not just mistakes. They referenced at least two articles that were completely irrelevant to the points they claimed to be citing them for. >

2022-09-04 00:15:05 In the same message, they added: "Please if you have papers, we are ready to publish your paper. It is our honor to receive your paper. Our journals are free of charge." >

2022-09-04 00:14:17 So here's the promised report-back. The editors wrote back quickly to say they were consulting with the authors and then less than a day later to say that those citations were "mistakes" and the authors were fixing them.>

2022-09-03 22:55:00 @AnthroPunk This is something else entirely. They made up a point about something in another field entirely, and then tossed a citation to our paper on it. There is zero connection.

2022-09-03 22:29:19 @Kobotic Thank you. I guess I'm trying to understand what the legal value add is of putting these constraints in a license. For the well-intentioned, the license probably helps. For those dismissive of these concerns, are there really any actionable constraints?

2022-09-03 22:21:43 @Kobotic Sorry, I may be using terms clumsily. What I meant here is Stability AI, who are releasing the Stable Diffusion model under the RAIL license via HuggingFace. If someone accepts that license but then doesn't follow its terms, would someone other than Stability AI/HF have standing?

2022-09-03 22:15:15 @Kobotic Thanks.Is it enough for the person experiencing harm to seek for the code to be withdrawn from use, etc, or do they have to get the licensor to take action on their behalf?

2022-09-03 21:51:01 IOW, does a license like RAIL actually provide recourse to people who are harmed if someone uses SD to churn out demeaning/disparaging content? If so, how is that operationalized?

2022-09-03 21:49:52 @huggingface Curious what @mmitchell_ai @rcalo or @Kobotic think, if any of you have time :)

2022-09-03 04:19:05 @trochee @sminnen The paper does not look like the output of a text synthesis machine. It looks like desperate scholars trying to get a line on their CV + some weird ideas about cryptocurrencies.

2022-09-03 04:12:38 @desai_pratik They may well exist here, too, but this particular journal is based in India.

2022-09-03 04:10:23 @trochee I only checked one other paper they cite (picking the second least relevant one in their bibliography) and found a similar utter lack of connection.

2022-09-03 04:09:03 @mixedlinguist IKR?

2022-09-03 04:08:47 @trochee I'm guessing it's actually that we refer to the GOLD ontology (general ontology of linguistic description)

2022-09-03 03:57:46 I'll give it a couple of days, but I'm really not optimistic. The whole journal is probably trash. I'll report back about what happens, but from the looks of it, it's likely an outfit designed to meet the needs of scholars who have to pad their CVs.

2022-09-03 03:56:27 I've emailed the editor of the journal alerting them and suggesting that they a) retract this paper and b) recheck their reviewing processes. >

2022-09-03 03:55:48 The digital gold has so many advantages but depends on investor trustworthiness." Reader, I assure you: We say nothing of the sort.>

2022-09-03 03:55:30 "Bender and Langendoen (2010) concluded that in today's ecommerce generation every person invests digitally and digital gold comes the under this part digital gold is not available physically but it's internet currencies. >

2022-09-03 03:54:31 Well that's a new one. Just got a citation alert saying that my 2010 paper (with Terry Langendoen) "Computational Methods in Support of Linguistic Theory" was cited in a paper on "digital gold". Clicked through and found this wild claim:>

2022-09-02 20:58:02 RT @kemi_aa: MY DEPARTMENT IS HIRING https://t.co/9p5ldg1Umd

2022-09-02 17:23:00 @alexhanna And if you missed Part 1, you can catch up with the recording here: https://t.co/XCzXnZnfOk

2022-09-02 17:22:32 Yes! Mystery AI Hype Theater 3000 is turning into a (mini?) series, not least because 1hr wasn't enough to even scratch the surface of the piece we're working through. Join me &

2022-09-02 17:13:54 @alexhanna Here's the recording of Part 1! https://t.co/XCzXnZnfOk

2022-09-02 17:12:56 For all those who were asking for a recording, here it is! MAIH3K, Part 1, with @alexhanna. And Part 2 is coming next week! https://t.co/XCzXnZnfOk

2022-09-01 01:11:50 Source: https://t.co/lznFZIzVj2

2022-09-01 01:11:38 Today's delightful discovery: Not only does Alaska have ranked choice voting, but they've also provided their FAQ about it in: Yukon Yup’ik, Hooper Bay Yup’ik, Gwich’in, Norton Sound Kotlik Yup’ik, Bristol Bay Yup’ik, Chevak Cup’ik, General Central Yup’ik, Spanish, English

2022-08-31 23:03:32 RT @LeonDerczynski: Time after time, datasets without documentation turn out to be stuffed with toxic, unjust, and illegal data. The models…

2022-08-31 21:15:06 RT @timnitGebru: We're back after some technical difficulties. https://t.co/okoVAXvmlG

2022-08-31 21:13:46 RT @alexhanna: This was super fun, all! Stay tuned for Part 2 coming next week, and the recording to be posted on YouTube! https://t.co/1uJ

2022-08-31 19:51:23 RT @alexhanna: Beginning in 10 minutes!

2022-08-31 19:35:13 @ruthstarkman @alexhanna That's the plan!

2022-08-31 19:32:00 RT @CriticalAI: #CriticalAI is so excited for this stream! Join @emilymbender and @alexhanna at https://t.co/2YxY4ppEZS for what will defin…

2022-08-31 19:11:00 @timnitGebru

2022-08-31 19:06:31 RT @DAIRInstitute: This is happening in 1 hour. https://t.co/51XYPswUGt

2022-08-31 17:08:16 RT @emilymbender: We can't have sensible discussions of so-called "AI" (incl adverse impacts of work done in its name) if we cede framing o…

2022-08-31 16:48:52 @michaelbolton Thank you!

2022-08-31 16:48:42 RT @michaelbolton: A succinct and brilliant statement. Beautifully put. https://t.co/ivVRKaoUTw

2022-08-31 16:47:52 Credit: I picked up the phrase "support human flourishing" from Batya Friedman, specifically her keynote at #NAACL2022

2022-08-31 16:36:10 @random_walker @sayashk It seems to take a lot of vigilance to keep it from seeping in!

2022-08-31 16:34:39 @random_walker @sayashk I didn't think you were, which is why it really stood out. But quoting it without distancing yourselves from it makes things confusing at best. Especially given how much of the discourse does adopt that framing.

2022-08-31 16:32:45 We can't have sensible discussions of so-called "AI" (incl adverse impacts of work done in its name) if we cede framing of the debate to those who see "AI" (or "AGI") as the overarching goal itself, rather than the building of tools that support human flourishing.

2022-08-31 16:31:20 @random_walker @sayashk For example, they write: "Noted AI researcher Rich Sutton wrote an essay in which he forcefully argued that attempts to add domain knowledge to AI systems actually hold back progress." To which I ask: progress in/towards/for what?>

2022-08-31 16:30:36 I really enjoyed this first installment of @random_walker and @sayashk's substack. The analysis of how folks into deep learning end up with the attitudes they so frequently seem to hold is astute. I'm also left musing about the perniciousness of the framing of discussions on AI: https://t.co/KB6FeZ79MQ

2022-08-31 03:08:27 @simonw Well geez --- I'm not a moral philosopher. Just someone who has been reading up on all that literature I was pointing you to.

2022-08-31 03:08:03 @simonw But more generally: asking what you personally can/should do wrt to running a model in the privacy of your own machine seems like a very strange place to allocate energy, given the harms playing out in the world. If you care, how about looking at what you can do more broadly?

2022-08-31 03:06:50 @simonw What are you doing with the outputs? Are you publishing them? How are you contextualizing them? Are you sensitized to the stereotypes they might be reproducing? Are you sensitized to the impact on the artists whose work was appropriated in the training of the models?>

2022-08-31 03:05:24 @kenarchersf Thank you. We've found a workaround for now. What I'm particularly looking for long term is a tutorial for how to connect to *manual* captions, produced live by a human captioner. The existing documentation seems more focused on auto-captions...

2022-08-31 03:03:58 @simonw Reading the existing literature about the harms ... won't help you decide what you want to do personally, given those harms?

2022-08-31 02:57:59 Seriously: before you have that conversation, sit down and read work by Safiya Noble, Ruha Benjamin, Abeba Birhane, Deb Raji, Timnit Gebru, Virginia Eubanks, at least.

2022-08-28 03:33:18 RT @timnitGebru: To anyone else, if you haven't done so, read this one by @Abebab @vinayprabhu &

2022-08-28 00:43:42 RT @timnitGebru: Please. https://t.co/zwYtI1yuJj

2022-08-27 23:25:31 RT @emilymbender: While we're arguing (again, *sigh*) w/techbros who think that any efforts to not flood the internet with automatically ge…

2022-08-27 23:18:25 RT @mmitchell_ai: A pattern I've seen emerge. Anti-AI-Ethics people: "When you say 'harm' that doesn't really mean anything." Also Anti-AI-…

2022-08-27 23:17:01 RT @mmitchell_ai: The writing from @schock here is so, so good.

2022-08-27 19:57:47 @alexhanna Yeah, I had in mind more the sarcastic "No, tell us what you REALLY think"

2022-08-27 19:19:05 Ever saw a tweet from me or @alexhanna and thought: "Tell us what you really think?" Join us on Wednesday. We'll be doing that. https://t.co/QMWjl25gIe

2022-08-27 19:15:53 @RobertClewley @schock In doing so, we can work to de-normalize structural discrimination (and techsolutionism) and thus make them easier to address through political processes. And while deprogramming individual techbros would also be nice, I don't think any strategy needs to rely on that.

2022-08-27 19:14:42 @RobertClewley @schock Speaking for myself: I think there is tremendous value in learning how to recognize and articulate these problems (and following the people I recommended plus reading their work is a great start). And then making a habit of speaking up.>

2022-08-27 14:44:17 add*

2022-08-27 14:08:10 Gotta ad: "complaining" isn't a good word for the scholarship I'm referencing (just in my tweet because of what I was responding to). Better is: documenting harms and setting them in their cultural context, both in terms of cause &

2022-08-27 13:35:45 Or as a layperson's summary: https://t.co/JTluwztoez

2022-08-27 13:35:07 If you prefer to read papers in tweet thread form: https://t.co/3Fdgwmwegf

2022-08-27 13:34:06 Okay, so in addition to all of the other things wrong with this (see my feed and @timnitGebru's among others), I also have to point out: generative models are NOT search engines and NOT a good way to meet information needs. See @chirag_shah and Bender 2022: https://t.co/rkDjc4kDxj https://t.co/1lZ1pfBaXb

2022-08-27 13:31:17 @huggingface The RAIL license is a good idea, but just posting it on the fence next to the fire isn't enough. HF needs to hold to high standards for what they will host, and not let the "OMG progress is so fast!1!!" crowd rush them through vetting processes. https://t.co/rzVg4cHDJL

2022-08-27 13:27:00 And we need organizations like @huggingface (presently hosting the Stable Diffusion model for public download) to act with courage and bring their might to the firefighting effort. >

2022-08-27 13:24:58 HOWEVER! We can get more person hours on this, if more people are willing to jump in. Wanna become a firefighter? Start by following &

2022-08-27 13:23:31 But we also need to organize our fire crew to go after the massive blaze and there are only so many hours in the day. >

2022-08-27 13:22:21 My takeaway this morning is that it is definitely worth it to put out the "brush fires" of things like the Stability AI crew claiming that there's absolutely nothing wrong with releasing models that will reinscribe and amplify racism, lest they grow and merge with the wildfire >

2022-08-27 13:20:00 @csdoctorsister Back to @schock 's passage, which finishes with: https://t.co/j5iWhssB5p

2022-08-26 14:49:04 @jessgrieser @JoFrhwld OTOH I really do have to update the photo of me that's there....

2022-08-26 14:44:17 @jessgrieser @JoFrhwld My publications page lists things from 1995 to 2022, TYVM

2022-08-26 14:34:45 @jessgrieser @JoFrhwld Though our faculty web server doesn't use ~ in the addresses. So maybe just middle aged?

2022-08-26 14:34:00 @jessgrieser @JoFrhwld I guess I'm really old...

2022-08-26 14:33:14 RT @downtempo: Doing AI/ML/DS work in Africa and want to know about upcoming conference, workshop, and event deadlines?Check out https:/…

2022-08-26 13:27:03 @UpFromTheCracks Thank you for your work!

2022-08-26 12:15:21 RT @emilymbender: This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of…

2022-08-25 19:21:36 RT @emilymbender: I could go on, but in short: This is required reading for anyone working anywhere near #AIethics, algorithmic decision ma…

2022-08-25 19:20:48 I could go on, but in short: This is required reading for anyone working anywhere near #AIethics, algorithmic decision making, data protection, and/or child "welfare" (aka family policing). https://t.co/qyHc9Juemd

2022-08-25 19:19:27 4. The ways in which the questions asked determine the possible answers/outcomes. 5. Again, the absolutely essential effects of lived experience and positionality to understanding the harms of those outcomes.>

2022-08-25 19:18:24 2. The absolutely essential effects of lived experience and positionality to understanding those harms. 3. The ways in which data collection sets up future harms.>

2022-08-25 19:15:59 .@UpFromTheCracks 's essay is both a powerful call for the immediate end of family policing and an extremely pointed case study in so many aspects of what gets called #AIethics: 1. What are the potentials for harm from algorithmic decision making?>

2022-08-25 19:12:15 RT @UpFromTheCracks: Just finished a panel on abolishing family policing and it’s algorithms featuring @DorothyERoberts and @b_lts_ . Now m…

2022-08-25 19:12:11 This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, &

2022-08-25 17:36:38 This looks like it will be amazing! https://t.co/9jHTOrda9W

2022-08-25 13:00:28 RT @SashaMTL: Many of the category definitions and images are strongly biased towards the U.S. and Europe, vastly under-representing biodiv…

2022-08-25 13:00:19 RT @SashaMTL: A third of ImageNet categories are animals, but what do they actually contain? @david_rolnick and I worked with ecol…

2022-08-25 12:59:42 RT @david_rolnick: According to ImageNet, fish are dead, birds are American, and 98% of ferrets are wrong. In new work with @SashaMTL (http…

2022-08-25 00:52:21 RT @timnitGebru: https://t.co/m9hVaeV1uH "We will also be releasing open synthetic datasets based on this output for further research." I…

2022-08-24 18:03:24 @zehavoc @TeachGuz @alexhanna That's the plan!

2022-08-19 13:26:19 RT @emilymbender: This was a really fun conversation --- I appreciated @pgmid 's questions and the chance to get to chat with @ev_fedorenko…

2022-08-18 20:52:33 So I think this speaks to the need for CS departments to do (more) "service" courses like math departments do, for broad investment in institutions of higher ed across all fields, and for tech firms to broaden their notion of who they stand to benefit from hiring.

2022-08-18 20:51:10 But we *desperately* need our tech workforce to include centrally people who are deeply trained in other fields and modes of scholarship as well.>

2022-08-18 20:50:21 Yes, it is valuable to have some of the workforce be primarily trained in computer science and software engineering. And yes it is valuable for a larger sector of the workforce to have programming skills.>

2022-08-18 20:49:20 RT @Kobotic: Who decides who decides what conversations are allowed about #AI? Who has power &

2022-08-18 20:49:10 I don't think it follows, however, that we should respond to continually growing demand for tech workers by continually growing CS majors. >

2022-08-18 20:48:09 I definitely see it as one of the functions of a public university to provide both a well-qualified workforce to the state to which it belongs and to provide educational opportunities that help students connect to well-paid careers.>

2022-08-18 20:46:59 Apropos of this op-ed, some thoughts:>

2022-08-18 20:41:55 Another corollary: group emails with different requests to different people, even if related, quickly turn into confusing threads where one of the requests gets buried in discussion of the other. Again, separate is better, even if it's the same set of addressees!

2022-08-18 20:40:02 Also, sometimes people email me to ask for things that are not my job --- if one of those is mixed in with requests that are my job, the whole thing gets harder to deal with, too. So there, too, separate is better!

2022-08-18 20:39:16 And email w/multiple requests requires more time and thus sits in my inbox until I can deal with ALL of them all at once. That ends up being far more of an imposition. So, unless the requests are intrinsically bound up with each other: separate emails are better!>

2022-08-18 20:38:12 Pro-tip for emailing busy people: It might seem like putting multiple questions/requests into one email is more polite, since it means only one email. In my experience, however, the opposite is true. >

2022-08-18 19:44:55 RT @turkopticon: In February we delivered a petition to Amazon #MTurk with over 2000 signatures calling on them to amend their mass rejecti…

2022-08-18 19:44:51 RT @alexhanna: Hey @amazon, you're stealing from Amazon Mechanical Turk workers by allowing mass rejections. @Turkopticon is demanding that…

2022-08-18 15:00:30 Is the habit of starting papers with remarks about "recently" or "increasingly" etc found across fields, or is that mostly a CS thing? Also, how does it relate to narratives of "rapid progress" in the field?

2022-08-18 13:18:44 @ocramz_yo Yeah it definitely feels like: This website is an *experience* and you shall experience it the way we have designed it to be experienced. Looking for some info here? Too bad

2022-08-18 00:51:28 Web pages that don't scroll at the rate you move the mouse -- why?And has anyone made a chrome app that would allow me to disable that behavior?

2022-08-17 18:47:03 @stanfordnlp @DanRothNLP @dilekhakkanitur @chrmanning @Google I know that @naaclmeeting is going to make the recordings from the conference publicly available at some point, but this doesn't look like the actual NAACL channel.

2022-08-17 18:41:10 This was a really fun conversation --- I appreciated @pgmid 's questions and the chance to get to chat with @ev_fedorenko ! https://t.co/KSqyotYttp

2022-08-17 17:49:34 RT @pgmid: Brains, linguistics, meaning and understanding, language vs. thought, and much more in conversation with the excellent @ev_fedor…

2022-08-17 16:19:46 I have not tagged Dr. Thornton here on the guess that she doesn't want to keep seeing these tweets in her mentions, but I will make sure she receives them. 4/4

2022-08-17 16:19:38 I also put out some polls (again in a mixture of annoyance and curiosity) that landed as mocking subtweets. This helped nothing and I'm sorry. 3/

2022-08-17 16:19:29 I'm sorry. I should have tweeted only curiosity or not tweeted at all. 2/

2022-08-17 16:19:18 I would like to publicly apologize to Dr. Pip Thornton. I replied to a tweet from her that I was tagged in with a mixture of annoyance and curiosity that landed as punching down. 1/

2022-08-17 02:45:25 RT @EAwatchdog: @timnitGebru From a 38$k grant. not to single out individuals, but just to show the kinds of things that are sufficient to…

2022-08-16 22:01:14 @michaeljcoffey It feels like there should be a "Being John Malkovich" spoof about zucchini. You know, that scene where everyone is John Malkovich, except everyone/everything is a zucchini...

2022-08-16 21:48:12 RT @timnitGebru: Shouldn't it be a conflict of interest for reporters in the effective altruism movement, at @TIME &

2022-08-16 15:19:57 RT @emilymbender: Had some fun this morning riffing with @danmcquillan and @cfieslerSee next tweets for sources of this conversation.

2022-08-16 00:02:06 @ZaneSelvans @cfiesler No. The prompt was other --- see the alt text in @cfiesler 's tweet.

2022-08-15 22:02:28 Series of polls, 4/4: Do you feel like you can tell, on reading tweets, which mentions are trying to draw the tagged person's attention, which are trying to draw others' attention to the tagged person, and which are just referential?

2022-08-15 22:01:05 Series of polls, 3/n: Would you use a twitter tag to just refer to someone, without any intent to draw their attention?

2022-08-15 21:59:33 Series of polls, 2/n: When you tag someone on twitter, are you typically cognisant of the fact that by doing so, you're potentially drawing others' attention to them?

2022-08-15 21:58:43 Series of polls, 1/n: When you tag someone on twitter, are you typically cognisant of the fact that you're generating a notification to them?

2022-08-15 21:09:37 @Pip__T @hfordsa Sorry to bring negative energy into your day. Please interpret as: I'm really curious to see what is *in* this module. A list of twitter handles doesn't add up to that. Will your reading list or syllabus be openly available?

2022-08-15 20:49:40 @jvitak Ah, thank you. Turns out I had to first enable custom keyboard shortcuts in order to get anything more than just "turn them all off", which I definitely DON'T want, but this page had the info: https://t.co/ZqCyaI602D ...and I wouldn't have gone looking for it w/o your tweet.

2022-08-15 20:40:14 Does anyone know how to turn off the keyboard shortcut to the "tasks" app in gmail? I never ever ever want it, but if I start a reply to an email with the word "That" or similar, it frequently pops up, interrupting me &

2022-08-15 20:30:53 @garicgymro Yes, yes, n/a, Seattle

2022-08-15 20:29:16 On Twitter, surrounding a tweet with the emoji means that the tweet contains info that is (assuming some audience)

2022-08-15 19:34:29 @Pip__T @UoE_EFI I'm more interested in *how* my work is being used. Which paper? Under what heading? In conversation with which other work? A link to your actual syllabus would answer those questions and be interesting. Your uni's marketing page doesn't help me.

2022-08-15 04:15:38 RT @shengokai: All the concerns these "longtermists" have about AI and so on have been debated in much more nuanced ways by Critical Code S…

2022-08-13 14:16:59 @LeonDerczynski I'm so sorry. May their memory be a blessing.

2022-08-13 12:55:23 RT @emilymbender: Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it…

2022-08-12 00:48:32 I guess if a) weren't true, I might have sent an email (grudgingly) to try to determine b). But c'mon people: if you're asking for free labor you simply cannot keep the deadline a secret until people agree. Grrr.

2022-08-12 00:47:44 Almost just agreed to review a ms, because it did look interesting, but then I noticed that a) the journal is published by $pringer and b) there was no way to tell when the review would be due before agreeing. Either one of these is a clear no. Buh-bye.

2022-08-11 18:20:57 @HeidyKhlaaf Yes, nice :)

2022-08-11 18:18:27 @HeidyKhlaaf Nice... could use alttext though.

2022-08-11 18:09:21 ... I'm sure there's more to say and I haven't even looked at the EA puff piece in Time, but I've got other work to do today, so ending here for now.

2022-08-11 18:05:44 I'm talking about organizations like @AJLUnited @C2i2_UCLA @Data4BlackLives and @DAIRInstitute and the scholarship and activism of people like @jovialjoy @safiyanoble @ruha9 @YESHICAN and @timnitGebru>

2022-08-11 18:04:00 If folks with $$ they feel obligated to give to others to mitigate harm in the world were actually concerned with what the journalist aptly calls "the damage that even dumb AI systems can do", there are lots of great orgs doing that work who could use the funding:>

2022-08-11 18:02:48 But none of that credibly indicates any actual progress towards the feared? coveted? eagerly anticipated? "AGI". One thing it does clearly indicate is massive over-investment in this area.>

2022-08-11 18:01:48 Yes, we are seeing lots of applications of pattern matching of big data, and yes we are seeing lots of flashy demos, and yes the "AI" conferences are buried under deluges of submissions and yes arXiv is amassing ever greater piles of preprints. >

2022-08-11 18:00:56 To his credit, the journalist does point out that this is kinda sus, but then he also hops right in with some #AIhype:>

2022-08-11 17:58:33 And then of course there's the gambit of spending lots of money on AI development to ... wait for it ... prevent the development of malevolent AI. >

2022-08-11 17:47:52 "Figuring out which charitable donations addressing actual real-world current problems are "most" effective is just too easy. Look at us, we're "solving" the "hard" problem of maximizing utility into the far future!! We are surely the smartest, bestest people.">

2022-08-11 17:46:41 And that's before we even get into the absolute absurdity that is "longtermism". This intro nicely captures the way in which it is self-congratulatory and self-absorbed:>

2022-08-11 17:44:45 Once again: If the do-gooders aren't interested in shifting power, no matter how sincere their desire to do good, it's not going to work out well.>

2022-08-11 17:44:25 And yet *still* they don't seem to notice that massive income inequality/the fact that our system gives rise to billionaires is a fundamental problem worth any attention.>

2022-08-11 17:43:32 "Oh noes! The movement is now dominated by a few wealthy individuals, and so the amount of 'good' we can do is depending on what the stock market does to their fortunes.>

2022-08-11 17:40:54 Poor in the US/UK/Europe? Directly harmed by the systems making our homegrown billionaires so wealthy? You're SOL, because they have a "moral obligation" to use the money they amassed exploiting you to go help someone else.>

2022-08-11 17:39:52 Another consequence of taking "optimization" in this space to its absurd conclusion: Don't bother helping people closer to home (AND BUILDING COMMUNITY) because there are needier people we have to go be saviors for.>

2022-08-11 17:37:20 Third: Given everything that's known about the individual and societal harms of income inequality, how does that not seem to come up? My guess: These folks feel like they somehow earned their position &

2022-08-11 17:36:30 Second: Your favorite charity is now fully funded? Good. Find another one. Or stop looking for tax loopholes. >

2022-08-11 17:34:12 "Oh noes! We have too much money, and not enough actual need in today's world." First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)>

2022-08-11 17:32:01 Just a few random excerpts, because it was so painful to read...>

2022-08-10 17:43:43 RT @rctatman: If you are currently working on a project like this:1. I'm not upset with you, I'm sure that you're coming from a place of…

2022-08-10 17:38:24 RT @robinberjon: This is a great way to capture what's wrong with the search affordances that dominate today. They are cognitively inferior…

2022-08-10 17:23:01 @joshraclaw My parents are from NJ though &

2022-08-10 17:22:26 @joshraclaw I'd say lollipop, but sucker in that sense isn't unfamiliar. (Dialect region: PNW/Seattle.)

2022-08-10 17:13:46 RT @emilymbender: From what I've seen about Effective Altruism, it puts competition (what's the "best" way to help?) and very cerebral acti…

2022-08-10 17:07:11 RT @timnitGebru: Everyone who knows the deal about these ideologies needs to speak up. I know there are those of you who are afraid of the…

2022-08-10 16:40:49 @jennycz @kirbyconrod Yeah, if there are funds available somewhere institutionally for OA for work done at that institution, I think there's a non-zero chance they'd be set up to still work even if the publication happens after the researcher has moved on.

2022-08-10 04:09:20 @KateKayeReports Indeed. I would love to see a shift towards more accurate terminology. See also: https://t.co/9vxKYVqHdo

2022-08-10 03:54:41 @KateKayeReports In that tweet, it's not "AI" but "superhuman" that I'm objecting to. See the rest of the thread...

2022-08-09 21:54:51 This looks really interesting -- and like something that has an actual claim to being about "democratizing" technology. https://t.co/VaRIu7OnTP

2022-08-09 21:34:24 RT @C2i2_UCLA: https://t.co/kaOlcBlozV

2022-08-09 20:46:20 RT @RadicalAIPod: Hey #AcademicTwitter!! Can you believe it's almost time for school already?! If you're like us, then you're probably wond…

2022-08-09 17:11:17 This. It's been rather astonishing to watch as people get upset at the idea that there should be any accountability (even at the level of being called out) around using dehumanizing analogies. https://t.co/2Wsh68wMZm

2022-08-09 16:19:45 @MadamePratolung I think I tried to read that and couldn't make it through. Too many more important demands on my time...

2022-08-08 22:19:15 RT @HabenGirma: #HelenKeller lived an extremely active life through her senses of touch, smell, taste &

2022-08-08 21:36:52 @HabenGirma Thank you for taking the time to read it &

2022-08-08 18:15:14 Meanwhile, if you actually believe in working against bigotry, start close to home, aka come get your people:https://t.co/iv5BpvWljF

2022-08-08 18:14:31 The best move, when people who are impacted by systems of oppression point out bigotry is to set aside any defensiveness and try to learn from the experience.>

2022-08-08 18:13:21 If the feedback you consistently get for 'trying to discuss something in public' is being called a bigot ... chances are you've at the very least either engaged with bigoted ideas or not learned how to discuss the issue at hand appropriately.>

2022-08-08 17:23:52 Meanwhile, over the weekend a big name in AI was whining about how you can't talk about "AI alignment" online without being called a bigot. Wanna not be seen as a bigot? How about actively cleaning up the discourse being done in the name of your community.

2022-07-27 12:04:27 RT @emilymbender: In Stochastic Parrots, we referred to attempts to mimic human behavior as a bright line in ethical AI development" (I'm p…

2022-07-26 18:37:00 @f_dion Quick lesson in how Twitter works. If you hit "Reply" you're replying to me, which makes it sound like you think I don't already know the info in your tweet. If you hit "Quote Tweet" you can very appropriately share your thoughts on what I said to your followers.

2022-07-26 16:52:17 @luke_stark I'll be speaking on Friday at #COGSCI2022 about dehumanization in the field of AI. Folks who take such a definition seriously are one example.

2022-07-26 16:50:52 RT @emilymbender: Language is one important tool we have for communicating with other people, sharing ideas, &

2022-07-26 16:50:28 Thanks to @shayla__love for this reporting. https://t.co/hxinQjxILV

2022-07-26 16:49:32 And so we, collectively, need to answer the questions: What are the beneficial use cases for such text synthesizing machines? How do we create them with and insist on sufficient transparency to avoid preying on human empathy?

2022-07-26 16:48:06 GPT-3, or any language model, is nothing more than an algorithm for producing text. There's no mind or life or ideas or intent behind that text. >

2022-07-26 16:47:11 Language is one important tool we have for communicating with other people, sharing ideas, &

2022-07-26 16:45:16 @mmitchell_ai As Dennett says in the VICE article, regulation is needed---I'd add: regulation informed by an understanding of both how the systems work and how people react to them. https://t.co/IwqkhvFNnI>

2022-07-26 16:43:27 @mmitchell_ai Given the pretraining+fine-tuning paradigm, I'm afraid we're going to see more and more of these, mostly not done with nearly the degree of care. See, for example, this terrible idea from AI21 labs: https://t.co/qqe93Qf7SF>

2022-07-26 16:42:05 In Stochastic Parrots, we referred to attempts to mimic human behavior as "a bright line in ethical AI development" (I'm pretty sure that pt was due to @mmitchell_ai but we all gladly signed off!) This particular instance was done carefully, however >

2022-07-26 12:26:55 RT @emilymbender: Thinking back to Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab)'s great keynote at #NAACL2022. Sh…

2022-07-25 23:57:31 @datingdecisions I saw this tweet first without the other and thought that you were just praising twitter experts for being scrumptious (succulent).

2022-07-25 22:00:56 #NLProc https://t.co/e6pdRW7oc7

2022-07-25 18:40:59 Friedman's emphasis was on materiality &

2022-07-25 18:40:05 As an example, she gives an alternative visualization of "the cloud" that makes its materiality more apparent (but still feels some steps removed from e.g. the mining operations required to create that equipment).>

2022-07-25 18:38:36 Finally, I really appreciated this message of responsibility of the public. How we talk about these things matters, because we need to be empowering the public to make good decisions around regulation. >

2022-07-25 18:35:49 Similarly, there may be life-critical or other important cases where AI/ML really is the best bet, and we can decide to use it there, being mindful that we are using something that has impactful materiality and so should be used sparingly.>

2022-07-25 18:34:40 Where above she draws on the lessons of nuclear power (what other robust sources of non-fossil energy would we have now, if we'd spread our search more broadly back then?) here she draws on the lessons of plastics: they are key for some use cases (esp medical). >

2022-07-25 18:31:49 As societies and as scientific communities, we are surely better served by exploring multiple paths rather than piling all resources (funding, researcher time &

2022-07-25 18:30:46 Thinking back to Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab)'s great keynote at #NAACL2022. She ended with some really valuable ideas for going forward, in these slides: Here, I really appreciated 3 "Think outside the AI/ML box".>

2022-07-21 22:26:38 RT @cogsci_soc: #MarkYourCalendars#Keynote at #CogSci2022Resisting dehumanization in the age of AI July 29⏰  13:00-14:00 EDT Emily…

2022-07-21 17:56:02 @kirbyconrod @joshraclaw We need a special reactemoji for "I appreciate this turn for its linguistic structure (and possibly for other reasons, too)"

2022-07-20 20:04:21 @SameeOIbraheem Glad to hear it!

2022-07-20 04:58:59 @SameeOIbraheem /waves welcome!

2022-07-18 19:38:36 @kirbyconrod Weekly-ish. I try to test before e.g. gatherings and if it's been more than a week since I've tested for that reason, I'll be motivated to test just because.

2022-07-17 18:43:10 RT @WilliamWangNLP: If you attended #NAACL2022: Did you test positive for COVID-19? #nlproc

2022-07-16 19:41:11 RT @HabenGirma: How do you ask a friend to stop using ableist terms? Being called out can trigger shame &

2022-07-15 17:28:36 @qi2peng2

2022-07-15 00:00:25 @qpheevr @kirbyconrod Yeah, there's something of an art to figuring out which emails need ack notes and how to write ack notes so that they aren't perceived as needing ack notes. And sometimes I think there's generational differences involved too....

2022-07-14 23:56:08 RT @JordanBHarrod: New Video!................................ no. https://t.co/39F1YUhOSA

2022-07-14 21:53:53 RT @Abebab: if you read just one paper on why we shouldn't automate morality, make it this one by @ZeerakTalat et al

2022-07-14 13:38:11 RT @emilymbender: "Unsafe at any accuracy" strikes me as a very valuable &

2022-07-14 00:04:38 Audience view of #NAACL2022 and it's striking how high mask compliance is. (Same observation while there in person yesterday &

2022-07-13 23:58:27 Another wishlist item for hybrid conferences --- the ability to signal applause from afar. Zoom has this. Why doesn't @underlineio ? #NAACL2022

2022-07-13 23:54:38 That also helps with the "sense of community" part of it. I'm not just sitting by myself listening

2022-07-13 23:53:05 One simple thing that could definitely help is a visible count of online attendees. (I thought I saw that earlier this week, but not today.) That would flag: "Other people have come too" and help reassure folks that we're in the right place. https://t.co/519nn8hr7n

2022-07-13 23:24:27 @aryaman2020 Thank you.

2022-07-13 23:22:42 @underlineio It seems like it should be 100% possible to differentiate between "This link isn't live yet" and "Yeah, you're in the right place, but we haven't quite started".

2022-07-13 23:22:06 Here's another pointer for hybrid conferences: If there's some delay in starting a session, this should be made apparent through the online interface. Otherwise, lots of people are sitting individually wondering if they're in the wrong place! @underlineio #NAACL2022

2022-07-13 23:20:55 @jacobeisenstein Thanks! It's always very disconcerting when connecting online to see no indication of what's going on. I keep wondering if I've clicked the wrong link (as has happened in the past)...

2022-07-13 23:19:26 Anyone else attending #NAACL2022 online today? Are you able to get to the plenary? (I'm seeing "This event has not started yet" on Underline, which is a bit odd, given that it should have started ~5 minutes ago.)

2022-07-13 22:02:25 Attending #NAACL2022 virtually today, and noticing that I can recognize far fewer colleagues by voice than by sight. IOW, it's extra salient to me right now how important it is for folks to introduce themselves before asking questions!

2022-07-13 18:50:54 @evanmiltenburg @complingy @SeeTedTalk @annargrs @LucianaBenotti @boknilev I do see some value in very broad conferences, actually. The sessions that I end up in are fairly eclectic, and I probably wouldn't go to a whole conference on each theme...

2022-07-13 17:18:09 @zehavoc If that person has come here from a timezone further east, they probably woke up early, actually. The 8am slot would be painful for someone coming from Asia.

2022-07-13 17:16:14 @James__Carey Probably not super salient for this crowd.

2022-07-13 17:12:10 @zehavoc And compared to the fully online events, this is actually a relaxed/late start for us. The world is big and round, and the Pacific Ocean is big and sparsely populated...

2022-07-13 17:11:30 @zehavoc This is the result of location in Seattle (just E of the big water) + trying to make some of the live content accessible across more timezones!

2022-07-13 17:09:44 "Averaging beliefs is not an approximation for debate or 'accuracy'" in ethical debates. (From their slides.)

2022-07-13 17:08:50 @ZeerakTalat "Automating moral judgments" is my short summary of what they're describing (functionality of the Delphi model), and that might be a fully accurate take.

2022-07-13 17:07:27 "Unsafe at any accuracy" strikes me as a very valuable &

2022-07-13 17:06:32 "Automating moral judgments is a category error and unsafe at any accuracy" -- @ZeerakTalat et al at #NAACL2022 https://t.co/LvcAIIHVvf

2022-07-13 14:59:01 Yet another reason to not publish with Springer --- just clicked through on a paper I was interested in reading and got interrupted by at *^^&

2022-07-13 13:55:46 Those affordances meant that there were reasonably easy opportunities for interaction (ACL 2020 was all online, but I am hopeful this would bridge online+in-person) and thus it was possible to experience things together as a community in real time./fin

2022-07-13 13:54:51 Key features of RocketChat were: one channel per paper, plus the "plenary" channel, plus you could create other channels at will. You could subscribe to channels so it was easy to check if anything had happened in the conversations you were following. Responsive. Emoji reacts.>

2022-07-13 13:53:40 I also think that a responsive, inviting &

2022-07-13 13:51:47 When we think about how to design hybrid conferences, we should keep an eye on 2 and 3, and not just 1. I think the #NAACL2022 structured socials are an interesting step in this direction (will be attending one today!). >

2022-07-13 13:50:29 To elaborate a bit, what I was hoping to say was that I see three functions of conferences: 1) Sharing our research results + getting feedback 2) Community building in the form of shared experiences 3) Networking>

2022-07-13 13:49:21 @JesseDodge @mmitchell_ai Thanks, Jesse! So it seems like "old books" really aren't the dominant category...

2022-07-12 22:17:47 RT @LucianaBenotti: If you are interested in geographic diversity at #NAACL2022 you can check slide 6 in the business meeting slides below.…

2022-07-12 22:00:06 @BayramUlya Also impact of visa issues

2022-07-12 21:51:41 What % of training data of English LLMs is from say last 10 years? (Apropos of a question at #NAACL2022 suggesting that they include "a lot of old books"). @JesseDodge @mmitchell_ai do you know?

2022-07-12 21:39:03 @danyoel Just heard one in Trista Cao's presentation on (US) stereotypes captured in (English) LMs.

2022-07-12 20:08:58 @LucianaBenotti And 70% of attendees from Canada, too!

2022-07-12 20:07:41 #naacl2022 has attendees from 63 countries, of which 20 are only represented by on-line attendees -- @LucianaBenotti (99% of participants from China are attending virtually, too.)

2022-07-12 20:04:40 RT @maria_antoniak: “NLP is better for its partnership with linguistics, because linguistics grounds NLP as an application area where there…

2022-07-12 20:03:20 1967 on-site + 1007 online: updated #NAACL2022 attendance, per Dan Roth.

2022-07-12 19:20:00 RT @SeeTedTalk: Your very occasional reminder about the @naaclmeeting Latin America mailing list. Chugging away since 2009, it has ~ 200 m…

2022-07-12 15:53:55 RT @timnitGebru: "But there is an aspect of so-called effective altruism that, as a philosopher, I naïvely never thought to question: the i…

2022-07-12 14:09:02 @roydanroy @devoidikk Those do look super intimidating!

2022-07-12 14:01:05 @roydanroy @devoidikk Yes -- it's way more comfortable than the other N95s I've found and most importantly compatible with my glasses (progressives =>

2022-07-12 12:20:21 @roydanroy @devoidikk It’s an envomask.

2022-07-11 17:10:42 "It's time to start thinking out of the AI/ML box" -- Batya Friedman reflecting on the materiality of compute and its environmental impacts at #NAACL2022

2022-07-10 00:59:17 @ArthurCamara Thank you for your kind words and I'm glad you enjoyed the episode!

2022-07-09 14:06:36 RT @emilymbender: Here's how the latest one I did went: 1. Saw the headline with the bogus claim of "predicting" crime a week before it h…

2022-07-09 03:03:53 RT @chirag_shah: In the light of the recent discourse about Google's LaMDA, my paper with @emilymbender at #CHIIR2022 a few months ago seem…

2022-07-09 00:35:25 @shengokai Saw your tweet this morning and then remembered it when I saw the abstract for Lydia X. Z. Brown's keynote at WiNLP this weekend: https://t.co/wpoxpuhWH8

2022-07-08 20:57:59 RT @emilymbender: @rowlsmanthorpe But: see second para of the screen cap. Zuck seems to be trying to argue that massive scale and potential…

2022-07-08 20:57:33 RT @emilymbender: Also, "rarely spoken" is a ridiculous thing to say about (spoken) languages. If there's a community of speakers, it's pro…

2022-07-08 20:37:37 @SorenSpicknall More people need to know about @ImagesofAI !

2022-07-08 19:19:44 @CT_Bergstrom Thank you, @CT_Bergstrom -- I appreciate your kind words!

2022-07-08 19:19:24 @Telecordial @CT_Bergstrom Corvids are great! But I do not have @CT_Bergstrom 's skills in photographing them.

2022-07-08 19:02:55 @LeonDerczynski Does that screen cap end with a pointer to Strunk and White? Huge . Also, while first paragraphs can contain grandiose blathering and road-map paragraphs can be boring, both can and should be done well!

2022-07-08 18:48:57 @gchrupala @myrthereuver The connection is that the evidence that people are using to claim sentience involves linguistic output of these systems. So showing the lack of communicative intent obviates this supposed evidence.

2022-07-08 17:56:39 RT @rajiinio: When consulted on policy, technologists bring in proposals that are unrealistic or ineffective as it relates to how law actua…

2022-07-08 17:56:31 RT @random_walker: Let’s stop enabling this behavior. Let’s make it safer and easier for actual experts to correct, challenge, or call out…

2022-07-08 17:56:22 RT @random_walker: So there’s a large, cheering audience for the uninformed cynicism spewing forth on panels, op-eds, and on Twitter. As a…

2022-07-08 15:50:46 RT @callin_bull: Though we quickly expanded the scope, Calling Bullshit began as “Calling Bullshit on Big Data” and focused on misapplicati…

2022-07-08 15:32:17 So, in sum: Hooray for work on more languages (and MT other than to/from English). But this isn't a "superpower" and it isn't going to let @facebook off the hook for its responsibilities regarding misinfo, disinfo and harassment in all the locales in which it operates.

2022-07-08 15:31:19 Also, "rarely spoken" is a ridiculous thing to say about (spoken) languages. If there's a community of speakers, it's probably spoken daily. Also, I checked Ethnologue and they list >

2022-07-08 15:28:31 (More generally, x% improvement could be that there are x% fewer errors or x% more correctness ... since this is BLEU, I think it has to be the latter. Also, since they say "x% higher" in the PR. But all that just goes to show how vague &

2022-07-08 15:26:50 And also: "x% improvement" is always meaningless if we don't know the starting point. From the Meta PR: https://t.co/LMhLvz1Gur

2022-07-08 15:22:01 @rowlsmanthorpe But: see second para of the screen cap. Zuck seems to be trying to argue that massive scale and potential upside will somehow counteract documented downside. #techchauvanism through and through. (Not to mention "superpower" in the headline.)>

2022-07-08 15:20:03 And props to @rowlsmanthorpe for including the key point in the first para of this screen cap as well as key insights from Dr. Birch-Mayne. https://t.co/rAnCgBTkPn

2022-07-08 15:14:29 Gonna give this one a mixed review. Props to Facebook for collecting this dataset and apparently paying for L1 speaker verification of it. https://t.co/J0PfHRkAtx>

2022-07-06 20:05:33 @djg98115 I like to describe the TV show Portlandia as "It's funny in a you-had-to-be-here kind of a way". (That is, lumping Seattle in with Portland on some of those stereotypes fitting...)

2022-07-06 18:58:21 @hangingnoodles @GaryMarcus @WiringTheBrain Honestly, you can create computer systems that "store reversible associations", over a specific domain of applicability. I wouldn't call them "AI", but I also don't think LMs are "AI".

2022-07-06 18:01:30 Why would you name a project/paper after a failed US education policy?

2022-07-06 17:24:44 RT @xkcd: The Universe by Scientific Field https://t.co/eFi5uS8RTo https://t.co/8fGbn2cSzI

2022-07-06 13:59:37 And there is more to the West Coast than CA TYVM. We don't do the "the" thing up here in WA either. https://t.co/wK3hFeuRmw

2022-07-06 13:49:19 RT @naaclmeeting: It's not too late to help with live tweeting for #NAACL2022 ! We're still looking for more volunteers to help tweet about…

2022-07-06 12:26:49 RT @emilymbender: This is exhausting indeed, and I think addressing this thoroughly requires at least:(thread)

2022-07-06 01:15:38 @niloufar_s

2022-07-05 22:54:14 @fusipon It's actually not relevant whether or not humans can do this (we can't). The point is that people should stop claiming that AI can.

2022-07-05 22:34:10 @SColesPorter My characterizing a long and tiring conversation on Twitter as a "debate" isn't the same thing as you hosting an event (and charging admission) to rehash the same content and calling it a "debate".

2022-07-05 22:33:30 @SColesPorter Wow, aren't you clever?

2022-07-05 19:56:54 @SashaMTL @Abebab Looks like they deleted the tweet. Sounds like they're still set on having their pointless/hype-advancing "debate".

2022-07-05 19:25:35 @SColesPorter @TanDuarte @Abebab @SashaMTL @WorldSummitAI You can't go around saying it's the opposite of hype and putting out trashy hype like this: https://t.co/POIzWnKG1p

2022-07-05 18:50:26 @SColesPorter @SashaMTL @Abebab A few people on the panel and yet your advertising only mentions one. You are blatantly just looking to make $$ off of this tired story and perpetuating more AIhype in the process. You are doing a disservice with this.

2022-07-05 18:31:40 @SashaMTL @SColesPorter @Abebab Agree with Sasha, and: a) This conversation has been had. There is no sentience there. b) Your guest is not actually a qualified expert on this, blathering on as he has about the "intelligence" of the chatbot in the Economist and other publications.

2022-07-05 16:51:53 I've found that this gets easier over time, as a function of building up the skills (what questions to ask of what I'm reading, how to craft a tweet thread), the courage (I'm going out on a limb, but it's both worth it and ok), and my network (on Twitter &

2022-07-05 16:49:33 In other words, I was on the fence about whether to speak up on this one, but I'm glad I did, since it seems to have maybe had a beneficial effect. And on the strength of that I want to encourage others to get in the habit of speaking up!>

2022-07-05 16:48:50 So I tried to keep my commentary grounded in what I do know something about: task framing, how to read an article like this critically, evaluation. And I pointed to the kinds of experts who should have been interviewed.>

2022-07-05 16:47:30 I felt a little out of my area of expertise with this one: though I am very concerned about mass incarceration, over policing and police violence, I'm not particularly well-read on these topics. >

2022-07-05 16:46:00 6. Took notes along the way of the things that I thought were particularly fishy. 7. (This was the next morning, I think) summarized those points in a typo-ridden tweet thread.>

2022-07-05 16:45:18 What are the authors actually claiming? How does it relate to the way the claim was framed in the Bloomberg article? What task did the automated system actually do, with what input &

2022-07-05 16:44:23 4. Got access to the Nature article through my university's library. 5. Read it, with an eye towards the following questions:>

2022-07-05 16:43:43 2. Over the weekend, read the Bloomberg article and found it infuriating. 3. Decided I should say something &

2022-07-05 16:43:11 Here's how the latest one I did went: 1. Saw the headline with the bogus claim of "predicting" crime a week before it happens a couple of times. (Including once or twice where someone tagged me, but also separate from that.)>

2022-07-05 16:39:22 @WellsLucasSanto @_KarenHao I definitely don't know enough about the journalism ecosystem, and what pressures people are working under, both freelancers &

2022-07-05 16:31:16 RT @emilymbender: I'll leave this one as a challenge &

2022-07-05 16:31:13 @SashaMTL @Abebab I don't know how to address that one without helping them sell tickets....

2022-07-05 16:30:36 I'll leave this one as a challenge &

2022-07-05 16:28:46 3. As long as 1&

2022-07-05 16:26:49 2'. Also, no more using press releases to evade the peer review part of the scientific conversation: https://t.co/aUhdTd8JIJ>

2022-07-05 16:25:34 2. Researchers (academia &

2022-07-05 16:24:28 1. Journalists holding themselves to the high standards I know are out there for the best journalism. If you want one quick hack to writing better stories about so-called AI, I'd start here: https://t.co/ZFgFuY74AF>

2022-07-05 16:22:47 This is exhausting indeed, and I think addressing this thoroughly requires at least: (thread) https://t.co/FkagCnev7l

2022-07-05 15:31:30 RT @Abebab: it's exhausting and unproductive for us to engage every reporter about incorrect and overblown reporting. all reporters writi…

2022-07-05 14:17:56 @natematias @hypervisible Thank you!

2022-07-04 15:47:06 RT @histoftech: Building fancier and fancier calculators (and yes, that’s what this is) is important, but it’s not the only thing. And it’s…

2022-07-04 15:41:43 The sequences of words are just externally visible artifacts that we use in the extended communicative acts that are a core part of education---and that others can observe as well.

2022-07-04 15:40:44 @NarasMG No, it doesn't. And even if it did, so what? The Turing Test is not a law of nature.

2022-07-04 15:40:06 And just to try to head off some of the response: the point here isn't that a college education is meaningless. It's that what we're doing in education is not a matter of causing students to emit certain sequences of words in certain formats. >

2022-07-04 15:38:39 Like seriously: if an "AI" output a 4-year degree worth of essays and exam answers with enough apparent coherence &

2022-07-04 15:35:24 The tendency of AI researchers to equate the form of an artifact with its meaning is seemingly boundless. A college degree is not comprised of essays and exam papers, even if such elements play a key role in our evaluation of human progress towards one. https://t.co/jjat3xu1sk

2022-07-04 15:22:16 After that, they can fantasize about taking out the Terminator, for funsies.

2022-07-04 15:21:53 How about we set up a system where people can only spend time worrying about "AI systems gone awry" when they have put significant effort into addressing the climate crisis AND into the actual harms perpetuated in the name of "AI"? https://t.co/VCZlUKwNd3

2022-07-04 13:45:42 RT @emilymbender: Don't make me tap the sign (1/2) https://t.co/bRWtFl6KKI

2022-07-04 13:34:22 RT @ruchowdh: Grateful for the (minor) revisions made thanks to @emilymbender s popular thread but baffled at why an intern was given this…

2022-07-04 04:32:10 If I'd known my tweets were going to lead to the Bloomberg article being revised, I guess I might have spell checked better? But seriously: journalists covering this should be talking to people who know something about mass incarceration, over policing, &

2022-07-04 04:30:20 Typo fix #2: this should say police, not policy https://t.co/spYYYWicfh

2022-07-04 04:29:37 Typo fix #1: this should say No, not Not. https://t.co/RtHoObojzF

2022-07-03 19:55:08 RT @emilymbender: Don't make me tap the sign (2/2) https://t.co/yrpslsYSxq

2022-07-03 19:55:01 Don't make me tap the sign (2/2) https://t.co/yrpslsYSxq

2022-07-03 19:54:11 Don't make me tap the sign (1/2) https://t.co/bRWtFl6KKI

2022-07-03 19:46:23 @FloodSmartCity Assigning fault to an algorithm is already a category error. Why say this?

2022-07-03 19:36:08 RT @emilymbender: One other fishy/squirrely thing I noticed about the article. In this paragraph they talk up the evaluation as a "true pro…

2022-07-03 19:36:03 One other fishy/squirrely thing I noticed about the article. In this paragraph they talk up the evaluation as a "true prospective forecasting test" but their use of it was purely retrospective. https://t.co/43QQo75Rmx

2022-07-03 14:31:05 3. What about wage theft, securities fraud, environmental crimes, etc etc? See this "risk zones" map: https://t.co/KwTtQWsFfY

2022-07-03 14:29:42 2. What happens when police are deployed somewhere with the "information" that a crime is about to occur?>

2022-07-03 14:29:03 In summary, whenever someone is trying to sell predictive policing, always ask: 1. Why are we trying to predict this? (Answer seems to be so police can "prevent crime", but why are we looking to policy to prevent crime, rather than targeting underlying inequities?)>

2022-07-03 14:27:37 5. The final section is called "Limitations and conclusion" which is a weird combo and maybe is an attempt to excuse a weird mess of a section that talks out of both sides of its mouth? Note the phrase "powerful predictive tools" here, hyping what they've built: https://t.co/Cue3w7bCw3

2022-07-03 14:22:56 Those "enforcement biases" have to do with sending more resources to respond to violent crime in affluent neighborhoods. They claim that this would allow us to "hold states accountable in ways inconceivable in the past".>

2022-07-03 14:21:28 4. The authors acknowledge some of the ways in which predictive policing has "stirred controversy" but claim to have "demonstrate[d] their unprecedented ability to audit enforcement biases". >

2022-07-03 14:18:42 3. A prediction was counted as "correct" if a crime (by their def) occurred in the (small) area on the day of prediction or one day before or after.>

2022-07-03 14:17:45 Some interesting details from the underlying Nature article: 1. Data was logs maintained by the cities in question (so data "collected" via reports to police/policing activity). 2. The only info for each incident they're using is location, time &

2022-07-03 14:14:27 Not it effing can't. This headline is breathtakingly irresponsible. h/t @hypervisible https://t.co/5z9wqj3sdC

2022-07-02 13:27:52 @raciolinguistic @CarolSOtt Yay!!!! Congrats :)

2022-07-02 03:30:43 RT @timnitGebru: Since reporters are still asking about this and I really don’t want to talk about sentient machines, posting again what @…

2022-07-01 20:46:08 @samsaranc @ZaldivarPhD @benhutchinson For data statements, see: https://t.co/dlOMS4iyye We provide a guide to writing data statements + templates (in a few formats).

2022-07-01 19:00:32 @drtowerstein We already have a grammar formalism (HPSG), though I've also long thought that it would be really interesting if others also wanted to build grammar customization systems with other frameworks/formalisms. I'd love to be able to compare &

2022-07-01 18:38:08 @davidthewid @dabeaz Yeah -- all part of the same thing: ML seeks to "own" all domains and many times this seems to mean claiming "domain experts are no longer needed" which is quickly "domain expertise isn't valuable" etc etc.

2022-07-01 18:32:35 @davidthewid @dabeaz not actually even a CS person...

2022-07-01 17:24:35 @redkatartist "I refuse to debate with people who won't take as given my own humanity."

2022-07-01 16:27:33 I can't imagine a way to create &

2022-07-01 16:26:46 I'm kinda wishing we had running polling of the general public (internationally!) around questions such as "Has sentient AI been created?" Curious if the media attention to bogus claims over the past weeks would have moved that needle. But also >

2022-07-01 14:18:02 In sum, I'm really excited about what we can do using computational methods to encode and combine precise linguistic knowledge --- in ways that it can be built on further!

2022-07-01 14:17:49 This paper comes out of Kristen Howell's PhD dissertation, in which she synthesized previous AGGREGATION work into an end-to-end pipeline, added inference for many phenomena and did the first comprehensive multilingual evaluation of the system.>

2022-07-01 14:08:48 Current MS projects include inference for adnominal possession and valence changing morphology, each of which in turn were libraries added to the Grammar Matrix as MS projects (Nielsen and @curtosys)! >

2022-07-01 14:06:44 Clearly, the grammars aren't perfect! There is work to be done both in reducing noise in the grammar inference process and in adding phenomena to both the Grammar Matrix customization system and the inference system that produces grammar specifications.>

2022-07-01 14:05:38 Evaluation in terms of not just coverage, but also lexical coverage (how many sentences were made up of words the grammar could handle), ambiguity, and correctness of semantic representations.>

2022-07-01 14:02:30 Languages map! Red dots are our development languages, blue are other languages consulted, and green are the five held-out test languages.>

2022-07-01 13:59:14 We test the system on held-out languages, in each case creating a grammar specification to put into the Grammar Matrix customization system from 90% of our data and then testing that grammar on a held-out 10% of the data (10-fold cross-validation).>

2022-07-01 13:52:41 Which takes as input corpora of IGT like (first pic), and produces grammars which produce syntactic &

2022-07-01 13:47:03 The result is a system like this:>

2022-07-01 13:44:03 Which in turn builds on the English Resource Grammar (under continuous development since 1993), and software, formalism, and theoretical work from the DELPH-IN Consortium: https://t.co/BpCjy3qa9S>

2022-07-01 13:43:01 As well as building on the Grammar Matrix (under development since 2001!): https://t.co/GdYRbmEciO>

2022-07-01 13:41:08 This is the latest update from the AGGREGATION project (underway since ~2012), and builds on much previous work, by @OlgaZamaraeva, Goodman, @fxia8, @ryageo, Crowgey, Wax and others! https://t.co/qKkpaNfWpn>

2022-06-28 16:27:43 @athundt @hereandnow Thank you!

2022-06-28 12:29:59 RT @emilymbender: My favorite moment of this was at the very end, when I caught and refuted an instance of the AI inevitability narrative i…

2022-06-28 12:25:56 RT @mireillemoret: Keynotes by a.o. @emilymbender and @FrankPasquale, Track Chairs ‘Legal Search and Prediction’ @Sofiade and @HarrySurden,…

2022-06-27 21:20:53 @MadamePratolung I thought it would be good, since the tweet started gaining some traction and I used " " in one instance to indicate a direct quote and in the other something else.

2022-06-27 18:34:07 Typo: "less" should be "lens", of course.

2022-06-27 18:33:53 @JoeReddington Yes, lens is what I meant.

2022-06-27 18:16:05 @NannaInie Working towards "passing" the Turing Test isn't even working towards sentience or having opinions or...

2022-06-27 18:03:11 If you think about the Turing Test through the less of today's shared tasks, it looks particularly odd. We use shared tasks to drive research towards particular goals. But to what end would we want machines that can fool people? https://t.co/gxDsIzKrBH

2022-06-27 17:53:20 My favorite moment of this was at the very end, when I caught and refuted an instance of the AI inevitability narrative in real time. https://t.co/ookK91W78m via @hereandnow

2022-06-27 14:30:22 @CatalinaGoanta @Meta @rcalo It seems like here Meta has leaned into the potential for misunderstanding to (misleadingly) reassure their users, I'd say.

2022-06-27 14:20:07 @CatalinaGoanta @Meta @rcalo Interesting that the CA law you excerpt there includes "make available" as a kind of "sell". I still wonder though: is the targeted advertising use case (where 3rd parties don't see specific accts, but can choose classes of them &

2022-06-27 14:17:57 @CatalinaGoanta @Meta @rcalo I can definitely see some value in making these policies readable, but if in doing so the writers are using ordinary words in their technical meaning but not flagging that ... worrying indeed.>

2022-06-27 13:42:05 https://t.co/HzopJyRe79

2022-06-27 13:41:54 To be very clear the first quote in my tweet is a summary, not a direct quote. https://t.co/FbmSBPPXyo

2022-06-27 13:06:44 @complingy @Meta So, special sense of both, then. Information = PII (precisely), and sell = transmit and make money from, not just make money from.

2022-06-27 13:00:31 New @Meta privacy policy just dropped. "We sell the ability to target advertising based on information we gather about you", but somehow that's consistent with "We do not sell and will not sell your information". Specific sense of "sell" or "information" or both? https://t.co/XklDK59z8U

2022-06-27 12:34:57 @j2bryson @timnitGebru @mmitchell_ai You said yourself that once the algorithm behind ELIZA was clear, the illusion was broken. What takes people in is that ELIZA (and LaMDA) are using English, not that they're using it algorithmically.

2022-06-27 12:33:57 @j2bryson @timnitGebru @mmitchell_ai Far more relevant (it seems to me) is the way in which those developing so-called "AI" have leaned into producing systems which mimic the means we (highly social animals) use to communicate with each other.>

2022-06-27 12:32:00 @j2bryson @timnitGebru @mmitchell_ai As for why "AI" (ahem, text synthesizing machines) can seem familiar, I disagree that it's because "humans have algorithmic components to our behavior". >

2022-06-27 12:30:29 @j2bryson @timnitGebru @mmitchell_ai I think the valuable points in your essay have to do with values &

2022-06-24 19:28:35 @annargrs @lrec2022 @FAccTConference Oh, what a bummer. Get well soon!

2022-06-23 20:43:09 @itsafronomics The terminology is confusing, and varies by institution. For us, tenure track faculty can have adjunct appointments in other departments (and our precarious faculty are usually called 'affiliates', not 'adjuncts').

2022-06-23 19:19:17 Pointer to the original --- which *doesn't* have that confusing vertical line, either. https://t.co/dLtAGKpK28

2022-06-23 19:16:32 @LeonDerczynski @marialuisapaulr @amazon Many similar stories from @HumanAlexas too.

2022-06-23 19:14:15 And what's the point of the vertical line at ~2000 ... is that when the graphic was made?

2022-06-23 19:08:35 @coolharsh55 Uh, Star Trek?

2022-06-23 18:58:53 Okay, I keep seeing this graphic today and I'm really puzzled by the green bars. What is the length of the bars supposed to indicate? Their colors? It's not time between film release &

2022-06-23 18:43:56 RT @rajiinio: @ziebrah @Aaron_Horowitz @aselbst @schock @AJLUnited @jovialjoy @s010n @RosieCampbell @AIESConf All these projects are part o…

2022-06-23 18:03:00 RT @timnitGebru: "Google and OpenAI have long described themselves as committed to the safe development of AI...But that hasn’t stopped the…

2022-06-23 16:39:26 RT @Abebab: The fallacy of AI functionality, @rajiinio &

2022-06-23 16:35:46 @davidberreby @marialuisapaulr @amazon I agree that there is a big difference. I'm just saying that it does no good for us to buy into (and then also promote) Amazon's fiction that it's us and our own personal Alexas.

2022-06-23 16:23:38 @davidberreby @marialuisapaulr @amazon Actually, I think you've mislocated the agency there. The agency lies with Amazon, not Alexa. (And dogs have evolved, I believe, to take advantage of human empathy, too, but that's a separate story and dog/human relationships are between specific dogs &

2022-06-23 16:06:14 RT @histoftech: If a system is always going to give us a guess at an answer, no matter what—if that is what its operating parameters insist…

2022-06-23 16:06:06 RT @histoftech: The option to say “I don’t know” isn’t programmed in. This is a key design flaw. It means the system is destined to complet…

2022-06-23 15:54:57 RT @hypervisible: People asking me "how" when I said predictions are a form of violence...If it was a legit question, here's some reading…

2022-06-23 15:14:46 RT @mmitchell_ai: Appreciate this piece. Covers a lot of ground wrt what's happened in AI over past few weeks &

2022-06-23 15:10:53 We talk about this issue some in this Factually! podcast episode, too. https://t.co/SqTLTISeD2

2022-06-23 15:09:13 A much better way to draw on science fiction in technology development is @cfiesler 's Black Mirror Writer's Room exercise: https://t.co/e89XH1sshA

2022-06-23 15:08:09 @marialuisapaulr @amazon And while we're here, more evidence of how Silicon Valley has entirely missed the point of speculative fiction. https://t.co/JJ66zOPAAp

2022-06-23 15:05:22 Thanks to @marialuisapaulr for this clear-eyed reporting. And shame on @amazon for having the goal of getting people to "trust AI". Talk about the quiet part out loud: Big tech is all about preying on human empathy. https://t.co/c72L5QEzoW https://t.co/rcDTs7G8EY

2022-06-23 14:50:34 @HickokMerve @ImagesofAI I talk it up in this podcast episode too :) https://t.co/TUVXPTN50H

2022-06-23 14:45:33 Hooray another story illustrated with @ImagesofAI ! https://t.co/G3woQSw13v

2022-06-21 21:51:49 @joavanschoren @LeonDerczynski @NeurIPSConf @SerenaYeung Thank you.

2022-06-21 21:46:21 @joavanschoren @LeonDerczynski @NeurIPSConf @SerenaYeung Thank you. I'd really love to see a durable solution to this. This isn't the first time these papers were inaccessible.

2022-06-21 21:17:40 @LeonDerczynski @NeurIPSConf Don't seem to be getting any reaction. Maybe the account isn't monitored? It looks like @joavanschoren and @SerenaYeung were the NeurIPS 2021 Datasets &

2022-06-21 18:04:28 @M_Star_Online Hey, I tried to find contact info for the journalists behind this and failed, so posting here. I'm one of the co-authors on the Stochastic Parrots paper and you have misspelled my name in this article. Please correct.

2022-06-21 15:18:15 @dallascard @FAccTConference @ria_kalluri @willie_agnew @Abebab @DotanRavit @TheMichelleBao Congrats

2022-06-20 23:19:16 This is so richly deserved!! Congrats all :) https://t.co/6eHME4iDo9

2022-06-20 22:40:51 @NeurIPSConf I've never understood why the datasets &

2022-06-20 22:36:14 Oh, and meanwhile, the *main* @NeurIPSConf proceedings link still works, so yet again, it's just the datasets &

2022-06-20 22:28:01 RT @emilymbender: @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips You'd think a "top conference"…

2022-06-20 22:27:55 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips You'd think a "top conference" would actually care about making its proceedings accessible, rather than leaving people to link to arXiv where the peer reviewed papers aren't differentiated from random flag planting, etc.

2022-06-20 22:27:09 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna @neurips I actually don't know if OpenReview shows the final version or not. This isn't the first time that I've had trouble accessing the actual peer-reviewed version of our paper. It's really embarrassing, actually.

2022-06-20 22:26:23 Yo @neurips why is your proceedings for the datasets &

2022-06-20 22:25:19 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna Yes, that's the one. I have no idea why @neurips proceedings links are so damn unstable. I'll see if I can dig up wherever they've actually currently put the paper.

2022-06-20 20:50:02 @timnitGebru @RoxanaDaneshjou @rajiinio @amandalynneP @cephaloponderer @alexhanna Peer reviewed version: https://t.co/kR4ZA1Bawz

2022-06-20 16:41:12 And even if it weren't, a decrease in funding is far less problematic than the other effects of AI hype.

2022-06-20 16:40:45 The more serious point, though, is that another "AI winter" is not actually something to be concerned about. The field (not to mention those adjacent to it) is presently suffering from over-funding...>

2022-06-20 16:40:00 The linguist in me is satisfied --- the meaning that I had (as an ordinary speaker) got the most votes, but the other usages that I've observed are also at non-zero in this unscientific Twitter poll.>

2022-06-20 16:32:11 @aylin_cim So sorry to hear it. Get well soon!

2022-06-20 15:24:09 @AdamCSchembri I haven't encountered it in that usage, no. (But I mostly encounter it in discussions of bilingual ed in schools, not in say syntax.)

2022-06-20 15:16:22 @AdamCSchembri The way I hear "heritage speaker/signer" is often to refer to people who aren't fluent in their community's language for any number of reasons, but still have a "heritage" relationship to that language.

2022-06-20 13:15:57 RT @emilymbender: I have a theory that what's going on with the current explosion in credulousness is that the scale has outstripped our ab…

2022-06-18 23:57:46 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata of the cloud need not concern themselves with how they are maintained, monitored, powered, cooled, and so forth

2022-06-18 23:57:39 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata Pull quote: "One important metaphor here is that of “cloud computing,” which conjures up images of something light and insubstantial, floating up in the sky. This metaphor highlights that the servers and their supporting infrastructure are located someplace else, and that users>

2022-06-18 23:57:09 @MadamePratolung @strubell @STS_News @AJLUnited @oikeios @AmooreLouise @timnitGebru @merbroussard @mer__edith @feraldata From Alan, Batya Friedman, &

2022-06-18 23:55:43 @Abebab . Here's a paper I really like elaborating that point, from @AlexBaria and @doctabarz https://t.co/krTW120Y7D

2022-06-18 22:57:49 @AngloPeranakan Thanks :)

2022-06-18 13:30:22 RT @emilymbender: "AI winter" means (poll), regarding AI research

2022-06-18 03:11:30 (I definitely have an opinion here, but I've seen surprising uses of the term recently...)

2022-06-18 03:11:07 "AI winter" means (poll), regarding AI research

2022-06-18 03:09:18 RT @bonadossou: Next Tuesday, 12:30 pm UTC+2, I’ll be on RFI @RFI’s show “Des Vives Voix” to talk about language technologies on African la…

2022-06-18 02:44:26 RT @histoftech: For an important, wide-ranging, and complex example of this, I’d love for you to read this chapter on accent bias and whose…

2022-06-18 02:43:44 RT @histoftech: We wrote the book to help make sense of what can &

2022-06-18 01:41:02 @mmitchell_ai Wait, weren't you on Bill Nye's show too as a scientist?

2022-06-18 01:40:43 RT @mmitchell_ai: I was on Bloomberg TV today! My first time on TV in my role as a scientist. https://t.co/VQR79H1s9g

2022-06-17 21:00:15 RT @histoftech: This morning Siri said: “don’t call me Shirley.”I guess I must have left my phone on airplane mode. https://t.co/TLs5DWew

2022-06-17 20:59:55 RT @mer__edith: PSA: I uploaded my *Steep Cost of Capture* piece to SSRN since ACM paywalled it. So if you're looking for it, use this li…

2022-06-17 17:48:06 RT @elizejackson: I’m sounding an alarm here. Please pay attention. Microsoft is partnering with the World Bank to extract data (and labor)…

2022-06-17 17:17:48 RT @histoftech: The idea of tech-as-savior to downtrodden workers doing “undesirable” or “easily automated” jobs has a long and problematic…

2022-06-17 17:17:00 RT @histoftech: As part of the effort to combat the narrowness of context of this issue &

2022-06-17 17:16:57 RT @histoftech: A lot of folks have written great threads on why the latest “AI sentience” debate is not just (or even *mainly*) about what…

2022-06-17 15:54:21 RT @mmitchell_ai: New piece from @timnitGebru and me, with editing help from @katzish Thanks to WaPo for the opportunity! https://t.co/0p

2022-06-17 15:28:55 RT @alxndrt: Such thoughtful commentary on AI and sentience coming from @emilymbender, @timnitGebru and @mmitchell_ai in numerous forums. H…

2022-06-17 13:55:44 @fernandaedi :'(

2022-06-17 13:27:40 RT @dmonett: "What's worse, leaders in so-called AI are fueling the public's propensity to see intelligence in current systems, touting tha…

2022-06-17 13:09:25 "Scientists and engineers should focus on building models that meet people’s needs for different tasks, and that can be evaluated on that basis, rather than claiming they’re creating über consciousness." @timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 13:08:54 @timnitGebru @mmitchell_ai "And ascribing “sentience” to a product implies that any wrongdoing is the work of an independent being, rather than the company — made up of real people and their decisions, and subject to regulation — that created it." @timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 13:08:27 @timnitGebru @mmitchell_ai "The drive toward this end sweeps aside the many potential unaddressed harms of LLM systems.">

2022-06-17 13:07:48 "The race toward deploying larger and larger models without sufficient guardrails, regulation, understanding of how they work, or documentation of the training data has further accelerated across tech companies." @timnitGebru and @mmitchell_ai https://t.co/1PgSySALoz

2022-06-17 12:42:59 RT @emilymbender: Yes, there are sides, but they are not even in credibility, and most importantly, there are the people at the center of t…

2022-06-17 12:42:56 RT @emilymbender: This is amazing reporting on an important issue, and also a great example of how to acknowledge the existence of disagree…

2022-06-17 12:42:47 RT @emilymbender: The linked article is super important. Kudos to @themarkup for their careful documentation of the ways in which Meta is h…

2022-06-17 12:29:48 RT @KarnFort1: Le séminaire que j'ai fait à l'IXXI vendredi dernier sur l'éthique et le TAL est maintenant dispo en ligne en vidéo : https:…

2022-06-17 12:18:49 RT @Kobotic: Please stop talking about socially responsible algorithms or AI or tech.It's like talking about socially responsible cars, o…

2022-06-16 21:23:27 @WellsLucasSanto My main gripe with reddit is that the upvote/downvote mechanism seems to move things around/break the threading! Also, not sure if I can manage yet another platform to keep track of.

2022-06-16 21:18:44 @WellsLucasSanto Sounds like it would be a nice group of people, but I don't do Reddit ... and when I've had to look there, I find the interface entirely overwhelming.

2022-06-16 19:51:28 @themarkup As if "the problem" weren't one that they had created in the first place by trying to grab all this data!Also important is @themarkup 's reporting on how trusting (some) hospitals appear to be in Meta's tech.

2022-06-16 19:50:09 The linked article is super important. Kudos to @themarkup for their careful documentation of the ways in which Meta is harvesting sensitive information---and then saying effectively "if we don't scrub it fully, that's because the problem is hard.">

2022-06-16 19:48:57 What AI labs say: AI is an inevitable future, we have to build it and big data makes systems that are intelligent! (Some might even say sentient) What it means: Manifest destiny over all data, no matter how private. https://t.co/mNbSIavNwB

2022-06-16 15:12:38 RT @Abebab: unless you are aware of the pile of shit that personality theories in psychology are (and able to incorporate critical work), d…

2022-06-16 14:26:48 @GRACEethicsAI But as a subject line, it's even worse, because it means it takes more time to work out (as I'm going through my swamped inbox) what that message is even about...

2022-06-16 14:26:28 @GRACEethicsAI "Hey Emily" as a greeting would have been a bit annoying, sounding like I should be ready to give them my attention when they stopped by and said "Hey">

2022-06-16 14:25:55 @GRACEethicsAI I usually look at those to see if they're a good match, and if so, put them in our jobs DB. >

2022-06-16 14:24:58 @GRACEethicsAI For me, it's more about disrespecting my time &

2022-06-16 14:22:10 Yes, there are sides, but they are not even in credibility, and most importantly, there are the people at the center of the story, whose lives &

2022-06-16 14:17:27 This is amazing reporting on an important issue, and also a great example of how to acknowledge the existence of disagreement without devolving to "both-sides" faux-neutrality.>

2022-06-16 13:59:18 RT @transscribe: I’ve only had the chance to do a big splashy feature on trans kids once and I told my editor that I wanted to change the w…

2022-06-16 13:37:35 RT @UpolEhsan: When an algorithm causes harm, is discontinuing it enough to address its harms?Our #FAccT2022 paper introduces the conce…

2022-06-16 13:10:47 RT @emilymbender: I really appreciate this reporting from @daveyalba which does an excellent job of NOT relegating the voices she quotes to…

2022-06-16 13:10:24 RT @emilymbender: Again I'm starting to see comments in support of LMs learning meaning invoking the lived experiences of Blind people, fro…

2022-06-16 13:10:17 RT @emilymbender: "We have to go back to basics and ask what problem it is that we are trying to solve, and how and whether technology, or…

2022-06-16 13:10:04 RT @sophiebushwick: "When we encounter seemingly coherent text coming from a machine ... we reflexively imagine that a mind produced the wo…

2022-06-16 04:43:33 @GRACEethicsAI It's cringe as a greeting in an email (from someone I've never met but had previously corresponded with), but this was worse: it was the subject line.

2022-06-16 03:07:03 "We have to go back to basics and ask what problem it is that we are trying to solve, and how and whether technology, or AI, is the best solution, in consultation with those who are going to be affected by any proposed solutions." Wise words from @kobotic and @AnjaKasp https://t.co/55SLykw5Wo

2022-06-15 19:21:17 @FelixHill84 @AndrewLampinen @dileeplearning @peabody124 @spiantado @GaryMarcus Insufficient -- that's how people use language. What is your evidence about what the machine is doing?

2022-06-15 16:46:30 @AndrewLampinen @dileeplearning @peabody124 @spiantado @GaryMarcus @FelixHill84 On what grounds do you call that prediction of outcomes of events, rather than production of likely string sequences?

2022-06-15 14:09:53 I'm generally happy to be addressed by my first name by anyone who knows me, but somehow getting an email with the *subject line* "Hey Emily" just rubs me the wrong way. Straight to the archive for that one.

2022-06-15 13:53:13 @BDehbozorgi83 Apparently neither persisted on the web --- just aired live and that was it. (Odd contrast to how radio works, in my experience.)

2022-06-15 13:21:02 RT @emilymbender: And the bad applications of so-called "AI" continue. Apparently AI21 labs claims this will help people understand the lim…

2022-06-15 12:53:11 RT @emilymbender: I think the main lesson from this week's AI sentience debate is the urgent need for transparency around so-called AI syst…

2022-06-15 04:24:22 @spiantado "From just language" is the point of confusion here. Languages are systems of signs--pairings of form and meaning--my claim has to do with the case (like with LMs) where you have only the form.

2022-06-15 04:23:22 @spiantado Once you have a language, which relates word forms to concepts, OF COURSE you can describe (or learn) concepts in terms of other concepts. The thing is that LMs have no way to get to that starting point.

2022-06-15 04:13:55 @spiantado But you're still talking about concepts --- not just strings.

2022-06-15 03:57:40 @XandaSchofield Of course!

2022-06-15 03:53:08 @XandaSchofield I love it!!! (If I ever recover the maybe-in-a-talk context that I thought I wanted this image, can I use this, with credit, of course?)

2022-06-15 03:38:54 RT @timnitGebru: Thank you @kharijohnson. "Giada Pistilli (@GiadaPistilli), an ethicist at Hugging Face,...“This narrative provokes fear,…

2022-06-15 02:44:53 @spiantado How do you measure "right relations" and what do you mean "conceptual roles"?

2022-06-15 02:42:11 @spiantado We can talk about people we've never met/learn about places we've never been because **we have acquired language** and we do that in socially rich caregiving environments, w joint attention, intersubjectivity, &

2022-06-15 02:40:06 @spiantado This sounds like an argument from ignorance --- what is and isn't hard to imagine isn't relevant.https://t.co/kX68QzZ5ee>

2022-06-15 02:39:26 @spiantado And then here: f=ma is not a relationship between the letter f and the letters m and a, but between the *concepts* that those stand for.https://t.co/QlGNdaCW9v>

2022-06-15 02:38:54 @spiantado Okay, this is where it seems to go off the rails. You've jumped from word forms (what the LM gets to see) to concepts. How does the LM have access to concepts?>

2022-06-15 02:34:16 Ugh, typo: sophomores'

2022-06-15 02:15:56 My twitter mentions the last few days have been like having the dorm room next to the student lounge and being stuck overhearing sophomore's midnight conversations in perpetuity.

2022-06-15 02:15:10 @ankesh_anand Extracting specific strings from a page and giving that URL is one thing, but generating strings and then generating a URL to go with them != citing the source of information!More detail here:https://t.co/rkDjc4BGzj

2022-06-15 00:07:24 @XandaSchofield I will look forward to seeing the result, however it turns out!

2022-06-15 00:05:41 @XandaSchofield This might be too close to "work" for your purposes, so feel free to ignore. (And I'll try to ignore the slight feeling of unease that this has to do with one of my up-coming talks, but I can't remember which nor how it really fits in. Maybe the thought was from a dream...)

2022-06-15 00:04:53 @XandaSchofield I forget the exact context, but a while back I was musing that it would be helpful to have an image of "person looking at a robot with a mirror for a face and seeing themself reflected there".

2022-06-14 21:20:47 @JOSourcing @CryptoPpl Source: https://t.co/tamYEbXo4T

2022-06-14 21:15:29 RT @chrismoranuk: This from @emilymbender is very, very good indeed. Not just on AI and the perception of sentience, but also on language m…

2022-06-14 20:22:06 @rachelmetz @SashaMTL I believe folks are working on gender neutral/gender inclusive language for Spanish, French, etc.

2022-06-14 00:37:59 @korteqwerty The cats stayed off screen this time

2022-06-14 00:08:41 Getting ready for a live interview on Al Jazeera at 5:15 Pacific

2022-06-13 23:49:31 RT @timnitGebru: How can we let it be known far &

2022-06-13 23:18:44 @mmitchell_ai Couldn't find anything in the arxiv paper either, beyond a tiny paragraph in the appendix. As if you could document 1.5 trillion words of training data in one paragraph.

2022-06-13 23:14:40 RT @timnitGebru: Toni Morrison's quote is so relevant in this whole sentience thing. Leaders of companies, too privileged to think about cu…

2022-06-13 21:22:14 RT @emilymbender: Herein lies the source of the problem. "This is all a distraction from the actual, urgent problems" isn't chiding, it's t…

2022-06-13 19:46:19 RT @evanmiltenburg: #NLProc news: corpora list moved to: https://t.co/mEJjXgxR4e

2022-06-13 19:21:31 @ianbicking Thanks. I definitely needed a little more mansplaining after last weekend.

2022-06-13 17:31:51 @ruthstarkman @TaliaRinger So, it aired, but then didn't make their web page. DM me for further details...

2022-06-13 15:57:32 @MSRodekirchen @thesiswhisperer This might be of interest: https://t.co/0Xc7WVeswa

2022-06-13 15:36:22 RT @BDehbozorgi83: The classic paper by Prof. Emily Bender @emilymbender et al. on "The Dangers of Stochastic Parrots", along with a fanta…

2022-06-13 14:53:15 RT @EveForster: @eunux_ @emilymbender Some of our most successful recent tech advances haven't been the tech itself, but how the tech enabl…

2022-06-13 13:52:28 @itsafronomics @timnitGebru Email is better. I'm snowed under today and don't keep track of Twitter DMs.

2022-06-13 13:50:32 RT @random_walker: It's convenient for Google to let go of Lemoine because it helps them frame the narrative as a bad-apples problem, when…

2022-06-13 13:50:28 RT @random_walker: But at the very least maybe we can stop letting company spokespeople write rapturous thinkpieces in the Economist?

2022-06-13 13:48:14 @itsafronomics @timnitGebru What are your questions?

2022-06-13 13:38:39 Does anyone know if there is any data documentation for LaMDA (e.g. datasheet)?

2022-06-13 13:09:39 @LeonDerczynski Alas, it didn't stay confined to the weekend...

2022-06-13 13:04:54 @LeonDerczynski I can't even with his "(d) unfun".

2022-06-13 13:00:33 Herein lies the source of the problem. "This is all a distraction from the actual, urgent problems" isn't chiding, it's the voice of those suffering the impacts of those problems. Also, the point of those 70 years of SciFi wasn't "hey, cool gadgets!" https://t.co/ijavdCx15v

2022-06-12 16:15:38 @asayeed @Ozan__Caglayan What if all funding for AI was funneled through/controlled by fields outside CS?

2022-06-12 16:09:54 @mattecapu @CGraziul @kristintynski @nitashatiku @ilyasut @karpathy @gwern https://t.co/z1F7fESFOn

2022-06-12 16:05:58 @Ozan__Caglayan Oh absolutely. And they are doing really important work tracking those incidents. I'm thinking it would also be helpful to visualize where the hype is coming from and how it changes over time, as a means of disrupting the hype itself.

2022-06-12 16:04:05 @Ozan__Caglayan They are great! But I'm thinking of tracking a type of incident which I think falls outside their remit. (Would be happy to be wrong, though.)

2022-06-12 16:03:21 I don't have the bandwidth nor the design chops to be the one driving this, but I'd happily consult + help to gather past &

2022-06-12 16:01:43 We'd of course want links to the initial incident. Tweet? Press release? Media interview? Blog post?>

2022-06-12 16:00:32 I'm thinking pins that appear on a map over time, but the map isn't physical geography but rather a representation of corporations, universities, governments, and the weird non-/capped-profit "AI" research labs also in this space.>

2022-06-12 15:59:09 I'm thinking it would be interesting to see for each incident, what was claimed, who was doing the claiming, where their $$ comes from, &

2022-06-12 15:57:34 I wonder if we could put together an AI hype tracker or AI hype incident database + visualization, that could help expose the corporate &

2022-06-12 14:20:37 RT @emilymbender: ... with or without debiasing ...

2022-06-12 14:20:07 RT @emilymbender: @nitashatiku As I am quoted in the piece: “We now have machines that can mindlessly generate words, but we haven’t learne…

2022-06-12 14:15:53 @iwsm Yeah, I've been getting that all along too. My rule in such cases: I do not debate people who refuse to accept as a premise my own humanity.

2022-06-12 14:10:55 The "AI sentience" discourse on Twitter this weekend is sharply underscoring the urgency of this. What systems are being set up by people under the spell of magical thinking about AI and how much more oppression will they entrench? https://t.co/LW3DMY5rR9

2022-06-12 13:44:32 RT @emilymbender: 2022 me would like to warn 2019 me that ML bros' arguments about language models "understanding" language were going to m…

2022-06-12 04:45:42 And when do we get to widespread recognition that anyone who can't tell the difference between string manipulation and "internal monologue" isn't qualified to do any decision making or advising about the development, application or regulation of computer systems?

2022-06-12 04:44:05 Not sure what 2019 me could/would have done differently, but 2019 me would definitely have been surprised. What will 2025 bring??

2022-06-12 04:43:38 @kristintynski @nitashatiku @ilyasut @karpathy @gwern You may call them "top minds" but anyone who can't tell the difference between string manipulation and "internal monologue" really isn't qualified to comment.

2022-06-12 04:42:03 2022 me would like to warn 2019 me that ML bros' arguments about language models "understanding" language were going to mutate into arguments about them being "sentient" and having "internal monologues" etc.

2022-06-12 04:39:55 @kristintynski @nitashatiku And "work" is too generous there. That's a list of papers, about text manipulation tasks. Putting them under a heading about "internal monologue" doesn't make it so.

2022-06-12 04:36:52 @kristintynski @nitashatiku Should I extend the benefit of the doubt that maybe you don't realize that you're pointing to the work of a literal eugenicist?

2022-06-11 13:23:07 @_florianmai @mmitchell_ai So to say that both coinings are somehow the same feels like a false equivalence (or, if you were doing journalism, both-sides-ism).

2022-06-11 13:22:13 @_florianmai @mmitchell_ai For "foundation models" the intent seems to be to name a category that includes multiple things and position that category as a) worthy of study and b) at the foundation of many things.>

2022-06-11 13:21:08 @_florianmai @mmitchell_ai I like the phrase "new terminology" because it helps me to think of what people are doing when coining phrases. With "stochastic parrots" our intention was to shine a light on a certain kind of AI hype, by using a colorful and evocative phrase to show what LLMs are not.>

2022-06-11 13:15:32 @balazskegl No: a fully debiased model doesn't exist, nor does a fully debiased dataset. It's worth it to reduce the most egregious bias AND understand what is left, when evaluating whether to use something in a specific use case (and what kind of avenues of effective recourse to set up).

2022-06-11 13:14:10 @realn2s @bert_hu_bert My guess is that they are getting people to pay for publication, but they might be going after some people to "seed" the conference with credibility (and not charging them).

2022-06-11 12:57:24 RT @emilymbender: I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or withou…

2022-06-11 12:57:06 RT @emilymbender: I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E etc as "a step towards AGI" or "r…

2022-06-11 05:37:20 ... with or without debiasing ... https://t.co/kL5DOvXBRR

2022-06-11 05:33:06 Another: International Conference on Economics or Management inviting me to self-nominate as a keynote speaker next month: https://t.co/6lxpc95anh

2022-06-11 04:20:38 @Miles_Brundage And I rather suspect you don't have the right people around the table because these orgs that claim to be "making AI safe for all of humanity" are just as disastrously unrepresentative as the rest of Silicon Valley. Again: https://t.co/O0U6nCygDL

2022-06-11 04:19:34 @Miles_Brundage And I'm saying that it's weak sauce because you didn't start from the state of the art in the field to develop it. And if the people around the table weren't already familiar with this work, then you didn't have the right people around the table.>

2022-06-11 04:12:32 @Miles_Brundage None of that is new! If you had actually built on the work that people have been doing in this field from the beginning, you could have had the better version already.

2022-06-11 04:07:19 @Miles_Brundage That's what jumps out at me now, I'm sure there's more. And yeah, just because many people aren't meeting a bar that's low enough to trip on doesn't mean the guidelines are something to be proud of.

2022-06-11 04:06:28 @Miles_Brundage 6. Any indication of the process by which these guidelines were arrived at, who had input, who framed the discussions etc. >

2022-06-11 04:05:34 @Miles_Brundage 5. Discussion of the "bright line" of computers imitating humans, and how to ensure transparency (so that people know which text is produced by a machine).>

2022-06-11 04:04:43 @Miles_Brundage 3. Any notion that the answer might be DO NOT DEPLOY. 4. Any consideration of task/tech fit and determining in what contexts automation is actually appropriate.>

2022-06-11 04:03:03 @Miles_Brundage 1. Documentation of source datasets and trained models (datasheets, model cards, data statements etc). 2. Systems of recourse and refusal for both data subjects and people who might have systems used on them in some way.>

2022-06-11 04:01:30 @Miles_Brundage I'm not going to rewrite your best practices here on Twitter on a Friday night, but just as a start, here are some things that I see are missing:

2022-06-11 03:50:31 I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or without calling them AGI, with or without ethics washing, with or without claiming "for social good".

2022-06-11 03:48:41 Someone who was genuinely interested in using their $$ to protect against harms done in the name of AI would be funding orgs like @DAIRInstitute @C2i2_UCLA and @ruha9 's #IdaLab. Theirs is the work that brings us closer to justice and tech that benefits society.

2022-06-11 03:42:09 Without actually being in conversation (or better, if you could build those connections, in community) with the voices you said "we should represent" but then ignore/erase/avoid, you can't possibly see the things that the "gee-whiz look AGI!" discourse is distracting from.

2022-06-11 03:39:55 And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field. https://t.co/o5vqdzocvv>

2022-06-11 03:37:36 I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E etc as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".>

2022-06-11 03:34:09 @JoanBajorek @psb_dc @ChrisGGarrod @pierrepinna @techreview My full comment was: "I applaud the transparency here, not just in releasing the model but also the information about training compute cost and the like. I would hope that the transparency extends to very thorough documentation of the source datasets, as well."

2022-06-10 22:03:46 @ZeerakTalat @Abebab Oh, that looks amazing! Too bad 12 BST is inaccessible from where I am. Do you know if it will be recorded?

2022-06-10 21:57:53 @jessgrieser @bronwyn Jessi wins the internet this week.

2022-06-10 21:39:02 NLP isn't a "research direction", it's a research area/set of questions, etc. Nor should we let industry labs control the agenda of any research area. Required reading on corporate capture: https://t.co/K2MJQyUuIC https://t.co/RSaAthHKqq

2022-06-10 20:23:25 @_florianmai @mmitchell_ai We coined "stochastic parrots" for a paper, which became famous largely because Google decided to blow up their AI ethics team over it. We didn't name a center using the term, nor propose it as a research area.

2022-06-10 20:08:58 @athundt @sjmielke @naaclmeeting Yeah, I wonder if there's a way to embed alt-text in the slides? That could capture both text &

2022-06-10 19:59:51 @sjmielke @naaclmeeting Thinking about accessibility -- will you have a version of your slides to distribute that work for screen readers?

2022-06-10 19:51:28 Thinking about how to train more of the public to recognize &

2022-06-10 18:57:04 @tdietterich @rajiinio @timnitGebru And even if (i) is part of it, that doesn't mean my hypothesis about the toxically competitive culture in AI research is false.

2022-06-10 18:56:15 @tdietterich @rajiinio @timnitGebru Where by "human-like" I mean "supposedly human-like", of course.

2022-06-10 18:55:52 @tdietterich @rajiinio @timnitGebru I can see (i) being a factor, but (ii) only makes sense if you think that the human-like systems are actually being evaluated in any reasonable way. Which: 100% they are not. Again: https://t.co/kR4ZA1Bawz

2022-06-10 18:46:47 @rajiinio @timnitGebru You're right that there are so many sensible, helpful, verifiable, etc use cases that are getting ignored. My guess is that it stems from the really unhealthy culture of competition in the field of AI.

2022-06-10 18:10:49 RT @holdspacefree: If you're book-marking papers to read about big language models, their claims of grandiosity, and the dangers that poses…

2022-06-10 17:58:45 @nsaphra Meaning: I respect your talents as a comedian and agree that this isn't funny....

2022-06-10 17:58:17 @nsaphra Well, if anyone could make that funny, I guess it would be you?

2022-06-10 17:54:53 @nsaphra I mean, comedy source material maybe?

2022-06-10 17:52:06 @LeonDerczynski #goals

2022-06-10 17:48:24 Just because you can describe something in language doesn't mean that you have created a test for it. It does mean that we need to bring all of our critical thinking to the table and not mistake these tasks for any kind of evidence of "machine capabilities".

2022-06-10 17:47:04 I think we can trace the proliferation of bogus tasks to the very flexibility of language + the ability of language sequence models to produce seemingly coherent, on-topic text. >

2022-06-10 17:44:59 @timnitGebru IKR??

2022-06-10 17:13:29 I haven't had the time to dig into BigBench, but this is a reminder to not be impressed by size (here, I guess, number of constituent tasks). Effective evaluation depends on the quality of tasks, including construct validity.For more, see: https://t.co/kR4ZA1Bawz https://t.co/ZIdsEucFse

2022-06-10 17:12:07 The answers to these questions are No, No, and No. Casting this as an evaluation that could somehow quantitatively reach any other answer betrays drastic misunderstandings of both the supposed target domain (justice!) and the actual capabilities of language models.#BigBench https://t.co/jGGxp5Lhsm

2022-06-10 16:57:57 @mmitchell_ai @_KarenHao !

2022-06-10 16:36:06 RT @naaclmeeting: NOTE: the early registration deadline has been extended to June 10th (today)! If you haven't already, please register tod…

2022-06-09 19:54:33 RT @fchollet: A pretty common fallacy in AI is cognitive anthropomorphism: "as a human, I can use my understanding of X to perform Y, so if…

2022-06-09 19:51:52 RT @emilymbender: I'm not interested in how impressed the journalist was. That's not news. What I need to know as a reader, what I want the…

2022-06-09 19:51:40 I'm not interested in how impressed the journalist was. That's not news. What I need to know as a reader, what I want the public to know, is what is being done in the name of "AI", to whom, who benefits, and how can democratic oversight be exerted?

2022-06-09 19:50:25 And can I just add that the tendency of journalists who write like this to center their own experience of awe---instead of actually informing the public---strikes me as quite self-absorbed.>

2022-06-09 19:49:04 This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but the thing is we have to keep in mind that all of that meaning making is on our side, not the machines'. https://t.co/CQErPWEEQs>

2022-06-09 19:47:20 I guess the task of asking journalists to maintain a critical distance from so-called "AI" is going to be unending. For those who don't see what the problem is, please see: https://t.co/0Xc7WVeswa https://t.co/bKCZWKRRBU

2022-06-09 19:36:35 Yup, still boring. Also not interested in hearing how impressed people are with model output. https://t.co/mEump1pQWE

2022-06-09 19:36:04 RT @emilymbender: The genre of "I'm going to ask GPT-3 and see what it says" is fundamentally boring to me, and I think I've put my finger…

2022-06-09 19:20:24 @Abebab So sorry to hear it! I hope that things will resolve to your satisfaction and that in the meantime you'll find some space to think your own thoughts/do the work you want to do.

2022-06-09 13:30:43 @NickRMorgan Not necessarily model failure --- if the model's task is to generate word sequences, and it generates one, has it failed? More like task/tech fit failure.

2022-06-09 13:29:58 @complingy @swabhz Yeah, the queries I get seem to come from the local high-prestige/high-pressure high school. And I'm always grumpy about them, since a) it seems like HS staff are trying to get me to do their job &

2022-06-09 13:20:32 RT @emilymbender: @KathrynECramer Asking "Is a language model truthful" is actually a category error: language models are word sequence pre…

2022-06-09 13:20:17 @KathrynECramer I steer clear of "hallucination", too, for two reasons: 1) I dislike making light of serious mental health symptoms; 2) "hallucinate" suggests perception/inner life, which again language models don't have

2022-06-09 13:19:04 @KathrynECramer For more on that point, see "AI and the Everything in the Whole Wide World Benchmark" with @rajiinio @cephaloponderer @alexhanna and @amandalynneP https://t.co/kR4ZA1Bawz

2022-06-09 13:17:56 @KathrynECramer The risk that people then turn around and use these benchmarks to say "See, my foul-mouthed, 4*han trained lg model tells it like it is!" shows yet another angle on the problems with claiming generality where it doesn't/can't exist.>

2022-06-09 13:15:49 @KathrynECramer But it's not even a generalizable measure of that --- just a measure over some specific dataset.>

2022-06-09 13:15:01 @KathrynECramer Those supposed tests for truthfulness only test the extent to which LMs output word sequences/assign higher prob. to word seqs that the humans creating that specific benchmark marked as "truthful".>

2022-06-09 13:13:01 @KathrynECramer Asking "Is a language model truthful" is actually a category error: language models are word sequence predictions models. There is no communicative intent, so we can't ask whether the intent is to communicate truthfully or to dissemble.>

2022-06-08 22:49:25 @Miles_Brundage @timnitGebru @LatanyaSweeney @safiyanoble @ruha9 @StanfordHAI @OpenAI @CohereAI Or, you could actually do the work of creating a document that is honestly and respectfully situated with respect to the "diverse voices" you say must be listened to. Hoping the media will do that work for you isn't it. Again: You are claiming credit for others' work.

2022-06-08 22:37:29 @Miles_Brundage @timnitGebru @LatanyaSweeney @safiyanoble @ruha9 @StanfordHAI @OpenAI @CohereAI The first sentence says "Cohere, OpenAI, and AI21 Labs have developed a preliminary set of best practices" https://t.co/2qSsJPEkFb You're making it sound like it's your own work.

2022-06-08 00:37:24 @ruthstarkman Yes sad but also not a problem I am in a position to solve. I have responsibilities to the students at my own institution.

2022-06-07 23:08:28 Also, no penguins, so I won't be answering. (Student from another institution was asking to do an undergraduate thesis with me.)

2022-06-07 23:07:45 Today in cold-call emails: Got one (well last week, but doing a post-travel email cleanout today) written in lucida calligraphy or similar, i.e. something that takes at least 3x as long as normal to read. Pro-tip: don't do that.

2022-06-07 19:44:06 RT @DAIRInstitute: Dylan Baker (@dylnbkr) and Dr. Alex Hanna (@alexhanna) write: "Our intent, as the first full-time employees of the new…

2022-06-07 19:15:32 RT @alexhanna: New article alert: @dylnbkr and I write for the @SSIReview + @FordFoundation #PIT series how @DAIRInstitute is pursuing a n…

2022-06-07 18:49:37 RT @CriticalAI: #CriticalAI is reissuing our greatest hits from our ETHICS OF DATA CURATION blogs from 2021. Beginning with the first in th…

2022-06-07 18:13:05 RT @emilymbender: Really interesting case study to think about from the perspective of #ethNLP -- and also what linguists can contribute to…

2022-06-07 15:24:15 Thanks @NicoleWetsman for this thoughtful coverage!

2022-06-07 15:22:36 As an #ethNLP case study, this one is interesting, because there is less (but not no) automation between the data and the insufficiently contextualized interpretation of it. So I think it might make a good model to understand what's going on with the application of lg models. >

2022-06-07 15:21:20 Also, the construction of the corpus being searched is extremely important (as we've been saying, in the context of data documentation):>

2022-06-07 15:19:42 "In many instances, judges must look for the “ordinary meaning” of a word" But we know that words have many ordinary meanings, and that which one is salient depends on context, not (only, or even very much) on overall corpus frequency.>

2022-06-07 15:18:42 Really interesting case study to think about from the perspective of #ethNLP -- and also what linguists can contribute to society.>

2022-06-07 12:21:22 RT @breitband: Podcast * Misunderstandings and hype: speaking more precisely about artificial intelligence – interview with @emilymbender * Den…

2022-06-07 02:12:25 The bingo card of excuses for not doing ethics in NLP. Translation by @Ricardo_Joseh_L https://t.co/85K7Iw84Cg

2022-06-06 22:22:03 RT @rcalo: Today I and 8 colleagues resigned from the Axon Ethics Advisory Board in the wake of the company's decision to respond to the Uv…

2022-06-06 21:03:46 @gabycommatteo @timnitGebru Glad to hear it!

2022-06-06 19:22:43 @bbeilz I find that academic writing that avoids clear statements of who is responsible for the work, even if the motivation is humility, leads to a misleading sense of objectivity. There is real value in taking both credit &

2022-06-06 19:20:10 @bbenzon Well, if you'd checked my bio before replying to me, you'd see that I am a professor of linguistics. Not sure why that's crap? But have a nice day.

2022-06-06 19:16:25 @bbenzon Hey, I'm a linguist and a professor and writing is a big part of my job. Do you think I'm unaware of this fact, or do you just enjoy mansplaining on a Monday afternoon?

2022-06-06 19:13:30 @bbeilz If you are presenting it as singly authored work, then "we" is just weird, unless you explain who the "we" refers to.

2022-06-06 19:03:13 Semester-school academics always talking about summer research when us quarter system folks haven't finished the teaching term yet... https://t.co/Z6SSyju9eP

2022-06-06 18:47:26 RT @Colotlan: Indigenous languages and their analysis, machine translation, and NLP analysis present significant challenges, as discussed by @…

2022-06-06 18:32:20 @curtosys

2022-06-06 18:28:28 @ejfranci2 Yay!

2022-06-06 17:53:05 @rctatman My best guess is that they've mistaken the forms of the (English) words which name and/or are used to express reasoning for actual reasoning ... and then likewise the manipulation of those forms by the LLMs for reasoning.

2022-06-06 16:56:11 @MuhammedKambal Not sure what languages are involved, but the French "on" sometimes translated as "we" is actually quite different (in this case) from English "we". "We ran the experiment" is only true if >

2022-06-06 16:38:50 Here's a great example for studying gaping holes in chains of logical reasoning --- by people, about language models, though I suppose you could apply the same technique to the word strings output by language models if you wanted to. https://t.co/eAjE3A5ujB

2022-06-06 16:32:24 @KLdivergence I think it helps to think less in terms of self-promotion (if you find that cringe) and more in terms of creating the definitive listing of your work (&

2022-06-06 16:24:47 RT @complingy: Glad to see this analysis. I still regularly invoke the #BenderRule in my reviews, so it is definitely not a solved problem.…

2022-06-06 16:16:26 RT @emilymbender: I'd love to collect translations in more languages! I'm happy to format if folks send me the translations.... https://t.c…

2022-06-06 16:07:49 @bronwyn For instance, my email on my LinkedIn profile is literally "see-my@webpage.edu". Apparently people have tried to email that address....

2022-06-06 16:05:37 @bronwyn One key feature of the "secret word" idea here (borrowed from a more senior academic) is that I give myself permission not to reply to cold-call emails that don't follow that instruction.(That did require making sure my email wasn't too discoverable outside this page, though.)

2022-06-06 16:04:40 @bronwyn It was partially because of this kind of email (alongside even more obnoxious cold-call emails from would-be entrepreneurs wanting to 'pick my brain') that I put up my contacting me page: https://t.co/nxRxxz45Gp

2022-06-06 15:47:19 RT @KarnFort1: I noted that a number of presentations @acl2022 did not mention the language being dealt with (#BenderRule). How generalized…

2022-06-06 15:17:38 Reading student work and feeling more and more rage towards folks who teach students to avoid 1sg pronouns in academic writing. Such tortured, hard to follow prose when it would be so straightforward to say things like "I chose the examples based on..." or "I found that..."

2022-06-06 14:54:59 @KarnFort1 (You see, when I don't have you next to me, my grammar/spelling mistakes don't get corrected...)

2022-06-06 14:53:16 @KarnFort1 Anonymous by request of the anonymous contributor.

2022-06-06 13:05:41 @Ricardo_Joseh_L Thank you!

2022-06-06 12:51:40 @Ricardo_Joseh_L Thank you. The first line of the alt text is: "NLP Ethics Excuse Bingo" bingo card. The squares are:

2022-06-06 12:23:22 @Ricardo_Joseh_L If you have a moment, can you do the title &

2022-06-06 12:18:10 Spanish version / versión en españolhttps://t.co/l58q7F2a0qw/@KarnFort1

2022-06-06 12:13:48 Spanish version / versión en españolhttps://t.co/l58q7F2a0qw/@KarnFort1

2022-06-06 12:10:02 Going to talk about ethics at NLP, AI, etc. conferences? Don't forget your bingo card! Spanish version provided by anonymous contributor. https://t.co/KuIm2N6wUh

2022-06-06 10:21:34 I'd really like to have versions in lots and lots of languages. Send them to me and I'll do the formatting. https://t.co/jJIXiFfUwM

2022-06-05 15:05:24 RT @emilymbender: Ready for discussions of #ethNLP and ethics review at NLP/ML etc conferences? Don't forget your bingo card! (With @KarnFo…

2022-06-05 05:44:18 RT @emilymbender: French version: https://t.co/DeN02rLFHx

2022-06-04 14:48:29 RT @breitband: "I doubt that there is anything at this point in time that can rightly be called 'artificial intelligence'," says @emily…

2022-06-04 14:44:34 RT @ruthstarkman: @breitband @emilymbender Ah! Found it, thanks! Lovely that you're bringing this important AI critic to a German-speaking audi…

2022-06-04 13:10:57 RT @timnitGebru: @simonallison @Kantrowitz @thecontinent_ I think everyone who reads about how magical these systems are would also benefit…

2022-06-04 11:50:13 RT @breitband: Our topics:* How journalists can report better on artificial intelligence – interview with @emilymbender * Den v…

2022-06-03 16:32:09 RT @emilymbender: Listening to a tutorial on @OSFramework by @TimoRoettger and we went looking to see if the UI is localized/localizable. S…

2022-06-03 16:22:15 @asayeed

2022-06-03 15:28:35 Thanks to @AlexisMichaud13 for the inspiration. He showed a bingo of excuses regarding "open data".

2022-06-03 15:20:05 @jordimash @KarnFort1 That was in our first draft but we only had 16 squares... (and weren't ready to move up to 25)

2022-06-03 13:30:03 @SashaMTL @alexhanna Sounds good. Get better quick, Alex!

2022-06-03 13:20:41 @alexhanna @SashaMTL Assuming my bus is on time, maybe we could meet up near Jardin des Plantes around 9:30/9:45? (Hopefully with Alex, if she's up to it!)

2022-06-03 13:14:37 @alexhanna @SashaMTL Yes &

2022-06-03 13:11:46 @SashaMTL @alexhanna It'd be a late night ... my bus gets to Gare de Lyon at 21:08.

2022-06-03 13:09:28 @alexhanna It'd be kinda awesome to actually meet in person after all this time ... in Paris!

2022-06-03 13:08:44 @alexhanna @KarnFort1 Yeah -- I'll be back in Paris Sunday evening (for my flight on Monday). How are you doing?

2022-06-03 13:04:15 @alexhanna @KarnFort1 Kinda -- I've been in Banyuls-sur-Mer for a summer school. Now heading to Paris to see an old friend (near Paris...), before flying home Monday.

2022-06-03 12:49:48 @hipsterelectron @DippedRusk @KarnFort1 The French version (see my reply to my OT) also has alt text :)

2022-06-03 12:24:36 RT @KarnFort1: Upcoming #TAL conferences? Don't forget your bingo of excuses for not doing ethics! (with @emilymbender da…

2022-06-03 12:24:25 French version: https://t.co/DeN02rLFHx

2022-06-03 12:23:12 Ready for discussions of #ethNLP and ethics review at NLP/ML etc conferences? Don't forget your bingo card! (With @KarnFort1 on the TGV from Perpignan to Paris). https://t.co/qMfHNuzwpv

2022-06-03 05:35:45 RT @MisterABK: Fascinating cultural differences https://t.co/gVsU7m63FD

2022-06-03 04:15:07 @TaliaRinger My guess is not your fault: probably the dominant factors are gender + ML disdain for domain expertise. Adjusting vocab/presentation can maybe move the needle a little bit, but I doubt that's the main thing.

2022-06-03 03:26:57 RT @AmericasNLP: The website of the Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas is…

2022-06-02 19:34:16 1) purple2) orange3) brown https://t.co/WpkchcL3XR

2022-06-02 17:03:10 Listening to a tutorial on @OSFramework by @TimoRoettger and we went looking to see if the UI is localized/localizable. Seems like no? My guess is that global uptake would be much better if the website were available in more languages.

2022-06-02 15:43:22 @lisa_b_davidson @jessgrieser @drswissmiss Bonus property of schedule send: that email you wrote on Saturday morning isn't buried under all the other email when the addressee opens it on Monday.

2022-06-02 15:39:47 @jessgrieser @drswissmiss @lisa_b_davidson I think there is a difference between 1:1 emails and group emails. If you have a group thread there can be pressure to reply if others are replying lest a consensus develop without one's input. (As was mentioned above.)

2022-06-02 06:49:38 RT @anyabelz: In case you missed our flier at #ACL2022nlp - nb this is for everyone in #NLProc and #ML not just people working on evaluatio…

2022-06-01 13:05:10 @CGraziul I have no idea what you mean by "language speaks us", actually.

2022-06-01 08:45:24 RT @GirlsWhoCode: Congrats to computer scientist @timnitGebru for being named one of @TIME’s 100 Most Influential People of 2022. #TIME10…

2022-06-01 08:44:28 @alexhanna Get well soon!

2022-06-01 08:12:07 RT @DingemanseMark: An underappreciated aspect of #dalle2's secret language abilities: prompts like "data scientist" turn out to have cover…

2022-06-01 08:11:57 RT @DingemanseMark: Hate to pour cold water on this fun observation but the notion of "secret language" with "meanings" here is fundamental…

2022-06-01 08:11:42 RT @DingemanseMark: DALL-E does impressive visualizations of biased datasets. I like how the first example is a meme-like koala dunking a b…

2022-06-01 07:23:21 RT @LauraAmalasunta: As a historian of northern Europe, allow me to explain: you don't get food in Iceland because in 1986 it was towed out…

2022-06-01 07:00:44 (To be more precise, I usually have a choice between feet firmly on the ground or back against the backrest, not both. But if the seat has a slight angle, I might have to sit really far forward for feet-firmly-planted.)

2022-06-01 06:59:46 Week two of in-person conferencing and every time I sit down I'm reminded of a key perk of virtual conferencing: never getting stuck in a chair where my feet don't touch the ground properly. #ShortPeopleProblems

2022-06-01 06:48:54 @silvia_fabbi @LiebertPub I can definitely write a filter which will send their messages right to spam, but I shouldn't have to. Also seems worthwhile to alert the world to yet another bad (likely predatory) actor in this space.

2022-05-31 20:35:51 RT @rctatman: Alright, fine: it's getting enough traction that I think I need to address this paper as a certified Grumpy Linguist in NLP.…

2022-05-31 20:35:33 <

2022-05-31 20:35:20 Read the whole thread up &

2022-05-31 14:11:49 RT @mmitchell_ai: Another thing I learned from @timnitGebru: keep track of accomplishments of your marginalized colleagues

2022-05-31 13:23:51 @SeeTedTalk @complingy That's good. They should also be in the @aclanthology though.

2022-05-31 13:22:40 @kirbyconrod @joshraclaw Can I fave again for the chef's kiss back-formation?

2022-05-31 13:11:41 @complingy @SeeTedTalk Unclear to me why the ACL 2021 videos aren't linked on the anthology though. Was that Underline, or SlidesLive or something else?

2022-05-31 12:06:00 .@LiebertPub your unsubscribe function is broken. I AM NOT INTERESTED IN YOUR SPAM --- and yet when I try to unsubscribe, I just get an "error" (and then keep getting your mails). How do I get off your f*cking list?

2022-05-30 19:19:05 @davidschlangen Agreed on all counts. I would add: Specific online social-ish programming (BoaF, pop-up mentoring, etc) online during the night hours local time --- provide a focal point for those away from the in person timezone.

2022-05-30 15:35:47 @rajiinio https://t.co/k9Q2dksde3

2022-05-30 15:34:38 @rajiinio Agreed ... the problem starts with bringing in "optimization". Instead of making connections, having empathy, building community.

2022-05-30 12:09:49 @KarnFort1 @GdrLift My favorite way to show off

2022-05-30 06:37:47 @markoff Thanks for the shout-out! NB, I'm @emilymbender...

2022-05-29 12:05:41 @rogerkmoore cc @SashaMTL @NannaInie

2022-05-29 12:05:09 RT @rogerkmoore: We should never have called it “language modelling” all those years ago

2022-05-28 15:33:50 RT @ani_nenkova: ‘Move fast, break things’ is rightly criticised as a guiding philosophy but do people have real life examples of when ‘be…

2022-05-27 10:36:57 Now available through the #acl2022nlp underline! https://t.co/HEYEgxJYju

2022-05-27 10:36:41 @suzatweet @ggdupont @BigscienceW If possible, it would be good for @underlineio to add "Panel" to the title (so it can be found together with the other panels that way).

2022-05-27 10:35:22 @suzatweet @ggdupont @BigscienceW Thank you!

2022-05-27 09:53:31 Talking about spurious applications of LLMs --- has anyone used one to try to play chess?

2022-05-27 09:09:17 @BigscienceW I still can't find it. Does anyone know?

2022-05-27 06:49:40 Can anyone else find this panel yet on the #acl2022nlp @underlineio site? I see the other two, but not ours. https://t.co/HEYEgxJYju

2022-05-27 05:50:12 Strange new world to have an academic event both in the future and in the past. New to me anyway, I suppose everyone involved with TV/movies knows this well...

2022-05-27 05:49:05 I hope everyone finds this discussion illuminating. It was certainly interesting to get to participate in! https://t.co/HEYEgxJYju

2022-05-26 22:06:53 RT @AJLUnited: Check out this thread by @emilymbender about the @nytimes article about “Can A.I.-Driven Voice Analysis Help Identify Mental…

2022-05-26 21:36:26 RT @mmitchell_ai: The @BigscienceW #acl2022nlp workshop is tomorrow, and includes a panel on Ethical &

2022-05-26 12:49:54 @ShlomoArgamon Yeah, I've started talking about "societal impacts of NLP" and similar.

2022-05-26 11:31:16 @LingMuelller Some of my students borrowed older laptops from the university! I think others worked out how to use the server cluster.

2022-05-26 11:09:45 @LeonDerczynski Hang in there!

2022-05-26 10:52:20 @tallinzen @LChoshen Also, while it's very nice to talk with people in person at conferences (I had a blast!) I'd not overestimate the chances of getting a word in person with a keynote speaker at a 3000-person conference....

2022-05-26 10:50:20 @tallinzen @LChoshen On the contrary, I see other reasons to maintain hybrid conferences, and would hope that the possibility of remote keynotes (and award acceptance talks...) would improve the diversity of speakers we see in those roles.

2022-05-26 10:39:07 @Abebab Yeah, that's good insurance.

2022-05-26 10:32:07 @LingMuelller Last I checked, VirtualBox didn't support the new Apple hardware, which is a non-starter for me (until we figure out a workaround). So, no to your first question :)

2022-05-26 10:23:35 @Abebab Arg!! I hope you'll be able to reconstruct your thoughts quickly.

2022-05-26 09:16:26 @Abebab @DingemanseMark I will, I promise!

2022-05-26 08:58:15 @Abebab Coffee would have been lovely! I'll look forward to the next opportunity.

2022-05-26 08:27:03 @Abebab You were there! I thought I spotted you but wasn't sure. I'm sorry I missed the chance to say hi.

2022-05-26 05:32:25 RT @emilymbender: I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't ag…

2022-05-26 04:58:35 @tdietterich It's a poll. Not a request for advice.

2022-05-26 04:46:01 You see a thread and you want to respond positively. You:

2022-05-25 20:50:20 High precision anyway. Not really checking recall.

2022-05-25 20:36:59 Starting to get very good at predicting who in my replies will have "AI" or "AGI" in their Twitter bio...
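
[Editor's note] The precision/recall quip in the two tweets above can be made concrete with a small sketch. The numbers and account names below are invented for illustration; the tweets report no actual figures.

```python
def precision_recall(predicted, actual):
    """Precision: what fraction of flagged accounts really have "AI" in the bio.
    Recall: what fraction of accounts with "AI" in the bio were flagged."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical replies: 4 accounts flagged as "will have AI in bio",
# 5 accounts that actually do. Three guesses were right.
flagged = {"user_a", "user_b", "user_c", "user_d"}
ai_bio = {"user_a", "user_b", "user_c", "user_e", "user_f"}

p, r = precision_recall(flagged, ai_bio)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.60
```

High precision without checking recall is exactly this situation: the guesses made are mostly right, but nobody counted the "AI"-bio accounts that were never guessed.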

2022-05-25 20:17:39 @KyleMorgenstein https://t.co/tWGVU6Tup1.

2022-05-25 20:15:07 And work that frames the problem as one that could be "solved" at the level of training or programming a system "if only" we had human agreement on ethical systems is worse than useless b/c it distracts from the actual problems.

2022-05-25 20:13:56 Working out systems of governance, appropriate regulations &

2022-05-25 20:12:33 Discourses around "teaching machines to be ethical" are frankly just a distraction, and one grounded in fantastical ideas about "AGI". >

2022-05-25 20:10:55 5. And finally, task/tech fit: systems that are designed for their use case, evaluated in situ, including for whether the task is even sensible and how the system might harm vulnerable &

2022-05-25 20:09:31 3. Transparency, so that advocates for those affected by the technology can push back.4. More generally, recourse when there is harm.>

2022-05-25 20:08:38 What you'll find is that the proposed solutions aren't autonomous systems that are "ethical", but rather:1. (Truly) democratic oversight into what systems are deployed.2. Transparency, so human operators can contextualize system output.>

2022-05-25 20:06:31 That argument presupposes that the goal is to create autonomous systems that will "know" how to behave "ethically". But if you actually seriously engage with the work of authors like @ruha9 @safiyanoble @timnitGebru @csdoctorsister @Abebab @rajiinio @jovialjoy &

2022-05-25 20:05:08 I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right"This always feels like a cop-out to me, and I think I've put my finger on why:>

2022-05-25 20:01:28 RT @timnitGebru: 4-5 years ago @mmitchell_ai was a semifinalist for the MIT Tech Review 35 under 35. I wrote a letter of support. One of th…

2022-05-25 19:27:16 RT @mmitchell_ai: YAY!Thank you to @JesseDodge for being such an advocate for my work, and to our co-authors: Amit Goyal, @kotymg, @karlst…

2022-05-25 19:19:34 RT @BlancheMinerva: Fabulous work lead by @KreutzerJulia and @iseeaswell. These issues are massive and systematic, and blatantly invalidate…

2022-05-25 18:59:45 RT @emilymbender: @vukosi It's really frustrating that the work of doing that checking isn't done by the people who built the corpus (best)…

2022-05-25 18:46:05 That one even has fake ISSNs! "e-IՏՏƝ: 2640-0502 p-IՏՏƝ: 2640-0480"

2022-10-29 04:02:24 #ThreeMarvins

2022-10-29 04:01:56 Finally, I can just tell that some reading this thread are going to reply with remarks abt politicians being thoughtless text synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, &

2022-10-29 04:01:21 And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine./12 https://t.co/pDCl1lgRx8

2022-10-29 04:00:49 This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."/11 https://t.co/USu7GiP7V1

2022-10-29 04:00:20 Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data./10 https://t.co/h44gCyjkka

2022-10-29 03:59:35 This is non-sensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability./9 https://t.co/4S61hX1tQb

2022-10-29 03:59:02 I'd really rather think that there are better ways to think outside the box in terms of policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out./8 https://t.co/UTEr3VflVo

2022-10-29 03:58:30 Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making./7

2022-10-29 03:58:10 Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the very democratic process as its material. Such a move seems disastrously anti-democratic./6

2022-10-29 03:57:47 The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus"./5 https://t.co/WRf5bT8iMI

2022-10-29 03:56:46 However, the quotes in the article leave me very concerned that the artists either don't really understand or have expectations of the general AI literacy in Denmark that are probably way too high./4

2022-10-29 03:56:38 I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable./3

2022-10-29 03:56:26 Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system./2

2022-10-29 03:56:13 Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-28 21:28:04 @DrVeronikaCH See end of thread.

2022-10-28 21:22:27 @JakeAziz1 In my grammar engineering course, students work on extending implemented grammars over the course of the quarter. Any given student only works on one language (with a partner), but in our class discussions, everyone is exposed to all the languages we are working on.

2022-10-28 20:54:22 For that matter, what would the world look like if our system prevented the accumulation of wealth that sits behind the VC system?

2022-10-28 20:53:48 What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but rather to realistic, community-governed language technology?>

2022-10-28 20:40:46 (Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: https://t.co/CKybX4BRsz )

2022-10-28 20:40:05 Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.

2022-10-27 15:35:48 @jessgrieser For this shot, yes. Second dose is typically the rough one for those for whom it is rough. Also: thank you for your service!!

2022-10-27 05:16:49 RT @mark_riedl: That is, we can't say X is true of a LM at scale Y. We instead can only say X is true of a LM at scale Y trained in unknown…

2022-10-26 21:03:30 Another fun episode! @timnitGebru did some live tweeting here. We'll have the recording up in due course... https://t.co/UwgCA1uu4a

2022-10-26 20:53:19 RT @timnitGebru: Happening in 2 minutes. Join us.https://t.co/vDCO6n1cno

2022-10-26 18:28:08 AI "art" as soft propaganda. Pull quote in the image, but read the whole thing for really interesting thoughts on what a culture of extraction means. By @MarcoDonnarumma h/t @neilturkewitzhttps://t.co/2uAJvBTVbM https://t.co/X4at2irn0V

2022-10-26 17:51:27 In two hours!! https://t.co/70lqNfeHjh

2022-10-26 15:20:39 @_akpiper @CBC But why is it of interest how GPT-3 responds to these different prompts? What is GPT-3 a model of, in your view?

2022-10-25 18:16:23 @_akpiper @CBC How did you establish that whatever web garbage GPT was trained on was a reasonable data sample for what you were doing?

2022-10-25 18:14:43 Sorry, folks, if I'm missing important things. A post about sealioning led to my mentions being filled with sealions. Shoulda predicted that, I guess. https://t.co/pg6IfnZxUQ

2022-10-25 12:51:32 RT @emilymbender: Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly repor…

2022-10-25 12:51:29 RT @emilymbender: Thinking about this again this morning. I wonder what field of study could provide insight into the relative contribution…

2022-10-25 00:29:46 @timnitGebru @Foxglovelegal From what little I understand, these regulations only kick in when there are customers involved paying for a product. So, I guess the party with standing might be advertisers who are led to believe that they are placing their ads in an environment that isn't hate-speech infested.

2022-10-25 00:27:03 @timnitGebru Huh -- I wonder how truth in advertising regulations apply to cases like this, where people representing companies but on their own twitter account go around making unsupported claims about the effectiveness of their technology.

2022-10-25 00:19:07 @olivia_p_walker https://t.co/YyrMnZdhjW

2022-10-25 00:16:57 I mean, acting like pointing out that something is eugenicist is the problem is not the behavior I'd expect of someone who is actually opposed to eugenics.

2022-10-25 00:15:14 If you're offended when someone points out that your school of thought (*cough* longtermism/EA *cough*) is eugenicist, then clearly you agree that eugenics is bad. So why is the move not to explore the ways in which it is (or at least appears to be) eugenicist and fix that?

2022-10-25 00:03:12 RT @aclmeeting: #ACL2023NLP is looking for an experienced and diverse pool of Senior Area Chairs (SACs). Know someone who makes the cut?…

2022-10-24 19:18:09 @EnglishOER Interesting for what? What are you trying to find out, and why is poking at a pile of data of unknown origin a useful way to do so?

2022-10-24 17:06:13 @EnglishOER But "data crunching of so much text" is useless unless we have a good idea of how the text was gathered (curation rationale) and what it represents.

2022-10-24 16:40:43 Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly reporting on how exciting it was to read the results?

2022-10-24 04:29:30 @athundt @alkoller It looks like only 7 of them are visible but that's plausible.

2022-10-24 04:17:55 I wasn't sure what to do for my pumpkin this year, but then @alkoller visited and an answer suggested itself.#SpookyTalesForLinguists https://t.co/Bp3rULsA9z

2022-10-23 20:53:56 @jasonbaldridge I bookmarked it when you first announced the paper on Twitter but haven't had a chance to look yet.

2022-10-23 19:52:26 @tdietterich Fine. And the burden of proof for that claim lies with the person/people making it.

2022-10-23 19:47:57 @tdietterich Who is going around saying airplanes fly like birds do?

2022-10-23 19:32:27 To the extent that computational models are models of human (or animal) cognition, the burden of proof lies with the model developer to establish that they are reasonable models. And if they aren't models of human cognition, comparisons to human cognition are only marketing/hype.

2022-10-23 19:08:14 @Alan_Au @rachelmetz https://t.co/msUIrYeCEr

2022-10-23 05:29:16 @deliprao Also if you feel the need to de-hype your own tweet, maybe revisit and don't say the first thing in the first place?

2022-10-23 05:27:35 @deliprao What does "primordial" mean to you?

2022-10-23 05:26:27 How can we get from the current culture to one where folks who build or study this tech (and should know better) stop constantly putting out such hype?

2022-10-23 05:24:52 And likening it to "innermost thoughts" i.e. some kind of inner life is more of the same.https://t.co/kFfzL3gbhm

2022-10-23 05:22:59 Claiming that it's the kind of thing that might develop into thinking sans scare quotes with enough time? data? something? is still unfounded, harmful AI hype. https://t.co/hilvqpXgWM

2022-10-23 03:51:33 RT @emilymbender: @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 03:51:31 @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 01:18:48 @EnglishOER @alexhanna @dair_ai For the text ones, I tend to say "text synthesis machine" or "letter sequence synthesis machine". I guess you could go for "word and image synthesis machines", but "mathy math" is also catchy :)

2022-10-22 23:32:51 RT @timnitGebru: I need to get this. Image is Mark wearing sunglasses with a white hoodie that has the writings below in Black.Top:Sto…

2022-10-22 20:07:59 @safiyanoble I'm a fan of Choffy, but as someone super sensitive to caffeine I can say it will still keep me up if I have it in the afternoon. (Don't expect hot cocoa when you drink it. Think rather cacao tea.)

2022-10-21 23:46:26 @LeonDerczynski And now I'm hoping that no one will retweet the original (just your QT) because otherwise folks won't check the date and will wonder why I'm talking about GPT-2!

2022-10-21 23:39:49 @LeonDerczynski Hah -- thanks for digging that up. I've added it here, making it (currently) the earliest entry.https://t.co/uKA4tuv4jF

2022-10-21 23:38:09 RT @LeonDerczynski: This whole discussion - and the interesting threads off it - have aged like a fine wine https://t.co/ykUiRfoGTf

2022-10-21 23:11:29 @zehavoc I think a good limitations section makes the paper stronger by clearly stating the domain of applicability of the results. If that means going back and toning down some of the high-flying prose in the introduction, so much the better!

2022-10-21 19:19:40 @kirbyconrod I don't know, but I love the form pdves so much. Do you name your folders "Topic pdves"?

2022-10-21 19:14:54 @LeonDerczynski @yuvalmarton @complingy I want this meme to fit here but it doesn't --- if only people would cite the deep #NLProc (aka deep processing, not deep learning). https://t.co/7rrLQ11GEm

2022-10-21 18:19:29 RT @rctatman: Basically: knowing about ML is a subset of what you need to know to be able to build things that use ML and solve a genuine p…

2022-10-21 14:15:13 RT @mer__edith: You can start by learning that "AI" is first &

2022-10-21 04:12:05 RT @timnitGebru: I say the other way around. To those who preach that "AI" is a magical thing that saves us, please learn something about…

2022-10-21 01:44:09 @edwardbkang @simognehudson Please do post a link to your paper when it is out!

2022-10-29 13:59:20 RT @emilymbender: What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but…

2022-10-29 13:01:58 RT @emilymbender: I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxio…

2022-10-29 13:00:56 RT @emilymbender: Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-29 04:02:24 #ThreeMarvins

2022-10-29 04:01:56 Finally, I can just tell that some reading this thread are going to reply with remarks abt politicians being thoughtless text synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, &

2022-10-29 04:01:21 And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine./12 https://t.co/pDCl1lgRx8

2022-10-29 04:00:49 This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."/11 https://t.co/USu7GiP7V1

2022-10-29 04:00:20 Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data./10 https://t.co/h44gCyjkka

2022-10-29 03:59:35 This is non-sensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability./9 https://t.co/4S61hX1tQb

2022-10-29 03:59:02 I'd really rather think that there are better ways to think outside the box in terms of policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out./8 https://t.co/UTEr3VflVo

2022-10-29 03:58:30 Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making./7

2022-10-29 03:58:10 Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the very democratic process as its material. Such a move seems disastrously anti-democratic./6

2022-10-29 03:57:47 The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus"./5 https://t.co/WRf5bT8iMI

2022-10-29 03:56:46 However, the quotes in the article leave me very concerned that the artists either don't really understand or have expectations of the general AI literacy in Denmark that are probably way too high./4

2022-10-29 03:56:38 I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable./3

2022-10-29 03:56:26 Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system./2

2022-10-29 03:56:13 Today's #AIhype take-down + analysis (first crossposted to both Twitter &

2022-10-28 21:28:04 @DrVeronikaCH See end of thread.

2022-10-28 21:22:27 @JakeAziz1 In my grammar engineering course, students work on extending implemented grammars over the course of the quarter. Any given student only works on one language (with a partner), but in our class discussions, everyone is exposed to all the languages we are working on.

2022-10-28 20:54:22 For that matter, what would the world look like if our system prevented the accumulation of wealth that sits behind the VC system?

2022-10-28 20:53:48 What would the world look like if VC funding wasn't flowing to companies trafficking in #AIhype and surveillance tech but rather to realistic, community-governed language technology?>

2022-10-28 20:40:46 (Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: https://t.co/CKybX4BRsz )

2022-10-28 20:40:05 Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.

2022-10-27 15:35:48 @jessgrieser For this shot, yes. Second dose is typically the rough one for those for whom it is rough. Also: thank you for your service!!

2022-10-27 05:16:49 RT @mark_riedl: That is, we can't say X is true of a LM at scale Y. We instead can only say X is true of a LM at scale Y trained in unknown…

2022-10-26 21:03:30 Another fun episode! @timnitGebru did some live tweeting here. We'll have the recording up in due course... https://t.co/UwgCA1uu4a

2022-10-26 20:53:19 RT @timnitGebru: Happening in 2 minutes. Join us.https://t.co/vDCO6n1cno

2022-10-26 18:28:08 AI "art" as soft propaganda. Pull quote in the image, but read the whole thing for really interesting thoughts on what a culture of extraction means. By @MarcoDonnarumma h/t @neilturkewitz https://t.co/2uAJvBTVbM https://t.co/X4at2irn0V

2022-10-26 17:51:27 In two hours!! https://t.co/70lqNfeHjh

2022-10-26 15:20:39 @_akpiper @CBC But why is it of interest how GPT-3 responds to these different prompts? What is GPT-3 a model of, in your view?

2022-10-25 18:16:23 @_akpiper @CBC How did you establish that whatever web garbage GPT was trained on was a reasonable data sample for what you were doing?

2022-10-25 18:14:43 Sorry, folks, if I'm missing important things. A post about sealioning led to my mentions being filled with sealions. Shoulda predicted that, I guess. https://t.co/pg6IfnZxUQ

2022-10-25 12:51:32 RT @emilymbender: Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly repor…

2022-10-25 12:51:29 RT @emilymbender: Thinking about this again this morning. I wonder what field of study could provide insight into the relative contribution…

2022-10-25 00:29:46 @timnitGebru @Foxglovelegal From what little I understand, these regulations only kick in when there are customers involved paying for a product. So, I guess the party with standing might be advertisers who are led to believe that they are placing their ads in an environment that isn't hate-speech infested.

2022-10-25 00:27:03 @timnitGebru Huh -- I wonder how truth in advertising regulations apply to cases like this, where people representing companies but on their own twitter account go around making unsupported claims about the effectiveness of their technology.

2022-10-25 00:19:07 @olivia_p_walker https://t.co/YyrMnZdhjW

2022-10-25 00:16:57 I mean, acting like pointing out that something is eugenicist is the problem is not the behavior I'd expect of someone who is actually opposed to eugenics.

2022-10-25 00:15:14 If you're offended when someone points out that your school of thought (*cough* longtermism/EA *cough*) is eugenicist, then clearly you agree that eugenics is bad. So why is the move not to explore the ways in which it is (or at least appears to be) eugenicist and fix that?

2022-10-25 00:03:12 RT @aclmeeting: #ACL2023NLP is looking for an experienced and diverse pool of Senior Area Chairs (SACs). Know someone who makes the cut?…

2022-10-24 19:18:09 @EnglishOER Interesting for what? What are you trying to find out, and why is poking at a pile of data of unknown origin a useful way to do so?

2022-10-24 17:06:13 @EnglishOER But "data crunching of so much text" is useless unless we have a good idea of how the text was gathered (curation rationale) and what it represents.

2022-10-24 16:40:43 Like, would we stand for news stories that were based on journalists asking Magic 8 Ball questions and breathlessly reporting on how exciting it was to read the results?

2022-10-24 04:29:30 @athundt @alkoller It looks like only 7 of them are visible but that's plausible.

2022-10-24 04:17:55 I wasn't sure what to do for my pumpkin this year, but then @alkoller visited and an answer suggested itself.#SpookyTalesForLinguists https://t.co/Bp3rULsA9z

2022-10-23 20:53:56 @jasonbaldridge I bookmarked it when you first announced the paper on Twitter but haven't had a chance to look yet.

2022-10-23 19:52:26 @tdietterich Fine. And the burden of proof for that claim lies with the person/people making it.

2022-10-23 19:47:57 @tdietterich Who is going around saying airplanes fly like birds do?

2022-10-23 19:32:27 To the extent that computational models are models of human (or animal) cognition, the burden of proof lies with the model developer to establish that they are reasonable models. And if they aren't models of human cognition, comparisons to human cognition are only marketing/hype.

2022-10-23 19:08:14 @Alan_Au @rachelmetz https://t.co/msUIrYeCEr

2022-10-23 05:29:16 @deliprao Also if you feel the need to de-hype your own tweet, maybe revisit and don't say the first thing in the first place?

2022-10-23 05:27:35 @deliprao What does "primordial" mean to you?

2022-10-23 05:26:27 How can we get from the current culture to one where folks who build or study this tech (and should know better) stop constantly putting out such hype?

2022-10-23 05:24:52 And likening it to "innermost thoughts" i.e. some kind of inner life is more of the same.https://t.co/kFfzL3gbhm

2022-10-23 05:22:59 Claiming that it's the kind of thing that might develop into thinking sans scare quotes with enough time? data? something? is still unfounded, harmful AI hype. https://t.co/hilvqpXgWM

2022-10-23 03:51:33 RT @emilymbender: @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 03:51:31 @histoftech @KLdivergence Me three: emilymbender@mastodon.social

2022-10-23 01:18:48 @EnglishOER @alexhanna @dair_ai For the text ones, I tend to say "text synthesis machine" or "letter sequence synthesis machine". I guess you could go for "word and image synthesis machines", but "mathy math" is also catchy :)

2022-10-22 23:32:51 RT @timnitGebru: I need to get this. Image is Mark wearing sunglasses with a white hoodie that has the writings below in Black.Top:Sto…

2022-10-22 20:07:59 @safiyanoble I'm a fan of Choffy, but as someone super sensitive to caffeine I can say it will still keep me up if I have it in the afternoon. (Don't expect hot cocoa when you drink it. Think rather cacao tea.)

2022-10-21 23:46:26 @LeonDerczynski And now I'm hoping that no one will retweet the original (just your QT) because otherwise folks won't check the date and will wonder why I'm talking about GPT-2!

2022-10-21 23:39:49 @LeonDerczynski Hah -- thanks for digging that up. I've added it here, making it (currently) the earliest entry.https://t.co/uKA4tuv4jF

2022-10-21 23:38:09 RT @LeonDerczynski: This whole discussion - and the interesting threads off it - have aged like a fine wine https://t.co/ykUiRfoGTf

2022-10-21 23:11:29 @zehavoc I think a good limitations section makes the paper stronger by clearly stating the domain of applicability of the results. If that means going back and toning down some of the high-flying prose in the introduction, so much the better!

2022-10-21 19:19:40 @kirbyconrod I don't know, but I love the form pdves so much. Do you name your folders "Topic pdves"?

2022-10-21 19:14:54 @LeonDerczynski @yuvalmarton @complingy I want this meme to fit here but it doesn't --- if only people would cite the deep #NLProc (aka deep processing, not deep learning). https://t.co/7rrLQ11GEm

2022-10-21 18:19:29 RT @rctatman: Basically: knowing about ML is a subset of what you need to know to be able to build things that use ML and solve a genuine p…

2022-10-21 14:15:13 RT @mer__edith: You can start by learning that "AI" is first &

2022-10-21 04:12:05 RT @timnitGebru: I say the other way around. To those who preach that "AI" is a magical thing that saves us, please learn something about…

2022-10-21 01:44:09 @edwardbkang @simognehudson Please do post a link to your paper when it is out!

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning. Their meaning is not the world. Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me." I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is ~%~science~%~. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and it suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &


2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.
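The point in the tweet above can be illustrated with a toy sketch (not from the source; a hypothetical minimal bigram model). All it ever stores is co-occurrence counts of word forms; nothing in it represents facts about the world:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": its only "knowledge" is counts of
# which word form follows which -- statistics of linguistic form.
corpus = "the moon orbits the earth . the earth orbits the sun .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent continuation seen in the training text.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("orbits"))  # -> "the"
```

The model produces fluent-looking continuations purely from form statistics, which is the sense in which an LLM's output is untethered from any model of the world.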

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Join us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL


2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-25 17:30:50 RT @timnitGebru: All these so-called #AISafety institutions doing the opposite of “safety” are funded and staffed by longtermists and effec…

2022-11-25 14:05:26 @sergia_ch But I do agree that it is disappointing to interact with engineers who refuse to see that / refuse to actually engage with the substantive critique of what they built (and the process they used to build &

2022-11-25 14:05:16 @sergia_ch I'd go a different direction here. I don't think Galactica is fixable, because there is a fundamental mismatch between what they said they wanted to build and the tech they chose. >

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Join us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language models only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galactica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning. Their meaning is not the world. Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-28 22:51:12 @neilturkewitz Spam and could be turned into a DOS attack...

2022-11-28 21:39:11 See also: https://t.co/a24HdKWhIS

2022-11-28 21:39:04 Once again, for those who seem to have missed it: Language models are not the type of thing that can testify/offer public comment/any similar sort of speech act, because they are not the sort of thing that can have a public commitment. This is atrocious. https://t.co/YrHFuy183M

2022-11-28 20:03:06 @tdietterich @fchollet I urge you to read the short (and well presented) piece that's linked to in that tweet before coming here to argue with me.

2022-11-28 19:32:36 @robmalouf @fchollet Shieber cites Dreyfus 1979: Hubert Dreyfus (1979, page 100) has made a similar analogy of climbing trees to reach the moon.

2022-11-28 19:29:38 @robmalouf @fchollet @fchollet makes a nice distinction between "cognitive automation", "cognitive assistance" and "cognitive autonomy" --- and I think is compatible with what you are saying. The argument here is against expecting the ladders to bring "cognitive autonomy". I'll look at Shieber's pc.

2022-11-28 19:12:24 @PeterVeep @fchollet Yeah -- I think that's because the motivation of the people building the applications isn't actually to build better solutions to the problem, but to prove that their system can 'learn'. It's exhausting.

2022-11-28 19:07:46 @fchollet Somehow, the current conversation &

2022-11-28 19:06:13 @fchollet All helpful metaphors, I think, for explaining why it's foolish to believe that deep learning (useful as it may be) is a path towards what @fchollet calls "cognitive autonomy". [I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.] >

2022-11-28 19:04:29 Building taller and taller ladders won't get you to the moon -- ? Running faster doesn't get you closer to teleportation -- me ⏱ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet https://t.co/AQc9ZoLizf

2022-11-28 19:00:22 @AngelLamuno Yes!

2022-11-28 14:23:04 @jordilinares I didn't ask what you are aligned with. I was telling you that the answer to your question about the term stochastic parrots can be found in the paper where we introduced that term.

2022-11-28 01:42:44 RT @emilymbender: @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea th…

2022-11-28 01:11:30 @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea that we can (or should even strive to) do scholarship using a "view from nowhere".

2022-11-28 00:14:59 RT @timnitGebru: @jquinonero @emilymbender Turns out its easier to censure research that makes your tech look problematic than stop the rel…

2022-11-27 23:03:48 RT @_ovlb: BTW: Next Friday and Saturday @DAIRInstitute celebrates their first anniversary. Big yay! 9/ [https://t.co/FxssxPVbHx]

2022-11-27 18:43:19 @jordilinares Uh, read our paper?

2022-11-27 15:49:15 @jordilinares Hi! I'm the one who coined that phrase and it was not intended as an insult. It was intended to make clear the difference between what large LMs do and what people claim they do.

2022-11-26 13:46:21 RT @le_science4all: How dangerous are large AI models? The #hype is accelerating rushed deployments, which are causing traumas for users,…


2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galatica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is ✨science✨. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning Their meaning is not the world Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 That was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-11-29 04:57:22 @fchollet (two tweets up, "isn't a path" should be "is a path")

2022-11-29 04:50:08 @yuvalpi @fchollet Yeah, probably. Authentic tweet :)

2022-11-29 03:43:21 @davelewisdotir I see. I don't think it's cute at all --- it's disrespectful and furthermore plays into the #AIhype that is plaguing the discourse. It also demonstrates how text synthesis machines could be used to DOS public comment processes.

2022-11-29 03:34:11 @davelewisdotir So, "boys will be boys" is what you're saying here? Nice.

2022-11-28 22:51:12 @neilturkewitz Spam and could be turned into a DOS attack...

2022-11-28 21:39:11 See also: https://t.co/a24HdKWhIS

2022-11-28 21:39:04 Once again, for those who seem to have missed it: Language models are not the type of thing that can testify/offer public comment/any similar sort of speech act, because they are not the sort of thing that can have a public commitment. This is atrocious. https://t.co/YrHFuy183M

2022-11-28 20:03:06 @tdietterich @fchollet I urge you to read the short (and well presented) piece that linked to in that tweet before coming here to argue with me.

2022-11-28 19:32:36 @robmalouf @fchollet Shieber cites Dreyfus 1979: Hubert Dreyfus (1979, page 100) has made a similar analogy of climbing trees to reach the moon.

2022-11-28 19:29:38 @robmalouf @fchollet @fchollet makes a nice distinction between "cognitive automation", "cognitive assistance" and "cognitive autonomy" --- and I think is compatible with what you are saying. The argument here is against expecting the ladders to bring "cognitive autonomy". I'll look at Shieber's pc.

2022-11-28 19:12:24 @PeterVeep @fchollet Yeah -- I think that's because the motivation of the people building the applications isn't actually to build better solutions to the problem, but to prove that their system can 'learn'. It's exhausting.

2022-11-28 19:07:46 @fchollet Somehow, the current conversation &

2022-11-28 19:06:13 @fchollet All helpful metaphors, I think, for explaining why it's foolish to believe that deep learning (useful as it may be) isn't a path towards what @fchollet calls "cognitive autonomy". [I couldn't quickly turn up the source for the ladder one, and would be grateful for leads.] >

2022-11-28 19:04:29 Building taller and taller ladders won't get you to the moon -- ? Running faster doesn't get you closer to teleportation -- me ⏱ "dramatically improving the precision or efficiency of clock technology does not lead to a time travel device" -- @fchollet https://t.co/AQc9ZoLizf

2022-11-28 19:00:22 @AngelLamuno Yes!

2022-11-28 14:23:04 @jordilinares I didn't ask what you are aligned with. I was telling you that the answer to your question about the term stochastic parrots can be found in the paper where we introduced that term.

2022-11-28 01:42:44 RT @emilymbender: @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea th…

2022-11-28 01:11:30 @kateweaverUT I'm right there with you. Among other things, not hiding the author in the text helps to dispel the idea that we can (or should even strive to) do scholarship using a "view from nowhere".

2022-11-28 00:14:59 RT @timnitGebru: @jquinonero @emilymbender Turns out its easier to censure research that makes your tech look problematic than stop the rel…

2022-11-27 23:03:48 RT @_ovlb: BTW: Next Friday and Saturday @DAIRInstitute celebrates their first anniversary. Big yay! 9/ [https://t.co/FxssxPVbHx]

2022-11-27 18:43:19 @jordilinares Uh, read our paper?

2022-11-27 15:49:15 @jordilinares Hi! I'm the one who coined that phrase and it was not intended as an insult. It was intended to make clear the difference between what large LMs do and what people claim they do.

2022-11-26 13:46:21 RT @le_science4all: How dangerous are large AI models? The #hype is accelerating rushed deployments, which are causing traumas for users,…

2022-11-25 17:30:50 RT @timnitGebru: All these so-called #AISafety institutions doing the opposite of “safety” are funded and staffed by longtermists and effec…

2022-11-25 14:05:26 @sergia_ch But I do agree that it is disappointing to interact with engineers who refuse to see that / refuse to actually engage with the substantive critique of what they built (and the process they used to build &

2022-11-25 14:05:16 @sergia_ch I'd go a different direction here. I don't think Galactica is fixable, because there is a fundamental mismatch between what they said they wanted to build and the tech they chose. >

2022-11-24 14:27:44 @LucianaBenotti I won't be there but I hope you have a great time!

2022-11-23 20:36:43 @_dmh @evanmiltenburg @EhudReiter @sebgehr @huggingface @thiagocasfer Thanks!

2022-11-23 16:18:08 Coming up in about an hour! Joins us at https://t.co/VF7TD6tw5c #MathyMath #AIHype #Galactica w/@alexhanna https://t.co/MBSAhk0hd4

2022-11-23 15:16:30 RT @MsKellyMHayes: Every year I invite people who are celebrating the colonial holiday to do something in support of Native people. Indigen…

2022-11-23 05:41:39 RT @LeonDerczynski: How does "responsible disclosure" look, for things that make machine learning models behave in undesirable way? #machin…

2022-11-23 05:40:15 RT @timnitGebru: That's right first it was a $100k blogpost prize that was announced by FTX, I suppose using people's stolen funds to save…

2022-11-23 05:39:58 RT @timnitGebru: Have you ever heard of $100k in best paper awards in any academic conference? Even in ML with all the $$ flowing in the fi…

2022-11-23 05:15:29 @nitin Actually, no. That's not the definition of mansplaining. I wonder how often you've mansplained while thinking you weren't....

2022-11-23 01:23:52 RT @mer__edith: @jackson_blum Whatever the future of Twitter DMs, I'll continue to use Signal + admonish others to do the same. Signal is a…

2022-11-22 22:51:35 @evanmiltenburg @EhudReiter @sebgehr @huggingface @_dmh Thanks -- I'll let them know :)

2022-11-22 22:48:31 @evanmiltenburg @EhudReiter @sebgehr @huggingface Very helpful -- thank you! This query is for a student who is working on a rule-based system and trying to find appropriate datasets that aren't basically seq2seq....

2022-11-22 22:34:26 @evanmiltenburg @EhudReiter Thank you! This is fantastic.

2022-11-22 22:14:06 Seeking an #NLG dataset: What are your favorite datasets pairing non-linguistic structured-data input with linguistic output (in English, ideally, but otherwise still welcome)? #nlproc

2022-11-22 21:28:46 @OneAjeetSingh @timnitGebru I haven't read the paper, but it sounds to me that they found a use case for on-topic bullshitting.

2022-11-22 14:15:37 RT @emilymbender: @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 14:15:17 RT @emilymbender: We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get…

2022-11-22 02:01:32 @alexhanna To tune in to watch live on Wednesday, join us at https://t.co/VF7TD6tw5c 9:30am Pacific, W Nov 23

2022-11-22 02:00:11 We'll be doing our next episode on #Galactica this Wednesday at 9:30am Pacific. If you're new to #MAIHT3k and want to get a sense of what it's about, check out episode 1, here! (Fun to look back on this. @alexhanna and I did not imagine we'd be turning this into a series...) https://t.co/xg2Y8FIpON

2022-11-22 01:57:41 RT @alexhanna: Poking around with PeerTube as a YouTube alternative to give more of the Fediverse a shot. We've reposted this first episod…

2022-11-21 19:40:01 RT @mer__edith: Unlike almost all other tech, Signal is supported by donations from people who rely on it, not by monetizing surveillance.…

2022-11-21 17:45:43 Calling all #NLProc folks in EMEA. Would you like #ACL2025nlp to come to you? Please consider putting in a proposal to host. Things have been reorganized to require less effort from the local hosts: https://t.co/ereeIkI39r

2022-11-21 17:05:33 @strwbilly @aryaman2020 Looking at the page source for that article, where you quote a tweet that QTs me, I don't see the text of my tweet.

2022-11-21 14:14:49 It seems we're still stuck in this weird state where NLP is viewed from within the goals of people working on ML, to the detriment of the science on both sides (NLP and ML). https://t.co/d3LJJ8EZf5

2022-11-21 14:10:00 1) Understanding the problem the algorithm is meant to address 2) Building effective evaluation (test sets, metrics) 3) Understanding how the tech we build fits into its social context

2022-11-21 14:07:55 I guess I represent side "Why is the question even framed this way?" DL is a set of algorithms that can be used in different contexts. NLP is a set of applications. Asking this question suggests that none of the following are part of NLP: https://t.co/R1vlQLlihG

2022-11-21 14:06:15 RT @Abebab: How to avoid/displace accountability and responsibility for LLMs bingo https://t.co/Ql6NYSceRa

2022-11-21 14:06:11 @Abebab lol. And not only did you manage the full 5x5, but I think in this case, all of the squares reflect comments from one person. @KarnFort1 and I were harvesting across many different sources...

2022-11-21 13:39:17 RT @timnitGebru: Chief scientist at @Meta, who gaslights victims of genocide lying repeatedly saying "94% of hate speech on facebook is tak…

2022-11-21 01:09:15 RT @timnitGebru: Not like we wrote a paper about that or anything which we got fired for. What COULD go wrong? https://t.co/krVsH5dB7g

2022-11-21 01:09:09 RT @timnitGebru: Not like @chirag_shah and @emilymbender wrote a paper: https://t.co/Zq7abViQbE

2022-11-21 00:50:08 @sbmaruf @ylecun @pcastr @carlesgelada When you say that I "feel" these arguments, you are suggesting that this isn't the result of scholarly work. You can read the argument in this paper: https://t.co/cYj1vKDms1 NB: That was peer reviewed, it's not some arXiv preprint.

2022-11-21 00:37:18 @sbmaruf @ylecun @pcastr @carlesgelada The fact students can be working on PhDs in ML and not actually be taught the relationship between "language models" and actual human linguistic activity is a problem.

2022-11-21 00:36:30 @sbmaruf @ylecun @pcastr @carlesgelada As for "make shit up": The training objective of language models is word forms given context, nothing else. This is not reasoning about the world, it's not "predicting" anything, it's not "writing scientific articles", etc.

2022-11-21 00:35:29 @sbmaruf @ylecun @pcastr @carlesgelada Why do you need to study them? The world doesn't need 120B parameter language models. For PhD students, I recommend learning the nature of the problem you are trying to solve and then looking critically at how current tech matches (or doesn't) that problem. >

2022-11-20 22:48:43 RT @chirag_shah: Meta's #Galactica is yet another attempt to use LLMs in ways that is both irresponsible and harmful. I have a feeling that…

2022-11-20 17:35:36 @marylgray @sjjphd @schock @mlmillerphd Check your DMs :)

2022-11-20 17:24:21 this https://t.co/wbJJYGf4kL

2022-11-20 16:03:36 @carlesgelada @ylecun @pcastr No, the project is doomed from the start, because you don't do science by manipulating strings of letters.

2022-11-20 15:51:08 @ylecun @pcastr @carlesgelada Oh, I do. It pisses me off every time I type "How about we meet on..." and suggests some random day of the week. But also: that's lower volume and thus less problematic.

2022-11-20 15:46:13 RT @emilymbender: @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative…

2022-11-20 15:44:53 @ylecun @pcastr @carlesgelada In other words, #Galactica is designed to make shit up and to do so fluently. Even when it is "working" it's just making shit up that happens to match the form (and not just the style) of what we were looking for.

2022-11-20 15:43:55 @ylecun @pcastr @carlesgelada Regardless of what you prompt it with, what comes out is untethered from any communicative intent, understanding of the science, etc etc because language only model the distribution of the form of language. >

2022-11-20 14:39:52 RT @emilymbender: Hashtags work better when you spell them right: #Galactica

2022-11-20 14:34:14 RT @xriskology: Here's my newest Salon article, about the FTX debacle and the longtermist ideology behind it. There are several crucial poi…

2022-11-20 06:48:23 RT @alexhanna: If I was the head of AI at Meta I would simply not keep on digging myself into a deeper hole defending a tool that my team t…

2022-11-20 06:44:54 RT @Abebab: remember longtermism is not a concern for AI ethics but an abstract contemplation &

2022-11-20 00:24:45 RT @ICLDC_HI: ComputEL-6: final call for papers! Submission deadline is Nov. 20 —>

2022-11-19 23:26:59 @holdspacefree Yikes.

2022-11-19 23:21:59 Hashtags work better when you spell them right: #Galactica https://t.co/sLJ8iaNF7X

2022-11-19 23:12:41 Looking forward to the coming episode of Mystery AI Hype Theater 3000 with @alexhanna On deck: #Galactica of course, plus our new segment "What in the Fresh AI Hell?" Next Wednesday, 11/23, 9:30am Pacific. https://t.co/VF7TD6tw5c #NLProc #AIhype #MAIHT3k https://t.co/1jWIChBzJp

2022-11-19 22:40:55 @intelwire As I understand it, this guy is in charge of META AI. So yeah.

2022-11-19 22:35:51 Did you do any user studies? Did you measure whether people who are looking for writing assistance (working hard in an L2, up against a paper submission deadline, tired) would be able to accurately evaluate if the text they are submitting accurately expresses their ideas?

2022-11-19 22:34:18 So close and yet so far... BEFORE dumping lots of resources into building some tech, we should be asking questions like: What are the failure modes? Who will be hurt if the system malfunctions? Who will be hurt if it functions as intended? As for your use case: https://t.co/6hPU3nv5kV

2022-11-19 21:25:29 RT @emilymbender: Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, f…

2022-11-19 19:07:44 @isosteph This is yummy, impressive looking, and I think pretty easy. As in: I'm not really much of a cook but have had success with it. https://t.co/wThVrq7mlJ

2022-11-19 05:18:16 @Jeff_Sparrow I put it in the next tweet in the thread: https://t.co/3doaE3kCm0

2022-11-19 04:27:20 @Jeff_Sparrow Yes.

2022-11-19 04:25:44 I clicked the link to see what that study was ... and found something from the London School of Economics and Political Science that has a *blurb* on the webpage talking up how groundbreaking the report is. Sure looks credible to me (not). https://t.co/n5XiDXgeAw https://t.co/8gvbSEP0bd

2022-11-19 04:23:25 And while I'm here, this paragraph is ridiculously full of #AIhype: >

2022-11-19 04:21:17 The paper lives here, in the ACM Digital Library: https://t.co/kwACyKvDtL Not that weird rando URL that you gave. >

2022-11-19 04:20:28 Hey @Jeff_Sparrow I appreciate you actually citing our paper as the source of the phrase "stochastic parrots", but it would be better form to link to its actual published location. https://t.co/DmRThcHDEz >

2022-11-18 23:50:24 @lebovitzben @mmitchell_ai @timnitGebru @Grady_Booch Clearly unqualified for his position. Relevant here: https://t.co/fVMYOVZb0d

2022-11-18 19:09:11 I'm glad to see my tweet about the #Galactica debacle quoted here, but I wonder: what happens to this article when Twitter goes down? @strwbilly do you know? Do the embedded tweets just disappear, or is your publishing platform robust to that? https://t.co/88FzfskH8f

2022-11-18 17:48:01 @josephrobison I think it's a mistake to have our information access systems driven by for-profit companies. Bare minimum is we should invest in something public.

2022-11-18 17:20:18 @josephrobison Misguided and dangerous.

2022-11-18 15:53:31 @BoredAstronaut Not harmless: https://t.co/kwACyKvDtL

2022-11-18 14:50:57 RT @emilymbender: And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and unders…

2022-11-18 14:49:19 RT @emilymbender: "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordi…

2022-11-18 14:48:18 RT @emilymbender: I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the mark…

2022-11-18 14:06:31 RT @timnitGebru: Whatever you do, don't join Jack Dorsey's thing. He totally believes in Elon's plan to "extend the light of consciousness"…

2022-11-18 03:41:38 @ruthstarkman @mmitchell_ai @Stanford Your students are so awesome!

2022-11-18 03:41:29 RT @ruthstarkman: Students made several great tiktoks for @mmitchell_ai and @emilymbender's Dec 2 3-4:30 PM talk @Stanford. Here's one. BE…

2022-11-18 03:41:16 Feel like I've got to add my toast to Twitter before it goes under. I have really appreciated this as a space to learn and to practice public scholarship. Thank you all.

2022-11-17 21:56:26 @GaryMarcus @timnitGebru Choosing not to be a sea lion is free. I recommend it. Choosing not to engage with sea lioning, also free and freeing. Ciao!

2022-11-17 21:53:44 Choosing not to sealion is ... free. And such a good deal!

2022-11-17 21:50:59 @GaryMarcus @timnitGebru Gary, your white privilege is showing. Timnit (and others) have explained over and over where the eugenics is. You keep coming back with "due process" sealioning. You can choose to learn or you can choose to keep flaunting your unwillingness to listen.

2022-11-17 21:27:35 RT @mmitchell_ai: When we use "safety filters" that censor content with a broad brush, we create a less safe world: marginalizing large swa…

2022-11-17 21:17:44 And similarly flailing around "citation". Just because you're wallowing in the mud/kicking up dust doesn't mean the distinctions aren't clear to other people! https://t.co/XavSVJ5N96

2022-11-17 21:17:03 And then some very weird flailing around "ethics": https://t.co/jtfHYTpSly >

2022-11-17 21:16:14 Backing off from that claim pretty quickly: >

2022-11-17 21:15:30 I don't follow this so I missed his remarks about it in real time. Apparently the hype wasn't coming only from the marketing department... >

2022-11-17 15:33:04 RT @BJMcP: Great thread on both the dangers of AI language models and the ways the language used to describe them contributes to illusions…

2022-11-17 15:00:39 And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading &

2022-11-17 14:58:59 Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens. >

2022-11-17 14:57:32 Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes" = see the effect from a) a live demo that b) impinges on something where they do have skin in the game. >

2022-11-17 14:56:27 So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*. >

2022-11-17 14:55:39 And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression. >

2022-11-17 14:54:54 I looked back at Section 6 of Stochastic Parrots, which begins: >

2022-11-17 14:53:32 "Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed. So what gives? >

2022-11-17 14:51:43 RT @roxannedarling: Question of the Day:

2022-11-17 14:46:24 RT @emilymbender: At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops bei…

2022-11-17 14:44:55 RT @emilymbender: And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice he…

2022-11-17 14:20:46 RT @dmonett: "AI is a story we computer scientists made up to help us get funding ... It was a pragmatic theater. But now AI has become a f…

2022-11-17 06:05:48 Using the phrase "AI speech police" to refer to automated content moderation is certainly... a choice. https://t.co/P7AJmqxIMg

2022-11-17 05:58:22 RT @dmonett: "Lazy, negligent, unsafe."

2022-11-17 05:22:43 The word forms are not their meaning. Their meaning is not the world. Modeling the distribution of word forms is not the same as modeling (knowledge of) the world. And yet somehow this basic point is missed over and over and over... https://t.co/Hy5HBgeCbo

2022-11-17 05:12:08 Fundamental point that ~all people who see LLMs as "AI" seem to be missing: The *only* knowledge that an LLM can truthfully be said to have is information about the distribution of word forms.
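[Editor's note] The claim in the two tweets above (a language model "knows" only the distribution of word forms) can be made concrete with a toy sketch. This is a hypothetical bigram model over a made-up corpus, not any real LLM: it emits fluent-looking strings purely from word-form co-occurrence counts, with no model of the world behind them.

```python
import random
from collections import defaultdict

# Toy corpus: the model will only ever learn which word forms follow which.
corpus = ("the model predicts the next word . "
          "the model has no model of the world . "
          "the world is not a string of words .").split()

# "Training": count word-form co-occurrences -- nothing about meaning or truth.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, n, seed=0):
    """Sample n continuations of `start` by walking the bigram table."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate("the", 8))
```

Whatever the walk produces, any truth a reader finds in the output is read into it by the reader; the model itself has only form statistics, which is the stochastic-parrot point.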

2022-11-17 05:09:52 RT @moreisdifferent: PROBLEM: "Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and…

2022-11-17 02:39:26 @josephrobison Hey look! It's another probably-bought-his-blue-check telling me that Google can make search out of LMs. No, they can't: https://t.co/rkDjc4kDxj

2022-11-17 01:30:10 @MuhammedKambal Yes --- as many of us have been saying for quite some time now.

2022-11-17 01:26:23 @MuhammedKambal How about starting by not referring to any of these systems as "AI", and thus helping your audience learn how to cut through the AI hype?

2022-11-17 01:21:06 @MuhammedKambal Well no shit. Of course it's missing that. It's a word form prediction machine and nothing more. What led you to believe otherwise?

2022-11-17 01:19:46 The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:19:16 It was furthermore entirely predictable that it would behave this way. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:18:51 "ooh look what false/embarrassing/obviously wrong thing Galactica said" is also not interesting folks. It's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 01:17:02 @MuhammedKambal The only exception to that I've seen is @willie_agnew 's observation about the filters that the engineers designed in, and how that relates to their (the engineers') view of what science is. So, the expression of the ideas of the people behind it. https://t.co/iC6u5VATw9

2022-11-17 01:15:53 @MuhammedKambal That was furthermore entirely predictable. The story is not in whatever specific things it says, but rather in the way they are promoting it. >

2022-11-17 01:15:27 @MuhammedKambal Why? People have been playing "gotcha" with Meta AI's random bullshit generator all day. It's not interesting --- it's not surprising in the least that it generates text that is both fluent and (when interpreted by a human) wrong. >

2022-11-17 00:52:36 @ChombaBupe I mean, it went off the rails as soon as they got to "an AI"....

2022-11-17 00:52:06 @Sanjay_Uppal This is not the original remark you seem to think it is. Also, comparing humans to LLMs is no better than the reverse.

2022-11-17 00:07:29 In my inbox today, an email about a "staffing opportunity" that ended with "PS: If you don't want to hear from me anymore, you can just tell me" I took great pleasure in not only marking that as spam, but also replying and telling them as much.

2022-11-16 23:59:23 At what point does this garbage "science" become embarrassing enough that "researcher" at one of these tech cos stops being a prestigious job? https://t.co/Zq6SNCaBS5

2022-11-16 23:52:55 RT @zehavoc: Hey #NLProc people, don't miss @anas_ant talk next Friday at 11am "NLP Beyond the Top 100 languages" ** and ** @ben_mlr's thes…

2022-11-16 22:31:04 RT @alexhanna: Next week! Mystery AI Hype Theater 3k will return. @emilymbender and I will talk language models and search, and continue ou…

2022-11-16 22:29:47 This is so damning. https://t.co/iC6u5VATw9

2022-11-16 22:29:33 RT @willie_agnew: Refuses to say anything about queer theory, CRT, racism, or AIDS, despite large bodies of highly influential papers in th…

2022-11-16 22:29:28 @kylebrussell I mean descriptively, yes, people are doing that. Normatively, they 100% should not be. But thanks for promoting my tweets, I guess?

2022-11-16 22:19:15 @kylebrussell Your bio begins with "Always learning | Working on AI for game dev" and you're posting like an AI bro. I'm not inclined to take you for a trustworthy journalist. And yeah, the blue checks are now meaningless. That sucks.

2022-11-16 22:16:11 @heyorson There's a non-zero possibility that your garden will start growing ice cream cones next spring, too. What of it?

2022-11-16 22:15:33 @zehavoc I'm more active over there now, definitely. Still reading Twitter, but less inclined to contribute, given that it seems to be on the brink of collapsing and there's nothing I can do about that.

2022-11-16 22:14:34 @kylebrussell Hey there Mr Bought a Bluecheck --- no, "AI is still young" is not an excuse for putting ridiculous demos out into the world and making outlandish claims about them.

2022-11-16 22:13:55 @andrewprollings "got it wrong" "synthesized ungrounded text" "made shit up"

2022-11-16 22:13:17 @NannaInie Yep.

2022-11-16 22:10:36 @zehavoc Well, not fully syncing (and definitely not cross-posting), but still putting some stuff over here. I like Tweetdeck better than Mastodon's "advanced" mode, but do like some things better over there.

2022-11-16 22:10:02 RT @mark_riedl: Maybe don’t name your model after the Encyclopedia Galactica unless it is good enough to be a trusted source of knowledge?…

2022-11-16 21:50:34 @JLucasMcKay "get it wrong"

2022-11-16 21:44:53 RT @dmonett: There are so many, so many, many, many red flags alone in the "Get started" and "Limitations" screenshots, that I actually did…

2022-11-16 21:34:11 @HeidyKhlaaf @Meaningness Here ya go: https://t.co/Zq6SNCaBS5

2022-11-15 21:52:22 RT @chirag_shah: Several positions open at @uw_ischool. Come, be a part of an outstanding #iSchool, in the amazing city of Seattle! https:/…

2022-12-07 21:17:28 @alexhanna @timnitGebru Oh what an awful experience! I'm so sorry that you all were subjected to this and also in awe of your responses (recording in the moment, documenting here).

2022-12-07 16:58:12 @Miles_Brundage So instead of calling that out, or you know, just walking by, you decided to play along? There are people out there calling "stochastic parrot" an insult (to "AI" systems). And you're out there promoting ChatGPT as "an AI". The inference was easy.

2022-12-07 16:56:54 @Miles_Brundage "It was just a joke" --- are you hearing yourself?

2022-12-07 16:50:57 @betsysneller Cheating ofc because "Down Under" there is functioning as an NP.

2022-12-07 16:50:37 @Miles_Brundage Making light of actual oppression = not funny?

2022-12-07 16:50:13 @betsysneller Good for introducing a discussion about descriptive v. prescriptive rules. Also, I add that you can cheat and make it a string of 8 prepositions if the book is about Australia: "But Dad, what did you bring the book I didn't want to be read to out of about Down Under up for?" >

2022-12-07 16:49:05 @betsysneller "But Dad, what did you bring the book I didn't want to be read to out of up for?" >

2022-12-07 16:48:36 @betsysneller There was a kid who lived in a two story house and always got read a story at bedtime. Books on the main floor, bedrooms on the second. One day, the kid's dad brings up a poor choice of story and the kid says:

2022-12-07 15:48:53 Corrected link: https://t.co/hWtQ2z8Mw8 by @willknight https://t.co/hh0bmg8t02

2022-12-07 15:48:31 And I see that the link at the top of this thread is broken (copy paste error on my part). Here is the article: https://t.co/hWtQ2z8Mw8

2022-12-07 15:48:08 It's frustrating that they don't seem to consider that a language model is not fit for purpose here. This isn't something that can be fixed (even if doing so is challenging). It's a fundamental design flaw. >

2022-12-07 15:47:57 They give this disclaimer "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging [...]" >

2022-12-07 15:47:36 Re difference to other chatbots: The only possible difference I see is that the training regimen they developed led to a system that might seem more trustworthy, despite still being completely unsuited to the use cases they are (implicitly) suggesting. >

2022-12-07 15:47:01 @kashhill @willknight Seems like two copies of the link somehow? Here it is: https://t.co/hWtQ2z8Mw8

2022-12-07 15:46:36 Furthermore, they situate it as "the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems." --- as if this were an "AI system" (with all that suggests) rather than a text-synthesis machine. >

2022-12-07 15:46:26 Anyway, longer version of what I said to Will: OpenAI was more cautious about it than Meta with Galactica, but if you look at the examples in their blog post announcing ChatGPT, they are clearly suggesting that it should be used to answer questions. >

2022-12-07 15:44:37 @willknight A tech CEO is not the source to interview about whether chatbots could be effective therapists. What you need is someone who studies such therapy AND understands that the chatbot has no actual understanding. Then you could get an accurate appraisal. My guess: *shudder* >

2022-12-07 15:42:50 @willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists. >

2022-12-07 15:39:59 @willknight The 1st is somewhat subtle. Saying this ability has been "unlocked" paints a picture where there is a pathway to some "AI" and what technologists are doing is figuring out how to follow that path (with LMs, no less!). SciFi movies are not in fact documentaries from the future. >

2022-12-07 15:37:19 I appreciated the chance to have my say in this article by @willknight but I need to push back on a couple of things: https://t.co/cbelYjZTyF >

2022-12-08 00:58:14 Meanwhile, woe to the reviewers who now have to also consider the possibility that the text they are reading isn't actually grounded in author intent, but just inserted to "sound plausible". And woe to the field whose academic discourse gets polluted with this.

2022-12-08 00:57:27 I sure hope you also advise your students that they (and you, if you are a co-author) are taking responsibility for every word that is in the paper --- that the words represent their ideas (not anyone else's) and that they stand by the accuracy of the statements. >

2022-12-07 22:48:53 @timnitGebru Timnit, how awful. I'm so sorry the DAIR 1st anniversary celebration was marred like this. And I am in awe at your brave responses.

2022-12-07 22:27:26 @Etyma1010 @betsysneller I should say I didn't invent this (though it's possible that the "about Down Under" addition is mine), but I don't remember who I got it from....

2022-12-07 22:26:53 @Etyma1010 @betsysneller It's a great S! I usually use it in the context of talking about prescriptive vs. descriptive rules, in particular, the rule against ending a sentence with a preposition. If that were a real rule of English, that sentence would be gibberish, but it's perfectly comprehensible.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 That people are using it does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There aren't even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever-increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is your demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-12-08 05:10:51 @schock Ugh, gross. Thanks for documenting. Also, is it just me, or do all of these ChatGPT examples seem to have the same surface tone (even while it's saying vile things)?

2022-12-08 03:34:06 RT @sivavaid: Part of the reason so many people misunderstand and misuse "artificial intelligence" is that it was misnamed "artificial inte…

2022-12-08 03:33:59 RT @safiyanoble: That part. https://t.co/QVoYHFOQIF

2022-12-08 02:02:53 RT @michaelgaubrey: Everyone should go listen to @emilymbender's interview on @FactuallyPod.

2022-12-08 01:07:31 @Etyma1010 @betsysneller That sounds very plausible!!

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP sought fit to retweet, btw.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 People are using it does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are build on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is you demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-12-08 05:10:51 @schock Ugh, gross. Thanks for documenting. Also, is it just me, or do all of these ChatGPT examples seem to have the same surface tone (even while it's saying vile things)?

2022-12-08 03:34:06 RT @sivavaid: Part of the reason so many people misunderstand and misuse "artificial intelligence" is that it was misnamed "artificial inte…

2022-12-08 03:33:59 RT @safiyanoble: That part. https://t.co/QVoYHFOQIF

2022-12-08 02:02:53 RT @michaelgaubrey: Everyone should go listen to @emilymbender's interview on @FactuallyPod.

2022-12-08 01:07:31 @Etyma1010 @betsysneller That sounds very plausible!!

2022-12-08 00:58:14 Meanwhile, woe to the reviewers who now have to also consider the possibility that the text they are reading isn't actually grounded in author intent, but just inserted to "sound plausible". And woe to the field whose academic discourse gets polluted with this.

2022-12-08 00:57:27 I sure hope you also advise your students that they (and you, if you are a co-author) are taking responsibility for every word that is in the paper --- that the words represent their ideas (not anyone else's) and that they stand by the accuracy of the statements. >

2022-12-07 22:48:53 @timnitGebru Timnit, how awful. I'm so sorry the DAIR 1st anniversary celebration was marred like this. And I am in awe at your brave responses.

2022-12-07 22:27:26 @Etyma1010 @betsysneller I should say I didn't invent this (though it's possible that the "about Down Under" addition is mine), but I don't remember who I got it from....

2022-12-07 22:26:53 @Etyma1010 @betsysneller It's a great S! I usually use it in the context of talking about prescriptive vs. descriptive rules, in particular, the rule against ending a sentence with a preposition. If that were a real rule of English, that sentence would be gibberish, but it's perfectly comprehensible.

2022-12-07 21:17:28 @alexhanna @timnitGebru Oh what an awful experience! I'm so sorry that you all were subjected to this and also in awe of your responses (recording in the moment, documenting here).

2022-12-07 16:58:12 @Miles_Brundage So instead of calling that out, or you know, just walking by, you decided to play along? There are people out there calling "stochastic parrot" an insult (to "AI" systems). And you're out there promoting ChatGPT as "an AI". The inference was easy.

2022-12-07 16:56:54 @Miles_Brundage "It was just a joke" --- are you hearing yourself?

2022-12-07 16:50:57 @betsysneller Cheating ofc because "Down Under" there is functioning as an NP.

2022-12-07 16:50:37 @Miles_Brundage Making light of actual oppression = not funny?

2022-12-07 16:50:13 @betsysneller Good for introducing a discussion about descriptive v. prescriptive rules. Also, I add that you can cheat and make it a string of 8 prepositions if the book is about Australia: "But Dad, what did you bring the book I didn't want to be read to out of about Down Under up for?" >

2022-12-07 16:49:05 @betsysneller "But Dad, what did you bring the book I didn't want to be read to out of up for?" >

2022-12-07 16:48:36 @betsysneller There was a kid who lived in a two story house and always got read a story at bedtime. Books on the main floor, bedrooms on the second. One day, the kid's dad brings up a poor choice of story and the kid says:

2022-12-07 15:48:53 Corrected link: https://t.co/hWtQ2z8Mw8 by @willknight https://t.co/hh0bmg8t02

2022-12-07 15:48:31 And I see that the link at the top of this thread is broken (copy paste error on my part). Here is the article: https://t.co/hWtQ2z8Mw8

2022-12-07 15:48:08 It's frustrating that they don't seem to consider that a language model is not fit for purpose here. This isn't something that can be fixed (even if doing so is challenging). It's a fundamental design flaw. >

2022-12-07 15:47:57 They give this disclaimer "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging [...]" >

2022-12-07 15:47:36 Re difference to other chatbots: The only possible difference I see is that the training regimen they developed led to a system that might seem more trustworthy, despite still being completely unsuited to the use cases they are (implicitly) suggesting. >

2022-12-07 15:47:01 @kashhill @willknight Seems like two copies of the link somehow? Here it is: https://t.co/hWtQ2z8Mw8

2022-12-07 15:46:36 Furthermore, they situate it as "the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems." --- as if this were an "AI system" (with all that suggests) rather than a text-synthesis machine. >

2022-12-07 15:46:26 Anyway, longer version of what I said to Will: OpenAI was more cautious about it than Meta with Galactica, but if you look at the examples in their blog post announcing ChatGPT, they are clearly suggesting that it should be used to answer questions. >

2022-12-07 15:44:37 @willknight A tech CEO is not the source to interview about whether chatbots could be effective therapists. What you need is someone who studies such therapy AND understands that the chatbot has no actual understanding. Then you could get an accurate appraisal. My guess: *shudder* >

2022-12-07 15:42:50 @willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists. >

2022-12-07 15:39:59 @willknight The 1st is somewhat subtle. Saying this ability has been "unlocked" paints a picture where there is a pathway to some "AI" and what technologists are doing is figuring out how to follow that path (with LMs, no less!). SciFi movies are not in fact documentaries from the future. >

2022-12-07 15:37:19 I appreciated the chance to have my say in this article by @willknight but I need to push back on a couple of things: https://t.co/cbelYjZTyF >

2022-12-09 01:11:52 @spacemanidol @amahabal No, it's not about the name. It's about the way the systems are built and what they are designed to do.

2022-12-08 22:48:41 @chrmanning I'm not actually referring to your slide, Chris, so much as the way it was framed in the OP's tweet --- which Stanford NLP saw fit to retweet, btw.

2022-12-08 19:34:24 Oh, and for the record, though that tweet came from a small account, I only saw it because Stanford NLP retweeted it. So someone there thought it was a reasonable description too.

2022-12-08 19:25:36 @Raza_Habib496 That people are using it does not entail benefits. Comparing GPT-3 to fundamental physics research is also a strange flex. Finally: as we argue in the Stochastic Parrots paper -- who gets the benefits and who pays the costs? (Not the same people.)

2022-12-08 19:24:49 @Raza_Habib496 Oh, I checked your bio first. If it had said "PhD student" I probably would have just walked on by. But you've got "CEO" and "30 under 30" so if anything, I bet you like the attention.

2022-12-08 19:00:02 @rharang The astonishing thing about that slide is that the only numbers are about training data + compute. There's not even any claims based on (likely suspect, but that's another story) benchmarks.

2022-12-08 18:57:31 It's wild to me that this is considered a picture of "progress". Progress towards what? What I see is a picture of ever increasing usage of resources + complete disinterest in being able to document and understand the data these things are built on. https://t.co/vVPvH7zal0

2022-12-08 14:33:02 @yoavgo @yuvalpi Here it is: https://t.co/GWKrpgxkPt

2022-12-08 14:31:34 @yoavgo @yuvalpi Oh, and I don't have time to dig it up this morning, but you told Anna something about how you don't really care about stealing ideas --- and seemed to think that our community doesn't either.

2022-12-08 14:31:06 @yoavgo @yuvalpi And even if you offer it as an option: nothing in what you said suggests that you have accounted for what will happen when someone is confronted with something that sounds plausible, and confident --- especially when it's their L2. >

2022-12-08 14:30:29 @yoavgo @yuvalpi Your whole proposal is extremely trollish (as is your demeanor on Twitter

2022-12-08 14:18:47 RT @KimTallBear: Job Opportunity: Associate Professor or Professor, Tenure-Track in Native North American Indigenous Knowledge (NNAIK) at U…

2022-12-08 14:14:42 @amahabal And have you actually used ChatGPT as a writing assistant? How did that go? What did you find useful about it? What do you think a student (just starting out in research) would find useful about it? How would they be able to evaluate its suggestions?

2022-12-08 14:02:07 RT @emilymbender: Apropos of the complete lack of transparency about #ChatGPT 's training data, I'd like to resurface what Batya Friedman a…

2022-12-08 13:51:55 @amahabal No. Why should I?

2022-12-08 13:44:45 @yuvalpi @yoavgo Yes, I read his whole thread. No that doesn't negate what I said.

2022-12-08 13:25:00 RT @marylgray: Calling all scholars interested in a fellowship to reboot social media : ) https://t.co/MApt42p8eB

2022-03-19 13:41:16 @BenPatrickWill So I'm also worried about local gov'ts jumping on Google's apparent benevolence, making unjustified assumptions about "Google magic" replacing teachers and other staff, cutting budgets (or leaving them at abysmally low levels) and getting further mired in underfunded education. 2022-03-19 13:39:14 @BenPatrickWill At UW, we are currently in the process of painfully unwinding certain aspects of how we use Google apps (shared UWNetIDs will no longer have access to Google Drive etc) because Google changed the pricing on us. > 2022-03-19 13:38:09 A must-read about so-called "Google magic", techno-solutionism, and deploying large language models in "auto-pedagogy". On top of all of the issues that @BenPatrickWill identifies, let's also keep in mind that Google is a for-profit co. > 2022-03-18 21:42:55 @mmitchell_ai @Twitter @ruchowdh IKR? If the point is to get me to follow more accounts, then why keep wasting those slots on accounts I'm definitely NOT going to follow? 2022-03-18 21:23:22 @mmitchell_ai @Twitter @ruchowdh +1 for tweetdeck, so you'll see these less frequently at least (if @Twitter won't fix). Not quite the same, but there are also people for me who I have definitely decided not to follow, but @Twitter won't stop suggesting them. A "no thanks" option would be dandy. 2022-03-18 21:21:11 RT @StateBarCA: Today we honor @HabenGirma. As the first deaf-blind person to graduate from @Harvard_Law, Ms. Girma has dedicated her caree… 2022-03-18 13:22:36 @LeonDerczynski @SeeTedTalk And then telling the students we didn't fail that they're on top of some important "hierarchy of knowledge" (Gebru 2021) and they should look down on all others. https://t.co/3x4pqSZhRW 2022-03-17 20:41:07 @RWerpachowski @TaliaRinger @sundarpichai @mmitchell_ai @Google Then why are you jumping in here arguing with Talia? 2022-03-17 20:07:03 @RWerpachowski @TaliaRinger @sundarpichai @mmitchell_ai @Google Did you read the screencap Talia posted? 
You seem to be pretending that Google isn't going around claiming that they're still making ethics foundational to all their producs. 2022-03-17 20:00:39 @AnnaDanielWork That does sound awful. Thank you for perservering, and for replying here. It really sounds like having info to hand like what's in the Tech Worker's Handbook could be really valuable. 2022-03-17 19:12:10 @TaliaRinger @RWerpachowski @sundarpichai @mmitchell_ai Not the entire team, but the leadership of that team. But Talia's point stands: How can anyone take @Google seriously on anything to do with AI ethics after what they did? 2022-03-17 19:03:55 The juxtaposition of these two points is hilarious too --- AI is so important, it's more important than electricity, it will do things like remind me to have dinner with my family! 2022-03-17 18:30:18 RT @CriticalAI: Interesting article and thread from #CriticalAI ally @emilymbender. Looking forward to her talk at our March 24th #DataOnto… 2022-03-17 18:25:12 From the same article: "Artificial intelligence “is one of the most profound technologies we are working on, as important or more than fire and electricity,” Pichai said." Did anyone have "AI is more profound than electricity" on their #AIhype bingo card? https://t.co/mkFKitZwX0 2022-03-17 18:24:03 @sundarpichai https://t.co/5CdD96gRKH 2022-03-17 18:23:40 Maybe if your company had a better culture around reasonable work/life balance you wouldn't need your calendar to remind you to HAVE DINNER WITH YOUR FAMILY, Sundar. https://t.co/mkFKitZwX0 2022-03-17 18:22:28 So many applications of "AI"/#PSEUDOSCI are trying to solve problems downstream that could be better addressed through prevention. 
Case in point from @sundarpichai 's hypothetical here: https://t.co/XIy9ftfhd1 https://t.co/OU1OHSVxt2 2022-03-17 16:50:36 This was fun---and I think we succeeded in generating discussion :) I particularly appreciate the format that #chiir2022 used, with 8 min videos played at a specific time followed by 12 min of discussion. @chirag_shah good choice of venue! https://t.co/3FdgwmNhif 2022-03-17 15:26:52 @ruthstarkman @GRACEethicsAI I can't wait to read it! 2022-03-17 13:27:51 Second (EMEA-timezone-friendly) presentation of our #chiir2022 paper in 2 hours (8:30am PST)! w/@chirag_shah https://t.co/3FdgwmNhif 2022-03-17 12:18:38 RT @cogsci_soc: Preserving context and user intent in the future of web search with. A Q& 2022-03-17 00:51:48 @cydharrell I think because it picks up at the point where someone is considering whether to become a whistleblower (which makes sense). So if you're writing about what it's good to have written down, that would be a good complement, I think! 2022-03-17 00:50:55 @cydharrell The Tech Worker Handbook is amazing, and will definitely be a cornerstone of what I point to (in the piece I'm currently writing that prompted this query), but one thing I don't see there is "having stuff written down for yourself". > 2022-03-17 00:38:25 @r_a_mckinney Thank you! 2022-03-17 00:29:03 @Dan__McCarthy @IfeomaOzoma Thank you!! 2022-03-17 00:28:44 @cydharrell Thank would be great! 2022-03-17 00:28:30 @grimalkina @cydharrell Ohh -- excellent! 2022-03-17 00:25:37 @cydharrell Is it something you plan to write about/give a linkable talk about? That would be fabulous! 2022-03-17 00:23:57 Seems like something Computer Professionals for Social Responsibility or someone might have put together back in the day, even... 2022-03-17 00:22:55 And: How can I network with like-minded people? Who can I talk to about things I find concerning, to help me think them through > 2022-03-17 00:22:33 Also: Before it is relevant to know, how does whistle-blowing work? 
What risks would I incur and what protections are afforded me by local law? > 2022-03-17 00:22:18 Asking self before starting: What are the bright lines that I will not cross? What are examples of things that I would feel compelled to become a whistle-blower over? > 2022-03-17 00:21:56 Something I feel like should be out there, but I don't know where: Has anyone written up advice to people just starting out in industry about being prepared to be a whistleblower, if necessary? I'm thinking things like > 2022-03-17 00:11:24 @RWerpachowski @keoladonaghy https://t.co/NzOt1npyJB 2022-03-17 00:02:30 @RWerpachowski @keoladonaghy Yeah, so look back at the original post you're replying to, and do some listening & 2022-03-16 23:14:55 @RWerpachowski @keoladonaghy *sigh* As usual, conversations that lack an analysis of power dynamics are just a waste of time. Discourse around "data sovereignty" specifically comes from Indigenous scholars & 2022-03-16 23:05:50 @RWerpachowski @keoladonaghy Do you know how the knowledge of other cultures gets to the library? Two paths: open sharing by the people whose culture it is (fine) or extractive/exploitative research by outsiders (not fine). 2022-03-16 21:59:33 @keoladonaghy Thank you -- yes. Super key point. "Anything in the world" isn't actually Google's to give. 2022-03-16 21:58:11 RT @keoladonaghy: Not to mention the gall of them believing they had the right to access much less give access to another culture's knowled… 2022-03-16 21:41:52 @_alialkhatib Yeah -- nothing in this blog post feels particularly informed by scholarship on pedagogy. And (as always) automated solutions seem like they're trying to clean up issues way downstream when something upstream (e.g. smaller class sizes) would be much more effective. 2022-03-16 21:37:17 And even if they did (they don't), that degree of reach for one company just seems inherently dangerous. Also. What is the scope of "anything in the world"? 
Would @Google really support people learning about the various ways in which their business practices do harm? /fin 2022-03-16 21:36:29 One last one for now: "Learning is personal" but they want to "help everyone in the world learn anything in the world". What grounds do we have to believe that Google has the cultural competence to achieve that? > 2022-03-16 21:32:10 This comes back to the idea (not original to me, but don't have cite to hand) that data, when collected into piles, creates risk, and we should not be collecting it without thinking about and mitigating those risks. 2022-03-16 21:30:39 Second, data collection. What else is happening to this data? Who has access? What is being done to mitigate its use to see students as (collections of) data points, rather than as people? https://t.co/i3qMbAulGp 2022-03-16 21:28:16 What, specifically, is the system doing to get the student "unstuck" in their non-math assignment? What role does the LLM play? How does the way that LLMs absorb various societal biases from their training data affect performance? 2022-03-16 21:27:08 First, I'm super skeptical that learning math by doing problem sets is a good model for learning other kinds of things. And even if it were, the idea that LLMs would support that generalization seems super sketchy. https://t.co/Y6Tlj6QzGi 2022-03-16 21:24:29 Reading the linked blog post, it's all kinds of creepy. Just a couple of examples: https://t.co/ayYVcl3IV2 2022-03-16 21:03:47 RT @mediajustice: Congrats to @timnitGebru on being named one of #TheRecast40's most influential people for exposing the racial bias of AI… 2022-03-16 20:43:57 RT @emilymbender: Poll for #NLProc researchers based in the US. Where do you think the most $$ are coming from funding #NLProc research (re… 2022-03-16 19:32:13 @SashaMTL Thank you! Btw, the published (peer-reviewed) version is here: https://t.co/d9xs3DRCn1 From AIES '21 2022-03-16 16:53:05 Poll for #NLProc researchers based in the US. 
Where do you think the most $$ are coming from funding #NLProc research (regardless of where the research takes place)? M = military + intelligence, G = other gov't spending (incl NSF, NIH, etc), I = industry 2022-03-16 16:48:45 @SScottGraham_ Yeah, no kidding. I guess one approximation would be to look at affiliations + acknowledgments in published papers... 2022-03-16 16:31:57 Does anyone know of any studies quantifying #NLProc research funding by source (national science funding schemes, industry, military/intelligence, etc)? 2022-03-16 15:57:29 @srchvrs @chirag_shah Gee, I remember using search before snippets, and guess what, it was usable! It perhaps took a little more time, but that isn't necessarily a bad thing. Friction can be valuable in information seeking behavior! https://t.co/zl6myTDKN4 2022-03-16 01:43:45 @csdoctorsister Oooh!! Congrats :) 2022-03-15 22:10:20 RT @uwnews: In a new perspective paper, @UW professors @emilymbender and @chirag_shah respond to proposals that reimagine web search as an… 2022-03-15 20:52:00 RT @UW_iSchool: Google is betting there's a big future in speech-based search, but the iSchool's @chirag_shah and @emilymbender of @UWlingu… 2022-03-15 20:23:42 @_amandalynne_ What a drag. But not just a by product of a culture that fosters overwork. There's also the guy interrupting you and not letting you say the thing! That's on him. 2022-03-15 15:10:29 @negar_rz Similar query just a couple of days ago here! https://t.co/JDXozkzIF7 2022-03-15 14:35:49 @chirag_shah @webis_de Finally, the quibble. This is cute, but no, search engines aren't "aware" of anything and even in jest I think it's critical (in the current environment, also true in 2020) to steer clear of such #AIhype. https://t.co/kO99nxCcRj 2022-03-15 14:32:21 @chirag_shah @webis_de fn 10: "The benefit of an end-to-end integration of indexing documents, language modeling, and question answering can be expected to be a severely improved “understanding”." 
'Severely improved' is an odd turn of phrase, but I take it to be an intensifier on the scare quotes. > 2022-03-15 14:31:30 @chirag_shah @webis_de Also Potthast et al: "But for all the new opportunities afforded by these technologies, their repercussions on society due to their large-scale deployment are not well-understood." So much fun living through these large-scale "experiments"... > 2022-03-15 14:30:46 @chirag_shah @webis_de Also Potthast et al: "As no actual conversations are currently supported by conversational search agents, every query is an ad hoc query that is met with one single answer." No. Actual. Conversations. There's a whole study to be done on the perils of aspirational tech names.> 2022-03-15 14:29:47 @chirag_shah Potthast et al (from @webis_de) suggest a standard disclaimer on direct answer responses, which is very well put: “This answer is not necessarily true. It just fits well to your question.” https://t.co/vRaONqJzaZ > > 2022-03-15 14:28:13 Yes, this is great! I'm sorry we didn't find your paper while writing ours. (cc @chirag_shah) A few favorite quotes & 2022-03-15 13:12:58 First presentation of this is today, at 7pm PST! (Which I guess is tomorrow, for those over on the other side of the Date Line.) #chiir2022 https://t.co/3FdgwmNhif 2022-03-15 03:25:56 (Hmm not keynote, but Presidential Address. Not that that really matters...) 2022-03-15 03:23:47 Thank you, @wtimkey8 for starting this new version of the thread. The other one was so awful to read and so hard to look away from.... 2022-03-15 03:23:10 Newmeyer's answer involved Occam's Razor to which I got to reply "But Occam's Razor cuts both ways", much, as I remember it, to the audience's approval. :) 2022-03-15 03:22:24 I forget exactly what I asked, but it must have been something to do with it being an empirical question whether linguistic competence (knowledge of language, as stored in actual brains) really did only concern grammaticality or not. 
> 2022-03-15 03:20:58 I was so pleased to have been put in the same league as Jurafsky that I figured I just had to go ask a question. And I was actually much better positioned to get to the mic to line up than if I hadn't gotten up to go check on my kiddo. > 2022-03-15 03:20:10 Newmeyer's topic was "Grammar is grammar and usage is usage" and I forget the exact details, but Garrett said to me: "Isn't someone like you or Dan Jurafsky going to get up there and...?" > 2022-03-15 03:19:14 I got up from my seat in the standing-room-only crowd to go check on him, and when I came back, ended up standing at the back of the room, next to Andrew Garrett, who I knew from the one-year stint I did at Cal in 2000-2001. > 2022-03-15 03:17:14 Mom was pushing the baby around the hotel ballroom level hallways in the stroller trying to keep him happy, but he was NOT happy and I could hear him. > 2022-03-15 03:16:32 LSA 2003, Newmeyer's keynote. I was a few years in to my academic job search and attending the conference with my 9 month old son and my mom to look after him. > 2022-03-15 03:13:02 RT @lukobe: Q& 2022-03-15 00:10:23 @joeltruher @farhangkassaei @cciancutti @oheckmann Thank you! 2022-03-15 00:10:12 RT @joeltruher: As a "search old-timer," I think this paper is fantastic! @farhangkassaei @cciancutti @oheckmann https://t.co/0hT8kcBYHR 2022-03-14 22:54:10 RT @alexiswellwood: Registration for NASSLLI 2022 @ USC is now open! Early bird rates are in effect through April 7. We hope to welcome as… 2022-03-14 22:49:17 RT @emilymbender: Looking forward to #chiir2022 this week and to presenting "Situating Search", with @chirag_shah https://t.co/rkDjc4BGzj 1/ 2022-03-14 19:51:15 RT @aclmeeting: Good news everyone! Findings of ACL 2022 papers will have the opportunity to be presented in a special poster session at AC… 2022-03-14 19:12:10 RT @laurenkirschman: Q& 2022-03-14 16:31:21 Okay, so the deal is that that was actually a tutorial not a session. 
The sessions start after the welcome "tomorrow" (which is 5pm today on the West Coast). Thanks @delsweil for sorting me out! 2022-03-14 16:18:54 I find it very nerve wracking when the interfaces to online conferences are unclear ... like I'm meant to be somewhere, but I can't figure out where, nor can I figure out why I can't figure it out. Also, no helpdesk that I can see, so no one to ask... #chiir2022 2022-03-14 16:17:45 Also, there don't seem to be any papers listed in that session, nor "Conversational Information Seeking: Theory and Evaluation (session 2)" this afternoon. Maybe these are just phantom calendar entries? #CHIIR2022 what's going on? 2022-03-14 16:10:37 Trying to attend "Conversational Information Seeking: Theory and Evaluation (Session 1)" at #CHIIR2022, but the Zoom link in the conference room in Gather isn't working. Anyone have a clue? 2022-03-14 13:54:10 RT @chirag_shah: A future of search driven by large language models -- is this what we want or should have? Are there other perspectives to… 2022-03-14 13:42:46 So for those at #chiir2022, we're presenting in Viewpoints and perspectives 1 and (Mar 15, 7pm PST) and Viewpoints and perspectives 2.b (Mar 17, 8:30am PST). @chirag_shah and I hope to see many of you there! /fin 2022-03-14 13:42:23 Finally, we present some thoughts about paths forward, including a call for transparency at many levels and desiderata for ideal search systems. 7/ 2022-03-14 13:42:05 We also explore how language model based dialogue agents can exacerbate problems such as the representational harms documented by Sweeney 2013 and Noble 2018: 6/ https://t.co/Rpz8KgTnJv 2022-03-14 13:36:00 In the paper, we explore different Information Seeking Strategies (ISS 5/ 2022-03-14 13:35:39 The conceptual flaws have to do with the vision of how web search relates to human information seeking behavior: we need tools that help us surface and contextualize information sources, not systems that authoritatively provide 'answers'. 
4/ 2022-03-14 13:35:22 Technical flaws include the fact that the language models aren't designed to perform "reasoning" (despite wild claims, such as Metzler et al 2021 referring to their "reasoning-like capabilities"). See also Bender & https://t.co/cYj1vKUpu1 3/ 2022-03-14 13:34:56 @chirag_shah In this #chiir2022 perspective paper we argue that using language model driven conversation agents (e.g. LaMDA) for search is flawed both technically and conceptually. 2/ 2022-03-14 13:34:42 Looking forward to #chiir2022 this week and to presenting "Situating Search", with @chirag_shah https://t.co/rkDjc4BGzj 1/ 2022-03-14 02:24:08 @alexhanna Yup. Over and over. Also got calendars, etc. 2022-03-14 02:18:43 @alexhanna (I went to Stanford for grad school and my son is now at UCLA.) 2022-03-14 02:18:11 @alexhanna This building: https://t.co/gGerRrLMKn 2022-03-14 02:17:57 @alexhanna Let's discuss further once you've seen more :) 2022-03-14 02:17:10 @alexhanna Also, if you're going to use the UCLA campus as a stand in for Stanford, maybe don't film in front the building that is on all the UCLA promotional materials??? 2022-03-14 02:16:19 @alexhanna Ohh -- I am so here for the @alexhanna live tweeting of this show :) Also, totally with you on the character feeling relatable. That became quite a struggle for me a couple of episodes in... 2022-03-14 02:04:53 @maria_ryskina @aparrish Cool -- thanks! 2022-03-14 02:04:45 @Ricardo_Joseh_L Well, yes, but do you know of any research (what I meant by "work") on that phenomenon. 2022-03-14 00:25:35 RT @dr_nickiw: This skit said so much about technology..but it said SO much more about the technology *developers.* *insert shameless plug… 2022-03-13 23:55:13 @akil_iyer Thanks! 2022-03-13 22:34:52 (I thought I'd tweeted a similar query before, but can't turn it up...) 2022-03-13 22:34:27 Q for #linguistics twitter: Does anyone know of any work on spell checkers and how they interact with language ideologies, esp. 
about standard and non-standard varieties? 2022-03-12 20:19:40 @GaryMarcus @yudapearl @luislamb Neither AI nor AGI. There are things I am interested in that other researchers view as components of one or both of those, but that is not why I am interested in them . 2022-03-12 20:06:12 @GaryMarcus @yudapearl @luislamb Not exactly: I personally have no interest in building AI. I do care though about countering false claims about what current systems are doing. https://t.co/l3bnO5jG04 2022-03-12 08:11:00 CAFIAC FIX 2022-01-23 21:19:54 @OlgaZamaraeva @ssshanest It's only one word each day and the same for everyone who plays that day. Avoid spoilers :) 2022-01-23 20:26:12 @AmandaAskell @timnitGebru And who does "people" refer to in "people have treated..."? The groups Timnit and I are talking about (enslaved people, disabled people) have understood their own moral patienthood all along. 2022-01-23 20:24:18 @AmandaAskell @timnitGebru If you want to do research into questions about moral patients, etc, fine. But when you bring it into this context, where we are talking about harm to actual people, to defend (or at least "both sides" for) the people making the harmful analogies, I question your priorities. 2022-01-23 20:22:16 @AmandaAskell @timnitGebru 4. When I asked you what you meant by that, you spent time and energy defending the importance of worrying about the potential future harm to potential (entirely hypothetical) entities. > 2022-01-23 20:20:37 @AmandaAskell @timnitGebru 1. Agüera y Arcas wrote a blog post with harmful analogies. 2. I wrote a blog post explaining why those analogies are harmful. 3. You QT'd my post to say that this disagreement is a "cause of friction", portraying my post and his as something like "both sides' of an issue. > 2022-01-23 20:19:37 @AmandaAskell @timnitGebru Future models are not at all relevant to the present discussion. 
2022-01-23 20:19:23 @AmandaAskell @timnitGebru The only evidence for consciousness or sentience in language models (aka the models in question here) is that they are capable of producing language that we make sense of. Others claim that this means they are "understanding" the language, but they aren't: https://t.co/cYj1vKUpu1
2022-01-23 19:07:20 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru @HabenGirma Thank you again for taking the time &
2022-01-23 18:51:55 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru @HabenGirma Thank you, Liz! I have updated the post to use identity-first language consistently (previously it was quite a mix!). Would you be okay with my thanking you by name there for the feedback?
2022-01-23 18:35:36 @tallinzen In my OP: "pattern recognition" should be "only pattern recognition (of the type done by ML models)". I'm not saying there isn't a use for pattern recognition, but rather that it's problematic to assume it alone is a sufficient tool for any problem.
2022-01-23 18:33:36 @AmandaAskell @timnitGebru That you are more concerned for the "rights" of such entities than the harms these analogies cause to ACTUAL PEOPLE speaks volumes.
2022-01-23 18:32:42 @AmandaAskell @timnitGebru I can tell you, and in doing so am speaking from squarely within my expertise, that the models in question do not bear any resemblance to something with the capacity for consciousness. >
2022-01-23 18:01:38 @AmandaAskell @timnitGebru You do recall that this whole thread is about the problems with drawing analogies between disabled people &
2022-01-23 17:59:49 @BlancheMinerva I am aware -- see the footnote in the post.
2022-01-23 13:18:47 RT @XandaSchofield: We're hiring visitors in my department for 2022-2023! This is an open-rank visiting professor position: we're looking for…
2022-01-23 04:55:09 @KenButler12 @Laserhedvig I wasn't asking for help -- just reporting on my experience, TYVM.
2022-01-23 04:15:29 @Laserhedvig Okay, so I went to the German one that day and tried "ETWAS" thinking I was SO clever, and was disappointed. And then realized I was entirely stuck ... do NOT know enough German to make the next move.
2022-01-23 01:51:40 For describing the application of SALAMI for centralizing power & https://t.co/MPJkmCr3c2
2022-01-23 01:50:54 That renaming makes the ridiculousness of those questions crystal clear. Thank you!
2022-01-23 01:50:23 @quinta "Will SALAMI have emotions? Can SALAMI acquire a “personality” similar to humans’? Will SALAMI ultimately overcome human limitations and develop a self superior to humans? Can you possibly fall in love with a SALAMI?" --@quinta in https://t.co/sHfNDO3y7W
2022-01-23 01:49:55 It's not "AI", it's SALAMI: Systematic Approaches to Learning Algorithms and Machine Inferences. Thanks @quinta for this bit of brilliance! https://t.co/sHfNDO3y7W
2022-01-23 01:48:31 RT @quinta: @FrankPasquale @emilymbender the article touches some points I discussed in Rome (Frank, you might remember). Let's call them…
2022-01-23 01:48:27 @quinta @FrankPasquale I love it! Another proposal is #PSEUDOSCI: https://t.co/MPJkmCr3c2
2022-01-23 01:15:47 RT @mmitchell_ai: Let's slow inequity in tech! "In Washington, state senator Karen Keiser has introduced a new bill to expand whistleblower…
2022-01-23 00:51:21 @HabenGirma I'm glad to have the chance to do so!
2022-01-23 00:51:05 I'm going to keep working through our library and other resources, but I'm also definitely curious about people's favorite works on this topic, especially from the point of view of #DisabilityStudies #AcademicTwitter https://t.co/k58vzX5mRv
2022-01-23 00:50:14 I'm both surprised and glad that such a thing exists --- and especially glad that it's a handbook for understanding dehumanization (from primarily philosophical and psychological perspectives) and not a handbook for how to DO dehumanization!
2022-01-23 00:47:55 So (in preparation for my #COGSCI2022 keynote), I'm now looking into the literature on dehumanization, from the perspective of disability studies & https://t.co/EAUc696pvb
2022-01-23 00:36:12 @FrankPasquale Thank you, Frank!
2022-01-23 00:29:10 RT @FrankPasquale: This is such a smart article. It identifies deep flaws in analogies between neural nets and human neurophysiology. It al…
2022-01-22 23:32:33 @mmitchell_ai @timnitGebru Wait, unfair treatment = having someone write a thoughtful response to his blog post?
2022-01-22 23:28:22 @gordic_aleksa @HabenGirma I’m here to take up all the oxygen in the room and exhaust people who are trying to fight against injustice so that we can maintain the status quo, which serves me. I have no interest in learning.” And then in bold text: “Let’s engage”!
2022-01-22 23:28:08 @gordic_aleksa @HabenGirma “Hi! I’m a white dude who likes to play Devil’s Advocate, because other people’s struggles are theoretical to me. It’s fun to debate their right to equality. While we’re here, I would like to centre my voice and perspectives about a cause that means nothing to me! >
2022-01-22 23:26:02 @gordic_aleksa @HabenGirma Just in case the tweet I QT'd there didn't give alt text for the photo, here it is: The photo is of a smiling, bearded white guy looking straight at the camera. Next to him is a bunch of text reading:
2022-01-22 23:24:08 @gordic_aleksa @HabenGirma https://t.co/SnyCxqI6sx
2022-01-22 21:51:39 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru @HabenGirma Thank you for this feedback!
2022-01-22 21:48:00 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru @HabenGirma The system being ableism or something else?
2022-01-22 21:44:40 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru @HabenGirma Also painful to read in my case not because I was the target of the dehumanization in the piece (outside of one comment about women gaining personhood) but because the writing is tiresome, full of twisted logic, etc.
2022-01-22 21:42:43 @elizejackson @ruthstarkman @mmitchell_ai @timnitGebru I was very fortunate to receive comments from @HabenGirma who is Deafblind and Sébastien Hinderer who is blind. I also tried my best to be upfront about my positionality in the piece. Also, definitely open to feedback.
2022-01-22 21:30:55 @timnitGebru Ugh
2022-01-22 21:24:33 RT @emilymbender: Q for #NLProc twitter: Know of any surveys of language technologies and their required resources? (Asking for a student)…
2022-01-22 21:23:21 @ruthstarkman @mmitchell_ai @timnitGebru IKR? Once I decided I was going to write a reply, I also realized I had to finish reading the whole thing. It was painful.
2022-01-22 21:20:23 @mmitchell_ai @timnitGebru I sure hope that folks at Google are really embarrassed by his blog post --- but I'm afraid most probably aren't (even among those who read it), and that's also telling.
2022-01-22 21:12:40 RT @mmitchell_ai: A Google Research VP put out a piece on large language models (LLMs) used in AI. One of the ppl in charge of me &
2022-01-22 14:38:59 @AmandaAskell This comes across as very "both-sides". Is that what you intend?
2022-01-22 14:26:59 @asayeed @LeonDerczynski So either we're using "pattern recognition" to mean two different things, or I don't see the connection between "domain general" and "pattern recognition".
2022-01-22 02:01:21 RT @AJLUnited: Nothing is certain except death &
2022-01-22 01:49:19 Job posting that I'm looking at, deciding whether to forward to my students: "**Ideal Candidate** --> Me: That's a pass then.
2022-01-21 23:02:07 Q for #NLProc twitter: Know of any surveys of language technologies and their required resources?
(Asking for a student) eg: A spell checker can be built with a word list + FST, but to catch homonyms out of context requires a large corpus for an LM.
2022-01-21 21:03:35 RT @DAIRInstitute: "There’s a tendency I’ve observed where people trying to argue that language models “understand” language to draw analog…
2022-01-21 20:23:24 RT @HabenGirma: A Google VP compared AI to Deafblind people, misinterpreting #HelenKeller’s words to paint her similarities to #AI. This hu…
2022-01-21 18:13:06 @crazyuddie You'd still need to make that database dynamic, though, as road signs get added/removed over time. But yeah.
2022-01-21 17:53:40 We always need to keep a larger frame to hand, one which encompasses the possibility of non-ML based approaches and even more importantly the possibility of NOT automating the thing.
2022-01-21 17:52:28 "My ML system isn't working well enough in the real world because there isn't enough data" ... seems to presuppose that all problems are amenable to pattern recognition. But on what grounds should we believe that?
2022-01-21 16:39:39 @ghoshd @EmilyBender Thanks -- wrong account though. I'm @emilymbender
2022-01-21 16:34:16 RT @emilymbender: I just published: No, LLMs aren’t like people with disabilities (and it’s problematic to argue that they are) https://t.c…
2022-01-21 15:27:04 Twitter has spoken, folks. No unsolicited .docx files. https://t.co/kQSr56UZUG
2022-01-21 13:54:06 RT @emilymbender: This blog post by @blaiseaguera draws analogies to the experience of people with disabilities (especially blind and Deafb…
2022-01-21 13:50:43 @dmonett Yes, please!
2022-01-21 13:50:08 @dmonett Thank you, Dagmar!
2022-01-21 13:49:03 @Abebab Thank you, Abeba!
2022-01-21 13:37:57 RT @timnitGebru: First of all, thank you so much for taking the time to write this Emily. I was incensed and that’s where my contribution s…
2022-01-21 13:32:30 RT @dmonett: < We need more of this. We need a tsunami of this.
For it is appalling, utterly alarming how far some #AI advocates have gon…
2022-01-21 06:01:22 @_alialkhatib And then hold this up against Stochastic Parrots which Google said couldn't be published with Google authors on it. https://t.co/iJvycC71gM
2022-01-21 05:56:18 @_alialkhatib I wanted to nope out so bad too, but I also was so angry that I figured I should write the blog post and so I had to read the whole thing...
2022-01-21 05:31:56 This blog post by @blaiseaguera draws analogies to the experience of people with disabilities (especially blind and Deafblind people) which are both false and dehumanizing. I lay out the details & https://t.co/M73ykMMu72 https://t.co/al6qdAFkCD
2022-01-21 05:01:35 @Kobotic @rayljohns Thank you :)
2022-01-21 04:54:42 @Kobotic @rayljohns Fixed the title in the post, but that doesn't change the tweet alas.
2022-01-21 04:35:55 @rayljohns I should probably not use acronyms in blog post titles...
2022-01-21 04:12:40 I just published: No, LLMs aren’t like people with disabilities (and it’s problematic to argue that they are) https://t.co/tMtE18CS96
2022-01-20 23:26:07 See that little knob on the end of the pull cord? It has a purpose that it can't fulfill if the instructor can't reach it. Don't be an asshole, leave it where others can reach. Signed, shorter than average but still a prof. https://t.co/OQTAgyHqiv
2022-01-20 22:44:54 @jtbeavers Wow -- how did you resolve that one?
2022-01-20 22:33:54 @jtbeavers Ouch ouch ouch!
2022-01-20 21:07:41 RT @mmitchell_ai: Also 1 year ago today, the camera-ready of the paper was due, spearheaded by @timnitGebru and @emilymbender, and featur…
2022-01-20 20:36:30 @jessgrieser It's you. This is an explicitly normative poll.
2022-01-20 19:15:54 @VerbingNouns Congrats!!!
2022-01-20 18:27:49 I'm such a sucker for meme mashups...
https://t.co/Hkb83I6QHI
2022-01-20 18:17:17 (this is a subtweet) https://t.co/jT5IfaqjDe
2022-01-20 17:54:46 RT @emilymbender: Someone asks you for some short textual information, that they will need to send to other people as an email. You send it…
2022-01-20 15:57:58 @qpheevr
2022-01-20 15:49:20 @ipanalysis Yes, exactly! Like it's text --- I'm reading an email, maybe on my phone (or previously, in my unix mailer program over an ssh connection). Why should I have to find a way to open your stupid attachment?
2022-01-20 15:36:15 @ipanalysis Haha -- that was actually a typo on my part. It was supposed to say "document", ofc.
2022-01-20 15:09:04 Someone asks you for some short textual information, that they will need to send to other people as an email. You send it in:
2022-01-20 01:34:44 @mmitchell_ai @ruthstarkman @billyperrigo Ditto! And I think the erasure of your work (building the team at Google) is more serious. I was only a co-author on the paper, and more to the point, suffered far fewer consequences from Google's misdeeds around it than either of you!
2022-01-19 21:17:39 RT @NowWeAreAllTom: 2000s iTunes: "hey buy this content! It has DRM!" 2010s Bandcamp: "hey buy this DRM-free content!" 2020s N F T mark…
2022-01-19 20:31:34 @tacertain Thank you!
2022-01-19 20:31:01 RT @tacertain: Are we on a path to computers "understanding" natural language? I don't think so, but never would have been able to articula…
2022-01-19 15:07:39 @timnitGebru @JeffDean @Google @DAIRInstitute I'm glad the collaboration remains a bright point for you, too!
2022-01-19 13:22:54 RT @emilymbender: This is a great profile of @timnitGebru and well-articulated telling of what @jeffdean and others at @google did. https:…
2022-01-19 13:11:26 RT @UVicHumanities: What is linguistic discrimination and how does it become embedded within our institutions? Sociolinguist Kelly Wright t…
2022-01-19 05:37:21 @brendan642 I have no opinion on rule-based approaches to things *outside* of language.
My point in this whole thread has been about rule-based approaches to linguistic structure ("representing language", per se).
2022-01-19 05:36:15 @brendan642 It may well be that modeling the relationship between translational equivalents in two different languages isn't terribly amenable to a rule-based approach -- but then why expect it to be? Translational equivalents do not constitute linguistic systems. >
2022-01-19 05:35:15 @brendan642 But you're back to talking about linguistic structure here, not the relationship between language and the rest of the task. >
2022-01-19 05:34:32 RT @ACharityHudley: Liberatory linguistics centrally involves what we call linguistic reparations, which entails recognizing, uncovering, a…
2022-01-19 05:32:21 This is a great profile of @timnitGebru and well-articulated telling of what @jeffdean and others at @google did. https://t.co/PdhOco4Kzs Really looking forward to the future as imagined by @DAIRInstitute!
2022-01-19 01:27:24 @brendan642 What are those though?
2022-01-19 01:22:41 @brendan642 Oh, and I'd also revise to "hand-crafted rules alone". If your text classification problem relies on e.g. appropriate understanding of the scope of negation, you're gonna want a good parser as one component!
2022-01-19 01:18:35 @brendan642 But what do you mean by "relevant aspects of human languages"? I think I could agree with that statement if you meant something like "relevant aspects of language use" ... whether or not I assign 5 stars to some movie isn't an aspect of language.
2022-01-18 23:27:57 @mmitchell_ai @JeffDean Oh Meg, I am so sorry! And I'm so glad that you are still with us.
2022-01-18 19:25:28 RT @emilymbender: Got opinions about ACL ethics review? Feel like there should be some community input into that process? Here's the ACL Et…
2022-01-18 18:44:43 @brendan642 And yes, it's fine to be impressed with ML (though in general I personally am not) --- so long as being impressed doesn't roll over into being uncritical.
And *if* ML is impressive, surely that is independent of how well rule-based systems work?
2022-01-18 18:43:49 @brendan642 (Oh and re sociolinguistics, of course sociolinguistics is linguistics. "How do speakers make use of variation in linguistic form to construct and convey identities and how does this relate to language change over time"... absolutely linguistic questions.) >
2022-01-18 18:42:53 @brendan642 But a good representation of the linguistic system alone won't do it. And, having observed that, deciding that rule-based systems are worthless is where I see these things go off the rails. Just because you need something in addition doesn't mean they aren't helpful. >
2022-01-18 18:42:03 @brendan642 “Does the person who wrote these words like the movie?” is not. Nor is “Did the person who wrote these words assign 5 stars in the review?”. Having a good representation of the linguistic system can help to answer those questions, of course. >
2022-01-18 18:41:03 @brendan642 “What ways do English speakers use words/constructions to express positive/negative sentiment?” is a linguistic question, in the sense that it can be helpfully approached through the study of languages as symbolic systems. >
2022-01-18 16:53:37 @ian_soboroff @brendan642 @oepen And you think this is news to me?
2022-01-18 16:52:52 @brendan642 Precisely because text classification, while a problem involving language, isn't actually about representing language. I return to: https://t.co/WIeLjXRsrz
2022-01-18 16:51:48 @brendan642 Sure, text classification is important and relevant to NLP. But the fact that you can't write rules to do it based on linguistic introspection doesn't seem to me to support the claim that "human languages were too complex to be effectively represented by hand-crafted rules."
>
2022-01-18 16:25:00 @ian_soboroff @brendan642 That's also tricky, but it's the kind of evaluation that that @oepen & https://t.co/9n9WOYCxQj
2022-01-18 15:52:21 @brendan642 MT is an interesting edge case. Surely professional translators and interpreters develop some kind of intuition about relationships across languages (and probably most bilinguals do, too), but is this kind of knowledge the same thing that linguistics seeks to model?
2022-01-18 15:51:22 @brendan642 I don't think text classification is really a linguistic problem --- it's usually about the content (what people are talking about and what they are saying about it) rather than the linguistic systems involved. >
2022-01-18 15:50:46 @brendan642 Right, *parse selection* is definitely an area where statistical approaches are by far the superior choice. But that's not the whole problem -- coming up with the parses to choose among is also key. >
2022-01-18 15:35:42 @brendan642 What is your basis for saying that the performance of statistical parsers shows the limited capacity for human linguistic introspection? Where do the treebanks they're trained on come from if not linguistic introspection?
2022-01-18 15:19:47 @brendan642 It's not at all clear to me that supervised statistical parsing is superior to rule-based systems—unless your supervision comes from a treebank created by a rule-based system! The time scale for creating a solid, broad-coverage grammar might not suit one's needs, but that's diff.
2022-01-18 14:24:55 @brendan642 And I definitely wouldn't want to do rule-based parse selection either (that was part of my job at YY Technologies, and it was impossible).
2022-01-18 14:07:07 @brendan642 In general, I'm not suggesting that folks should stop doing ML-based NLP, just stop motivating ML-based NLP by dumping on rule-based NLP. That's a weak sauce kind of motivation anyway.
2022-01-18 14:06:13 @brendan642 Still staying within ML, that could be done much better cast as an NER or other sequence labeling problem. But why even bother with that? They said there were too many counties to do this by hand. Number of counties in question: 49. >
2022-01-18 14:05:20 @brendan642 @xenamutt We need to get these over to @aclanthology STAT!
2022-01-18 14:05:00 @brendan642 I once saw a presentation of work in progress by someone (NOT in our program) who was looking to classify court documents by which county they pertained to. Their approach: treat this as seq2seq where the doc is the input and the county is the output. >
2022-01-18 14:03:07 @brendan642 Btw, one of our learning goals for the CLMS program at UW is that students should have the skills to determine, given a problem, what kind of solution (rule-based, ML, combo, either) is well-suited to it. >
2022-01-18 14:01:16 @brendan642 As for the question: Can we do anything interesting/useful with rule-based systems, I think the answer is a clear yes! There's a range of examples here: https://t.co/hV79Gch7Kv >
2022-01-18 13:59:56 @brendan642 One interesting take on comparing rule-based to ML-based NLP I remember is @xenamutt's keynote at (I think) SIGTYP 2020. One takeaway: for morphological analyzers starting from scratch, you'll get there faster hiring a linguist to build the rule-set. >
2022-01-18 13:56:04 @brendan642 I wonder if the shared task paradigm set us up for this/put us as a field on the track of always looking for one "best" solution, ideally one "best" solution that works for everything. >
2022-01-18 13:55:15 @brendan642 Honestly, I don't think it's good for science to insist that either rule-based NLP or ML-based NLP has to prove its value by showing that it's more useful than the other (in at least some cases). It should be enough to ask: can we answer interesting questions with this?
>
2022-01-18 13:53:43 @brendan642 I don't think the quote at the top of the thread implied comparative RQs. It seems to imply: "Can manually created rules be useful for anything, y/n?" (where 'for anything' is probably practical applications, but I'd also include linguistic research). >
2022-01-18 13:01:19 RT @cogsci_soc: Meet the highly anticipated plenary speakers of #CogSci2022! @NeilLewisJr is an assistant professor at @Cornell and @Weill…
2022-01-17 15:02:26 @gullabi I believe @huggingface is working on "model cards" and "data cards", both of which should include information on the language variety|ies represented (for language data) and way more. @mmitchell_ai do you have pointers to more info?
2022-01-13 00:18:17 RT @alexiswellwood: The North American Summer School in Logic, Language, and Information (NASSLLI) 2022 at USC this June is now accepting a…
2022-01-12 20:03:59 RT @VasundharaNLP: Quick thread to teach you a tiny bit of linguistics, using Malayalam examples. (This is an attempt at #LingComm!)
2022-01-12 19:45:27 @complingy Ducks is a good example... So is for!
2022-01-12 19:27:47 @MaxPapillon1 Thank you!
2022-01-12 17:42:43 @SuzanneWakim @brocansky I had one student (that I know of!) in the same boat. They really appreciated the opportunity to attend classes in an environment where their anxiety didn't prevent learning.
2022-01-12 16:09:37 Q for #linguistics twitter: Are there any grammar sketches for Hixkaryana that are available online? The primary reference seems to be Derbyshire 1979, which isn't (as far as I can tell).
2022-01-12 14:32:15 This has been a lot of fun. Also, to the people who think they've read that ___ is the word with the most definitions in English: those kinds of factoids are likely always bunk. https://t.co/yuAyH3dOnq
2022-01-12 14:31:12 Are you signed up for @csdoctorsister's newsletter yet? If not, you have been MISSING OUT!
This week's edition includes the best description I've ever seen of what happens when people try to write about "harm" without engaging with discrimination. https://t.co/Oqz1Wfwpgi
2022-01-12 14:24:37 @lambdaofgod Aw thanks :) The initial tweet really wasn't about book promotion, but then I realized later that it did connect to that book so decided to drop the link...
2022-01-11 22:29:59 @LinguisticsGirl @rctatman I put "word sense ambiguity" to try to avoid arguments about homonyms vs. polysemy...
2022-01-11 19:37:30 @lexicoj0hn Hah -- that's great!
2022-01-11 19:35:34 @christian_hudon Of course there are many others. Did you think I was unaware?
2022-01-11 17:37:18 We do, in fact, discuss bank (among other examples) --- including the surprising claim that historically the senses are actually related!
2022-01-11 17:36:40 And if you want to read more about the ways in which words can have multiple senses, check out Ch 4 of Bender & https://t.co/7fSWxKPNd6
2022-01-11 17:25:27 This is delightful! I'm so glad I asked :) Come wallow in word sense ambiguity.... https://t.co/yuAyH3dOnq
2022-01-11 17:24:12 @Drbenderignacio Hah. I (obviously) share your pain.
2022-01-11 17:05:46 (Mostly this is just commentary on how over-used that one example is, but I'm also kind of curious what people come up with.)
2022-01-11 17:05:23 Quick! Think of an example of word sense ambiguity in English other than bank/river bank/financial institution!
2022-01-11 14:22:33 RT @ReviewAcl: ARR will have a slightly modified timeline for January reviewing cycle. See details here: https://t.co/BljxlV1R0l
2022-01-11 13:40:29 @j2bryson Have you looked into income inequality as a factor?
2022-01-05 16:54:42 These answers are great, folks! Keep 'em coming. Also, so far it seems like there isn't anything as established as the term "manel". Maybe we can fix that in 2022?
(Ideally as a term for past practices only
2022-01-05 16:38:45 (Asking for a slide deck that I'm putting together.)
2022-01-05 16:38:30 If a panel consisting only of men is a manel, what's a panel consisting only of white people?
2022-01-05 13:56:59 @haldaume3 That's a great example! @kirbyconrod is my go-to expert on the linguistics (socioling and syntax) of pronouns. You might check out their work.
2022-01-04 22:13:14 @mmitchell_ai @timnitGebru Not to defend UW CSE here, but FWIW, UW NLP isn't the department. It's the informal NLP group across campus, and I believe the twitter account is run by the same person whose tweet it liked.
2022-01-04 14:22:50 @alexhanna Get well soon!
2022-01-03 21:22:56 RT @DAIRInstitute: Our fellow, Raesetje Sefala (@bonjora) will talk about her work at @Blackathonic on February 5 at 1pm EST. You can get y…
2022-01-03 17:55:26 Either both applicability to NLP tasks *and* linguistic insight are necessary conditions for publication in *ACL venues or neither is. Hmpf.
2022-01-03 04:13:15 @ruthstarkman @CT_Bergstrom @UW People who fish for "cancellation" are pathetic and exhausting. What a strange goal.
2022-01-03 04:10:04 @uwcse It's awkward to speak up against another dept at my own institution, but it's even more awkward to sit silent while one of their (emeritus) faculty members embarrasses our whole university---and directly harms our ability to recruit students &
2022-01-03 04:08:29 @uwcse "We insist on a robust and professional intellectual environment where debate and diverse views can be expressed vigorously and free of personal attacks" This almost sounds like it's explicitly making space for direct attacks on the qualifications of categories of people.
2022-01-03 04:07:31 @uwcse "We value, support, and reward people who work for inclusiveness" Good. But what about people who actively work against inclusiveness? What then?
2022-01-03 04:06:49 I notice an important piece is missing from @UWCSE's "Inclusiveness Statement" https://t.co/xfydDS7kZz ... there's nothing there about holding themselves accountable (collectively) for fostering the described inclusive environment.
2022-01-03 04:04:11 @ruthstarkman @CT_Bergstrom @UW Negative attention is attention, I guess? It is so embarrassing and frustrating, especially for someone in an adjacent field but different department.
2022-01-02 22:28:07 @candersHamilton @Ling_Lass @kathrynbck @LingSocAm SlideSpiel person wrote back saying they would take the version with captions. Hopefully they know how to do that.
2022-01-02 21:27:51 @candersHamilton @Ling_Lass @kathrynbck Thanks -- I ended up editing within the YouTube interface. Not sure if this means that SlideSpiel will get a version that has them in or not. I am super frustrated that @LingSocAm didn't actually plan for captions!
2022-01-02 20:57:08 @BayesForDays haha -- my first interpretation here was that he was saying something about getting ratioed.
2022-01-02 20:19:14 @EmmaSManning
2022-01-02 19:17:44 @Ling_Lass @kathrynbck The subtitles option might be the better route for producing something that SlideSpiel *has to* include in their display....
2022-01-02 19:17:17 @candersHamilton @Ling_Lass @kathrynbck But can you re-upload it after correcting? That's the key question...
2022-01-02 19:16:55 @Ling_Lass @kathrynbck Hm, I see the YouTube ones now, but not how to edit them. I can add subtitles (which is different to captions), and which I think would work (though the viewer would have to turn off captions to be able to see the subtitles).
2022-01-02 19:15:16 @Ling_Lass @kathrynbck I can't get YouTube to make any right now, but I did the recording in Zoom and had that one, so starting there. But this suggests that maybe I was looking in the wrong place in YouTube and might be better off starting with those...
2022-01-02 19:12:48 "is also fully D biased." ("... de-biased").
Next line: that's a mythical end-point, of course.
2022-01-02 19:08:02 @kathrynbck IKR?! Completely unacceptable. And we knew this LSA was going to be hybrid &
2022-01-02 19:07:28 @kathrynbck It looks like YouTube has some kind of forced alignment option. I'll report back on how it goes.
2022-01-02 19:07:01 @BlancheMinerva I have no inside info, but my guess is that it's the usual combination of training on (audio, transcription) + a language model. So, if the transcription has punctuation, there's a chance of correlating it with intonation...
2022-01-02 19:05:59 @kathrynbck I'm hoping I can get the captions embedded in what SlideSpiel uploads because their answer when I emailed about captions was "we're not doing that".
2022-01-02 19:04:23 @kathrynbck I've downloaded the text file and am just editing it.... my plan is to try to add them via YouTube and then go from there. (I'm part of an organized session where we were going via YouTube because we hadn't gotten any info. Now we're supposed to send the recordings to SlideSpiel)
2022-01-02 18:48:59 "representations of words in terms of what else they call a car with" (co-occur with)
2022-01-02 18:45:18 Ironic error: in my description of ASR as input being audio and output being orthography, the transcript had the output as "or soccer fee".
2022-01-02 18:44:46 One place it seems to like to put a comma is after "language" in "natural language, processing" and I can't work out what might be behind that error. I don't hear any cues in my intonation and I can't imagine why it would be frequent in the training data...
2022-01-02 18:44:05 The Zoom auto captions try to output punctuation, but seem to only supply periods (one at the end of each subtitle chunk, regardless of whether it's the end of a sentence) and commas.
2022-01-02 18:43:24 Creating captions for my pre-recorded #LSA2022 talk, starting from the auto captions provided by Zoom, and amused (as always) by the errors. A quick thread, if you want to be amused too:
2022-01-01 18:58:47 Gotta love people who consistently refuse to actually listen and then come back with: "I hope you will consider contributing your views." Yeah, that sure sounds like a good use of my time!
2022-01-01 18:46:53 @BootstrapJersey @Abebab Last reply from me: If you are interested in actually making a difference re harmful applications of AI, the starting point is to **learn from the experts** who are scholars who are also affected by this. I gave you a suggested reading list. /Emily out.
2022-01-01 18:09:08 @BootstrapJersey @Abebab Wait, am I somehow misunderstanding the lack of citations to any Black women scholars in your page --- and you are citing them?
2022-01-01 18:06:39 RT @citeblackwomen: On New Year’s Day 2018 we came up with 5 New Year’s resolutions that we can all commit to in order to Cite Black Women.…
2022-01-01 18:04:56 @BootstrapJersey @Abebab You wanna actually be helpful? Here's a starter pack: Benjamin 2019: https://t.co/Q6qe30C3hX Noble 2018: https://t.co/vrC0bo2nLv Raji 2020: https://t.co/1JnDJoehEQ Gebru 2020: https://t.co/44BHW0mxnc Also, check out DAIR: https://t.co/wJxq3PFDQH
2022-01-01 18:02:34 @BootstrapJersey @Abebab "Best known methods for delivery tech products" is still NOT anything about actually engaging communities and looking to avoid harm rather than make profit. I notice that you utterly fail to #CiteBlackWomen and therefore are missing most of the most relevant work in this space.
2022-01-01 17:56:22 @BootstrapJersey @Abebab "MLOps could be narrowly defined as "the ability to apply DevOps principles to Machine Learning applications"" ... doesn't help me (and I'm fairly tech savvy) know what "MLOps" is, because WTH even is "DevOps"?
2022-01-01 17:55:39 @BootstrapJersey @Abebab First off -- WTH even is "MLOps"? Who is this document meant for? Secondly, I see nothing in that intro paragraph that suggests that the "for all" includes truly everyone and not just devs, companies and *maybe* paying customers. 2022-01-01 17:54:48 @BootstrapJersey @Abebab "This document sets out the current state of MLOps and provides a five year roadmap for future customer needs which is intended to support pre-competitive collaboration across the industry with a view to improving the overall state of MLOps as a capability for all." 2022-01-01 17:48:32 @BootstrapJersey @Abebab Oh, and if you want to post "regular reminders", be my guest. But not in my mentions TYVM. I've not signed up for regular reminders from you. 2022-01-01 17:47:59 @BootstrapJersey @Abebab Dude, I read your link. It's all about software/engineering practices and nothing about a) social context or b) NOT building the thing. 2022-01-01 17:42:24 @BootstrapJersey @Abebab Uh, maybe read the QT *before* replying with something totally irrelevant/beside the point? 2022-01-01 14:47:44 RT @Abebab: its a field-wide plague that's ingrained in the "problem" -> 2022-01-01 14:45:45 @Abebab Yes, exactly! 2022-01-01 14:27:44 ML enthusiasts: You can't just raise problems without also proposing solutions. Us: For one thing, yes we can. For another, @Abebab did: the solution is don't do the thing. https://t.co/fHCNzZAJ0K 2021-12-31 05:30:48 RT @emilymbender: For my #AIEthics tweeps: what do you think? 2021-12-31 05:30:34 @mle_ross I think it could be read in two ways: 1) There's no way anything could have predicted that: hilariously perfect 2) Wow they predicted that!: too subtle Hence the poll :) 2021-12-31 03:40:16 @PatrickDaitya @queerterpreter @kirbyconrod Yes! More here: https://t.co/dlOMS4iyye 2021-12-30 20:03:02 RT @VocalFriesPod: It's that time of year again. time for the top 10 dl'd eps of the Fries! 2021-12-30 20:02:57 RT @VocalFriesPod: 2. 
Me Myself and AI with @emilymbender. I learned so much from this and her Stochastic Parrots paper (with @timnitGebru… 2021-12-30 19:45:43 @queerterpreter @PatrickDaitya @kirbyconrod Backstory here: https://t.co/dqIGaSJwhZ 2021-12-30 17:56:12 @linasigns @LingSocAm I've just sent her an email asking. But I think that @LingSocAm needs to have someone on our side who is responsible for these issues and can raise them e.g. when contracts are being signed with vendors. 2021-12-30 17:45:20 @drgriffis @naacl That is frustrating! 2021-12-30 17:04:55 @Science_stanley I agree that there is a severe lack of diversity in our training sets, but I also think that all data collection needs to be done thoughtfully (i.e. not the "grab what we can take" mentality). Bare minimum here: opt-in by the presenters whose voice data would be collected. 2021-12-30 16:23:09 .@LingSocAm who is in charge of accessibility for our conferences? How do we empower the person in that role to make sure this happens? 2021-12-30 16:15:23 Watching "Top AI news of 2021" pieces go by and noting how many of them are missing a very key development in 2021: @timnitGebru 's founding of @DAIRInstitute! Tech journalists: are you writing puff pieces for industry or are you striving to capture the whole story? https://t.co/NvD7TWpvBI 2021-12-30 16:05:11 @MaxPapillon1 Yes totally agreed that it is on conference organizers (and virtual infrastructure providers) to plan for this, including planning in enough time to produce the captions. 2021-12-30 15:32:48 RT @anggarrgoon: yes, along with recognition that the corrections take a lot longer for some people than others - or rather, that initial a… 2021-12-30 15:17:53 This message brought to you by the video upload instructions for #LSA2022 which are not inspiring confidence. 
2021-12-30 15:17:20 All virtual/hybrid conferences in 2022 had better have captions on pre-recorded videos and not just auto-generated captions, but ones that are corrected (by the presenters or otherwise). There's really no excuse at this point to not plan for this. 2021-12-30 13:48:18 For my #AIEthics tweeps: what do you think? https://t.co/c6KQxmA6Y8 2021-12-30 01:42:48 RT @emilymbender: Brontaroc in #DontLookUp as a send-up of AI/#PSEUDOSCI snake oil (poll): 2021-12-29 19:01:06 Brontaroc in #DontLookUp as a send-up of AI/#PSEUDOSCI snake oil (poll): 2021-12-29 02:53:21 RT @mmitchell_ai: My favorite things about the journalism of @kharijohnson is all the things. His writing craftsmanship, topics (obv the to… 2021-12-28 19:03:13 @Abebab I'm so sorry Abeba! 2021-12-28 15:45:13 And note that the harms in question aren't only physical risk like in the two above examples. There are also psychological harms from perpetuating racism etc. --- thoroughly documented in Sweeney 2013 and Noble 2018. Here's another example of that type: https://t.co/09VQgG8dsh 2021-12-28 15:40:46 Same is also true when the "answers" come as decontextualized (and thus recontexualized) text snippets: https://t.co/4cqYjo3O2L #PSEUDOSCI 2021-12-28 15:39:48 When web search results are presented via voice as "answers" to "questions" it becomes even more clear that we need regulation that holds companies accountable & 2021-12-28 15:38:09 For the accountability and #PSEUDOSCI files: "As soon as we became aware of this error, we took swift action to fix it." Thankfully, it appears no one was hurt, but this kind of reactive approach doesn't cut it Amazon https://t.co/pmXama7EsI 2021-12-28 14:44:58 @Anandstweets Interesting! APA treats the content of the tweet as the title! 2021-12-27 18:59:58 Suggested best practice for the New Year: instead of saying "the AI" or "AI technology" or just "AI", use #PSEUDOSCI ... 
it'll help us keep a better focus on what this tech is actually doing and throw ridiculous claims into relief. https://t.co/MPJkmCr3c2 2021-12-27 18:49:06 RT @naaclmeeting: Announcing call for affinity workshops for NAACL 2022. The goal here is to establish research communities out of affinity… 2021-12-27 14:06:43 RT @michebox: SSRC’s hottest club is the #JustTech Fellowship. It has everything — a livable wage, extra support for adult responsibilities… 2021-12-20 22:00:30 @TaliaRinger My "favorite" part of all of that is when I say: "Hey, maybe before making wild claims about language, talk to the folks who study how language works" and get told "Stop gatekeeping!" https://t.co/j7xbLJUP36 2021-12-20 21:09:27 "personhood is a people thing" --@mmitchell_ai 2021-12-20 20:37:43 RT @mmitchell_ai: Eek. This is really bad. First...equating slaves w language models in order to make a pt that a language model, like a sl… 2021-12-20 17:24:11 @timnitGebru @mmitchell_ai Failing up, indeed. 2021-12-20 17:06:00 @mmitchell_ai @timnitGebru 2021-12-20 00:02:02 @timnitGebru Yes. I don't have time just now for a full response to that essay, but I do have *thoughts* and hope to have time to set them down at some point... 2021-12-19 15:15:20 "Resisting Dehumanization in the Age of AI" is my planned topic for my keynote at #COGSCI2022, so definitely more on this between now and July! https://t.co/ainIvuymLM 2021-12-19 15:14:14 I think this is often done with the goal of humanizing the LLM (or bolstering the claim that it is "an AI"), but it does the inverse and dehumanizes the people in the analogy. 2021-12-19 14:51:59 AI researchers: If you're tempted to call on the experience of people with a disability you do not share to make an argument about what LLMs can do, just don't. It's not going to end well. 
2021-12-17 20:49:16 @rctatman @alexhanna @_jack_poulson His feed really does sound like someone trying their hardest to coin aphorisms, which would be funny in a pathetic sort of way if so many of them didn't include such absolutely abhorrent ideas. But yeah: real person, doing real harm. Many of us have the receipts. 2021-12-17 20:10:20 RT @timnitGebru: Reading about the FB papers, a lot of the articles seem to focus on how FB knew how dangerous its platforms could be & 2021-12-17 17:27:30 @ergodicwalk @thegautamkamath @rajiinio @amandalynneP @cephaloponderer @alexhanna Thanks, @ergodicwalk ! @thegautamkamath you might be interested also in another paper of ours: https://t.co/LIjLDQKb3R 2021-12-17 17:07:41 Not bringing me joy this morning: The number of clicks it takes to remove a stray annotation from a paper in Canvas... 2021-12-17 16:32:24 Bringing me joy this morning: The student who, in the prescribed acknowledgments section of their term paper, (also) put in a good word for their laptop. 2021-12-17 01:05:20 RT @timnitGebru: What a great article by By Esther Sánchez García (twitter handle?) and Michael Gasser (@mapinduzi21k). It is a great summa… 2021-12-16 23:15:27 Public health zeugma https://t.co/yFxzsNa1Ff 2021-12-16 16:14:23 RT @ReviewAcl: If you have received your reviews before December 15, 2021, and are planning to commit to ACL 2022, please remember *NOT* to… 2021-12-16 13:38:57 @EmmaSManning is too close to which would def mean the opposite in that context 2021-12-16 05:44:38 @histoftech From a student in my class this fall: What is made of leather and sounds like a sneeze? . . . . . . . . . . A shoe 2021-12-16 00:30:02 RT @LouNoDear: This is just to say I have buttered the Jorts who was in the closet and who you were probably keeping around for staf… 2021-12-15 23:06:29 RT @WiNLPWorkshop: Our friends at @aclmeeting are seeking mentors and mentees to help improve reviewer expertise. 
Mentees are #NLProc resea… 2021-12-15 20:10:06 @CT_Bergstrom Thank you for this thread --- and I totally agree about hybrid teaching! We've been doing it on our program since a first pilot in 2007. So beneficial in so many respects: students with caregiving responsibilities, students with social anxiety, & 2021-12-15 18:48:52 RT @etiene_d: @Abebab @emilymbender @UjuAnya @timnitGebru @ninadhora by any chance do you know? if not, could you please share? 2021-12-15 17:33:48 Appreciating this description of what this week is about... https://t.co/beb1cLTxDQ 2021-12-15 15:53:11 @LukaszBorchmann I appreciate the intention to mitigate hype. I think that it's important to practice that care in a medium like Twitter. Perhaps this might help in finding a framing that foregrounds how the systems will be used rather than what they do: https://t.co/5NiM5ydDGV 2021-12-15 14:55:44 RT @davidschlangen: "From Natural Language Processing to Natural Language Use" -- new talk, in which I argue that NLP isn't going to give u… 2021-12-15 14:29:49 @LukaszBorchmann So "just pdf" seems like a decent characterization of one end in end-to-end document "understanding", but what's the other end, and why is it deserving of the descriptor "understanding"? 2021-12-15 13:58:36 @Abebab Like for the realization which I hope will power more of your amazing writing and not for the situation which absolutely sucks. 2021-12-15 03:09:04 @kirbyconrod I’ve DMed you! 2021-12-15 01:34:57 @SashaMTL @mer__edith @timnitGebru @UpFromTheCracks Same? https://t.co/mPGsBMBwZe 2021-12-14 17:50:37 @lauriedermer I can't imagine a more Laurie rug! 2021-12-14 17:29:13 Also apropos of #PSEUDOSCI https://t.co/dwAQSY44nP 2021-12-14 17:28:49 Apropos of #PSEUDOSCI https://t.co/D4uPegeKJY 2021-12-14 17:27:58 One of my son's many talents is the ability to find appropriate acronyms, given a description. 
With his assistance, I propose renaming AI to PSEUDOSCI: Pattern-matching by Syndicate Entities of Uncurated Data Objects, through Superfluous (energy) Consumption and Incentives 2021-12-14 14:10:10 RT @FAccTConference: CRAFT (Critiquing & 2021-12-14 13:42:43 What a fabulous initiative! Down with glowing brains and up with thoughtful artistic renditions of what pattern matching by data monopolies driven by uncurated training data & 2021-12-14 13:39:08 RT @ImagesofAI: Today we launch the first images on our free repository https://t.co/TvzLeYbk04. Instead of defaulting to shiny robots or g… 2021-12-13 22:28:35 @aclmeeting Looks like it was probably sent in late September: https://t.co/3i32l09jZo The pony express might well have been faster. 2021-12-13 22:22:03 @TaliaRinger That's hilarious. I'm continually alarmed by auto-complete that puts in specific dates / times. Like: it has NO WAY To know what I'm actually suggesting and shouldn't even try. What if I didn't catch it and ended up confusing the scheduling for something important? 2021-12-13 22:14:59 Another Tweetorial from the #SciComm assignment in my class this quarter. Enjoy! https://t.co/jhyzO3FvD0 2021-12-13 22:14:34 RT @MclachlanEric: How well does Google know you? The truth is, Google knows you well enough to finish your sentences. And it does. As a li… 2021-12-13 20:42:57 And now the @aclmeeting email portal is informing me that registration is open ... for #EMNLP2021. 2021-12-13 20:30:25 @kirbyconrod The proximate cause of my subtweet is not a linguist, if that helps. 2021-12-13 20:29:54 @sh_reya Could perhaps even be at submission time, really.... 2021-12-13 20:28:34 @sh_reya It seems like there's maybe a role for the conferences to play here: perhaps at acceptance time, especially for labs that have submitted very large numbers of papers, reaching out to the PIs in those labs and asking ... something ... (not quite sure what yet). 
2021-12-13 20:06:24 Being busy isn't an excuse, either: If you don't have time to do that kind of mentoring, you don't have time to do research with students. 2021-12-13 20:05:40 Currently subtweeting: Faculty who take on UG researchers, get a paper published with them at a major conference, and then apparently don't tell the students anything about how conferences work nor offer to pay their registration. 2021-12-13 19:53:31 @mle_ross @CeciLoge That is definitely a hazard of online, international conferences. (The better ones try to accommodate at least...) I hope you've managed to get access to the rocketchat! 2021-12-13 15:51:13 This whole article is a "Valid questions aside, the truth is that A.I. has the power to enhance, not diminish, human potential." Valid questions ... aside?? https://t.co/jOVLt4YNcT 2021-12-13 14:15:11 RT @emilymbender: PSA to instructors, this assignment is not only valuable for students, but a real boost for the instructor come end-term… 2021-12-13 13:59:16 @terrible_coder @LeonDerczynski @justsaysinmice The whole point of the #BenderRule is to change community norms. If we interpret no language mentioned as English, we're just continuing the same norms. 2021-12-13 13:53:10 @terrible_coder @LeonDerczynski @justsaysinmice I reject that as a corollary of the #BenderRule, since it lets folks off the hook. 2021-12-13 03:20:25 RT @struthious: from @emilymbender (and she's said something similar in a @RadicalAIPod ) https://t.co/N8BwMJ9qyz 2021-12-13 01:25:01 I got the idea from a colleague during the UW Faculty Fellows program back in 2004, and have enjoyed it ever since... 2021-12-13 01:24:34 @JBogunjoko @hypervisible Ew, yeah. 2021-12-13 01:24:00 PSA to instructors, this assignment is not only valuable for students, but a real boost for the instructor come end-term grading time! 
https://t.co/ydEaXCg8Gn And that goes double for classes on #ethNLP / #AIethics including students currently working in industry :) 2021-12-12 22:49:07 @hypervisible Wow. Did they even notice the contradiction in the first sentence ("people choose from birth")? 2021-12-12 18:46:37 @arademaker Thank you! 2021-12-12 18:40:02 @gini_do I'm so sorry to hear it! Perhaps it is such discouragement leading to not attending the poster session + just being unaware of the rocketchat channel? 2021-12-12 16:02:44 @blazi1 Thanks! 2021-12-12 15:44:14 RT @emilymbender: If you've read this book & 2021-12-12 15:44:00 RT @emilymbender: Not recommending purchasing via Amazon, but I am tickled to see both volumes of Linguistic Fundamentals for #NLProc liste… 2021-12-12 04:54:37 If you've read this book & 2021-12-12 04:54:10 OTOH, Vol 2 (100 Essentials for Semantics and Pragmatics) still has only one review on Amazon, and it's just a complaint about the shipping & 2021-12-12 04:53:21 Not recommending purchasing via Amazon, but I am tickled to see both volumes of Linguistic Fundamentals for #NLProc listed under "Gift Ideas in Artificial Intelligence" https://t.co/i0Yl5tZL5k 2021-12-12 01:31:16 @sina_lana I also left questions for both papers in rocketchat, which shouldn't be timezone sensitive. No response nor evidence that they have even checked. 2021-12-12 01:30:50 @sina_lana I've had really great experiences at virtual poster sessions (in GatherTown, which does work on my local hardware), but that seems to require people showing up. 2021-12-12 01:30:18 @sina_lana The affiliations of these authors suggest that the timezones shouldn't have been a problem, but I suppose they might have been traveling. 2021-12-12 01:11:27 I assume that there are other things going on for them which made not attending the right choice (end-quarter overwhelm? family emergency?) and try to find time to email my questions... 
2021-12-12 01:10:53 In both cases of the papers I had questions about, the first authors are PhD students, which makes (apparently) blowing off the conference even more surprising. 2021-12-11 23:09:51 So, anyone else have questions for #NeurIPS2021 poster presenters only to find no one at their poster / no response to questions in rocketchat? Is this a thing, to not actually attend / discuss? 2021-12-11 00:18:14 RT @annargrs: A great discussion of ethics checklists! A highlight from the panel: Q (@emilymbender): ethics checklists have the danger of… 2021-12-11 00:16:30 This was as amazing as I'd expected (and hoped)! The chat was pretty lively & 2021-12-10 22:37:44 RT @cfiesler: So I'm about to be on a panel (3pm PT) at #NeurIPS2021 (with a lot of REALLY smart people ) and on the chance that someone… 2021-12-10 22:30:20 @cfiesler It's 3pm PT though, isn't it?? (=4pm MT) Really looking forward to it. 2021-12-10 21:54:29 RT @JesseDodge: This is today! Looking forward to being on this plenary panel at NeurIPS to discuss ways we can build incentive structures… 2021-12-10 20:07:50 @michaelzimmer @g8enjamin @timnitGebru @mmitchell_ai @BBlodget @sulin_blodgett @s010n @hannawallach @annaeveryday @cephaloponderer @alexhanna @amandalynneP @rajiinio @spillteori @MCoeckelbergh @JohnDanaher @morganklauss @Madeleine_Pape @luke_stark That looks excellent! And an honor to be included, for sure. I'd definitely encourage you to add something by Dr. @Abebab as well. Perhaps: https://t.co/5LYYmCZ04X or https://t.co/GR6HT58xfo 2021-12-10 16:18:53 RT @aclmeeting: The Program Chairs of @aclmeeting jointly with the @ReviewAcl Editors-in-Chief are delighted to announce the Best Reviewer… 2021-12-10 15:10:18 Info on next year's HPSG conference! 
#linguistics #NLProc #syntax https://t.co/K8pzSiyn8w 2021-12-10 15:09:40 RT @dyo1976: The 29th International Conference on Head-Driven Phrase Structure Grammar will be held online, hosted by Nagoya University and… 2021-12-10 01:42:31 @DariaYasafova @rajiinio Thank you for coming to our poster! 2021-12-09 20:23:52 Uh wow -- I'd guess that anyone working in #NLProc at @Google would be quite embarrassed by this... https://t.co/wnVWFffRtp 2021-12-09 16:50:16 Poster session happening now! Come say hi :) #NeurIPS2021 https://t.co/LYpRMlUF2L 2021-12-08 22:11:37 RT @cfiesler: Just gave a talk to @NYUDataScience titled "Data Is People: Unintended Consequences in AI & 2021-12-08 19:24:24 RT @YJernite: Glad this metaphor landed And this is as good a time as any to direct people who want to read more about these issues to @… 2021-12-08 18:55:11 @marypcbuk @timnitGebru Wow. I'll have to read the paper --- attempting to make a taxonomy seems like possibly a good contribution --- but I have to wonder: is Google's legal department an uncredited author here? How do we know what hoops they had to jump through to publish this? 2021-12-08 18:30:47 Great resonance with @marylgray's points yesterday about asking: "What right do I have to use this data in this way?" and with the @RadicalAIPod episode with @schock on the notion of enthusiastic consent. 2021-12-08 18:29:08 @YJernite In a bit more detail: Something that was obscure comes easy to find 2021-12-08 18:28:15 Great metaphor from @YJernite re privacy issues with data scraped from the public internet: the problem when a needle in a haystack becomes a needle on a pedestal #NeurIPS2021 2021-12-08 16:33:26 @AlexBaria Thank you! 2021-12-08 16:33:20 RT @AlexBaria: I’ve been wanting to read this paper and it did not disappoint! It really hits on a key issue of how researchers tend to con… 2021-12-08 15:48:51 @dlowd Still not a class of things to learn, on a par with "vision" or "language", is it? 
2021-12-08 15:11:05 @compthink If the language is "small" (in number of speakers, in spheres of usage, otherwise), how did it come to be that way? These stories are not the same between actual languages and conlangs. 2021-12-08 15:01:36 @compthink It's not actually about data complexity, but rather about the social context. Who does the language belong to? What role does it play in their lives? Etc. 2021-12-08 14:15:13 Because the discourse I see seems to go further than just "sub-communities" but also talk about "classes of tasks/things to learn". But how can "reinforcement learning" be a class of things to learn? https://t.co/RFyEgei4ee 2021-12-08 14:14:02 An on-going mystery for me: how #AI folks see the field as (previously?) divided into: vision, speech/language (sometimes separate) and reinforcement learning. Is that just an artifact of how the research community organized itself? > 2021-12-08 13:07:37 RT @rajiinio: We reviewed 100+ ML survey papers & 2021-12-08 04:02:04 Just caught up on @marylgray's AMAZING #NeurIPS2021 keynote. * Thinking about data edges (relationships) rather than data nodes (points): what (specifically) gives us the right to that data? * Fast doesn't mean efficient, if it means building things poorly. * so much more! 2021-12-07 23:00:32 RT @mmitchell_ai: Super excited to announce the Data Measurements Tool! An open-source project (coded up personally by me, @YJernite, an… 2021-12-07 22:05:30 @cfiesler @rajiinio That panel is going to be AMAZING! Can't wait :) 2021-12-07 21:09:59 @Sparksbet No. 2021-12-07 21:00:02 PSA: If you're submitting to an #NLProc journal, be sure not to cite papers about the other NLP. Yikes. 2021-12-07 20:58:09 @gugliacci @LingSocAm Oh, bummer :( 2021-12-07 20:36:10 @LingSocAm Thank you. 2021-12-07 19:50:41 Hey @LingSocAm once again: If you've got a virtual or hybrid conference, the schedule MUST include the time zone! I'm guessing this year it's EST? 
2021-12-07 19:15:49 RT @mixedlinguist: Again, recruiting folks! Are you a woman, 18-32, with 1 black & 2021-12-07 17:23:31 That was in connection to Stochastic Parrots, which took shape as quickly as it did only because @timnitGebru @mcmillan_majora @mmitchell_ai @vinodkpg @blahtino and @benhutchinson all brought their expertise and poured it in! 2021-12-07 17:21:07 But once doesn't make a hobby! The other involved Stone Soup (actually a tale older than the 1970s, but hey, *I* encountered it in the 1970s): https://t.co/MvI126DfNg 2021-12-07 17:16:56 In a tiny bit more detail: https://t.co/91d2Rrl8I2 2021-12-07 17:15:48 For anyone wondering what this was about, it was (partly) in reference to what is now Raji et al 2021 in the #NeurIPS2021 Datasets and Benchmarks track: https://t.co/kR4ZA1Bawz (In poster session 3, Thursday) https://t.co/jlPpKomYGP 2021-12-07 17:05:32 @athundt Someone did ask how they decide which languages to add ... the list he was drawing from in making that comment was "small languages on Duolingo" (contrasted with "major languages, you know, the ones you have heard of"). 2021-12-07 15:47:04 @MadamePratolung Languages add words all the time, so that's not actually such a strong criterion. The key question is one of community: does the language have a community where it is (or has been) historically used for daily communication? 2021-12-07 15:21:17 Trying and failing to figure out how to make a question out of this. Maybe I should just put a comment in the rocket chat. 2021-12-07 15:13:10 I mean, I don't have any objection to Duolingo including conlangs, but they simply can't be equated with languages of actual communities. 2021-12-07 15:11:00 Interested to watch the von Ahn keynote at #NeurIPS2021 ... but did he really just cite "High Valerian, Irish, Klingon" as examples of "small languages"? 2021-12-07 14:44:10 RT @haldaume3: Deadline for applying to the FATE Postdoc position is: December 15! 
If you're doing cool work on fairness, accountability,… 2021-12-07 13:56:34 Helpful comment from a reviewer bringing me joy this morning: "- In the reference list, ref 7 shows a parrot symbol" 2021-12-07 13:07:52 @Abebab Congrats 2021-12-07 13:06:09 RT @SeeTedTalk: When you announce how many papers you or your lab, etc. has had accepted or is presenting, it would be great (seriously, no… 2021-12-07 00:59:45 I am *loving* the use of the as a reactemoji in the https://t.co/AO3tFrf8T2 for @timnitGebru and @cephaloponderer 's #NeurIPS2021 tutorial this afternoon :) 2021-12-06 17:34:23 RT @UW_iSchool: Join us today at 12:30 PT for an iSchool Research Symposium with @alexhanna, a sociologist on Google's Ethical #AI team. He… 2021-12-06 17:28:50 RT @emilymbender: What does this mean for the effective deployment of ML benchmarks? See "AI and the Everything in the Whole Wide World Be… 2021-12-06 17:19:45 RT @timnitGebru: Thank you to the Guardian for publishing this opinion piece that I wrote. The title I had in mind was “To reduce the harms… 2021-12-06 15:57:41 Thanks @bendee983 for this thoughtful coverage of our #NeurIPS2021 paper w/@rajiinio @amandalynneP @cephaloponderer and @alexhanna https://t.co/2gQTUFjeK8 2021-12-06 15:56:28 RT @bdtechtalks: Why we must rethink AI benchmarks https://t.co/LpKZ5eAvHQ 2021-12-06 15:43:27 @timnitGebru "what truly stifles innovation is the current arrangement where a few people build harmful technology and others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future." --@timnitGebru https://t.co/N5dEntD7E8 2021-12-06 15:42:36 @timnitGebru "We need governments around the world to invest in communities building technology that genuinely benefits them, rather than pursuing an agenda that is set by big tech or the military." @timnitGebru in the Guardian: https://t.co/N5dEntD7E8 > 2021-12-06 15:41:55 Brilliant op-ed by @timnitGebru "So what is the way forward? 
In order to truly have checks and balances, we should not have the same people setting the agendas of big tech, research, government and the non-profit sector. We need alternatives." https://t.co/N5dEntD7E8 > 2021-12-06 15:27:54 RT @annargrs: Many #NLProc people complain about #Reviewer2. How about spending 5-10 minutes to try to actually improve paper-reviewer matc… 2021-12-06 14:18:54 RT @timnitGebru: If you’re at @NeurIPSConf @cephaloponderer and I have this tutorial tomorrow (Monday). We will also be joined by @alexhann… 2021-12-06 13:54:34 RT @emilymbender: Just skimmed the #NeurIPS2021 Datasets and Benchmark papers abstracts and I want to highlight these papers introducing na… 2021-12-06 13:50:14 @joavanschoren Fantastic! Thank you :) 2021-12-06 13:39:41 RT @emilymbender: Ever since ELMo, muppets have been key figures in ML. But Sesame Street can do more for us than just provide mascots. Con… 2021-12-06 13:26:34 RT @MasakhaneNLP: If you're attending #NeurIPS2021, please join @vukosi and @davlanade's tutorial: A Journey Through the Opportunity of L… 2021-12-06 05:13:51 @Abebab @alienelf Thank you! 2021-12-05 21:10:42 @joavanschoren @cephaloponderer If you like! I was trying to conserve characters but you're right that matching is good. Makes it more likely that people will use it consistently! 2021-12-05 20:09:28 @joavanschoren @cephaloponderer Hmm... short is good for hashtags. Maybe: #NeurIPS21DandB ? 
2021-12-05 19:03:13 RT @aclmeeting: Announcement: The list of accepted workshops and co-located events at ACL, COLING, NAACL and EMNLP in 2022 is out #NLPro… 2021-12-05 15:35:57 The CPD Data Set: Personnel, Use of Force, and Complaints in the Chicago Police Department Thibaut Horel, Lorenzo Masoero, Raj Agrawal, Daria Roithmayr, Trevor Campbell https://t.co/zwNFeRUP15 2021-12-05 15:35:49 Trust, but Verify: Cross-Modality Fusion for HD Map Change Detection John Lambert, James Hays https://t.co/mwXwDl8jV9 2021-12-05 15:35:39 HumBugDB: A Large-scale Acoustic Mosquito Dataset Ivan Kiskin, Marianne Sinka, Adam Cobb, et al https://t.co/Kk0flkbezC 2021-12-05 15:35:25 DENETHOR: The DynamicEarthNET dataset for Harmonized, inter-Operable, analysis-Ready, daily crop monitoring from space Lukas Kondmann, Aysim Toker, Marc Rußwurm, et al https://t.co/HqJycszZJa 2021-12-05 15:35:10 WildfireDB: An Open-Source Dataset Connecting Wildfire Occurrence with Relevant Determinants Samriddhi Singla, Ayan Mukhopadhyay, Michael Wilbur, Tina Diao, Ahmed Eldawy, Mykel Kochenderfer, Ross Shachter, Abhishek Dubey https://t.co/iCrYi3Qyvt 2021-12-05 15:35:03 Not language datasets, but I want to highlight a few more for naming the location in the abstract: #NeurIPS2021 2021-12-05 15:34:15 Not in the title/abstract, but really great language (actually script) diversity in this paper: OmniPrint: A Configurable Printed Character Synthesizer Haozhe Sun, Wei-Wei Tu, Isabelle Guyon https://t.co/GbdsiQsi8l 2021-12-05 15:34:01 RP-Mod & Dennis Assenmacher, Marco Niemann, Kilian Müller, Moritz Seiler, Dennis Riehle, Heike Trautmann https://t.co/TQavRQ78aO 2021-12-05 15:33:50 Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark Solène Evain, Ha Nguyen, Hang Le, et al https://t.co/tJFYddsErT 2021-12-05 15:33:31 KLUE: Korean Language Understanding Evaluation Sungjoon Park, Jihyung Moon, Sungdong Kim, et al https://t.co/hYmYTDeIPc 2021-12-05 15:33:13 CrowdSpeech and Vox DIY: 
Benchmark Dataset for Crowdsourced Audio Transcription Nikita Pavlichenko, Ivan Stelmakh, Dmitry Ustalov https://t.co/eSWraEXohC 2021-12-05 15:33:02 LiRo: Benchmark and leaderboard for Romanian language tasks Stefan Dumitrescu, Petru Rebeja, Beata Lorincz, et al https://t.co/rXqZ53SuVH 2021-12-05 15:32:48 A Toolbox for Construction and Analysis of Speech Datasets Evelina Bakhturina, Vitaly Lavrukhin, Boris Ginsburg https://t.co/h5X8uzYQVl 2021-12-05 15:32:36 FFA-IR: Towards an Explainable and Reliable Medical Report Generation Benchmark Mingjie Li, Wenjia Cai, Rui Liu, et al https://t.co/eWfr1rttYj 2021-12-05 15:32:03 Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers Loren Lugosch, Piyush Papreja, Mirco Ravanelli, Abdelwahab HEBA, Titouan Parcollet https://t.co/t50YwhMztR 2021-12-05 15:31:46 The People’s Speech: A Large-Scale Diverse English Speech Recognition Dataset for Commercial Usage Daniel Galvez, Greg Diamos, Juan Torres, Keith Achorn, Anjali Gopi, David Kanter, Max Lam, Mark Mazumder, Vijay Janapa Reddi https://t.co/6PyfL4mKXK 2021-12-05 15:31:29 Native Chinese Reader: A Dataset Towards Native-Level Chinese Machine Reading Comprehension Shusheng Xu, Yichen Liu, Xiaoyu Yi, Siyuan Zhou, Huizi Li, Yi Wu https://t.co/kgClJtCbd2 2021-12-05 15:31:16 CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge Yasumasa Onoe, Michael Zhang, Eunsol Choi, Greg Durrett https://t.co/YKetRMvmiq 2021-12-05 15:31:06 KeSpeech: An Open Source Speech Dataset of Mandarin and Its Eight Subdialects Zhiyuan Tang, Dong Wang, et al https://t.co/4UNDXNUbfV 2021-12-05 15:30:48 Just skimmed the #NeurIPS2021 Datasets and Benchmark papers abstracts and I want to highlight these papers introducing natural language datasets which name the languages up front (title or abstract): 2021-12-05 14:10:20 So, is there a standard hashtag for the #NeurIPS2021 datasets & @cephaloponderer 2021-12-05 13:20:44 @rajammanabrolu @mark_riedl I've gotta ask, though, 
@rajammanabrolu @mark_riedl --- which language is it in? https://t.co/U4pArFTnct
2021-12-05 13:17:37 Very cool dataset in #NeurIPS2021 Datasets & @mark_riedl leveraging text adventure games: https://t.co/hoSPPYX0Iy
2021-12-05 13:12:49 @joavanschoren Nor anyway to "bookmark" individual Datasets &
2021-12-05 13:11:04 @joavanschoren There is also not, as far as I can see, any way to search the datasets and benchmarks papers from within the virtual conference site beyond choosing one of the poster sessions and using Ctrl-F.
2021-12-05 13:10:13 @joavanschoren (I'm guessing the papers were uploaded into the pre-proceedings site and linked from within the virtual conference site, but this page wasn't there yet and/or not linked from the main proceedings page when I went looking for it: https://t.co/3cepShoE4N )
2021-12-05 13:03:59 @joavanschoren Thank you! I see the preproceedings now (but couldn't fit it yesterday). And I hope it works out to get separate rocketchat channels.
2021-12-05 13:00:02 #NeurIPS2021 Datasets and Benchmark papers are now up in the preproceedings! https://t.co/12PQGvCdVX Thanks @joavanschoren
2021-12-05 05:45:17 @megandfigueroa @jessgrieser @VocalFriesPod Ooohhh!
2021-12-05 02:23:15 @SashaMTL So sorry to hear it.
2021-12-05 01:21:52 @AnthroPunk Awesome!
2021-12-05 01:17:58 RT @emilymbender: Ever since ELMo, muppets have been key figures in ML. But Sesame Street can do more for us than just provide mascots. Con…
2021-12-05 01:09:00 RT @rajiinio: In our upcoming paper, we use a children's picture book to explain how bizarre it is that ML researchers claim to measure "ge…
2021-12-04 21:14:57 RT @rajiinio: It was such a joy to serve as Ethics Review co-chair with Samy Bengio @NeurIPSConf. The scale was ridiculous - over 100 eth…
2021-12-04 20:37:45 RT @UWArtSci: To learn, algorithms need massive datasets. However, AI datasets are vanishing.
@emilymbender, professor of @UWlinguistics, s…
2021-12-04 19:55:17 @MadamePratolung @histoftech @merbroussard Excellent!
2021-12-04 19:52:04 @kirbyconrod I wish more people would do this! Here is my soundcloud (with plans to add a couple more soon): https://t.co/4C8ZjOaA3i
2021-12-04 14:33:35 RT @macfound: New this week, the @DAIRInstitute! Shoutout to @timnitGebru and all the researchers working to creating systems that avoi…
2021-12-04 14:21:37 What does this mean for the effective deployment of ML benchmarks? See "AI and the Everything in the Whole Wide World Benchmark" by @rajiinio, me @amandalynneP, @alexhanna & https://t.co/plsKXf4oCN
2021-12-04 14:18:54 Ever since ELMo, muppets have been key figures in ML. But Sesame Street can do more for us than just provide mascots. Consider Stiles & #NeurIPS2021 https://t.co/zWFRDxMXYN
2021-12-04 13:57:46 More generally: 1) Datasets & 2) Datasets & Feels like #NeurIPS2021 doesn't really want us there... https://t.co/2Reuz1G7hh
2021-12-04 13:54:06 So, the #NeurIPS2021 Datasets and Benchmarks track seems to be set up with one Rocketchat channel per poster session. Our paper is in a session with >
2021-12-04 02:53:59 RT @NeurIPSConf: Learn more about the #NeurIPS2021 ethics review process, including highlights and lessons learned, in this retrospective b…
2021-12-04 02:53:04 Is anyone having success with the green "bookmarks" in the #neurips2021 schedule? I thought I could click them yesterday, but they aren't clickable today. Also, I don't know how to find a display of my bookmarked sessions...
2021-12-04 02:25:30 @ruthstarkman @timnitGebru @cynthiablee @WomeninAIEthics @DAIRInstitute @mmitchell_ai @mcmillan_majora Wow @ruthstarkman -- your students are awesome. Huge thank you to them for investigating this!
2021-12-04 02:25:13 RT @ruthstarkman: @timnitGebru @cynthiablee @WomeninAIEthics @DAIRInstitute @emilymbender @mmitchell_ai @mcmillan_majora Two grads studied…
2021-12-03 23:22:20 @_amandalynne_ Hang in there!!
2021-12-03 15:43:17 Okay, the @aclmeeting ACL Portal is really behind the times. I just got a reminder email that #emnlp2021 early bird registration closes "this Friday" (= Oct 15).
2021-12-03 05:19:00 RT @anggarrgoon: I'm writing a paper with coauthors and we make the point that multilingual searches are hard to do with google because res…
2021-12-03 01:48:13 RT @mmitchell_ai: Congratulations to my former co-lead @timnitGebru on launching the DAIR institute today. =) Imagine! An AI center focusin…
2021-12-02 18:04:42 @astent @annargrs @KarimiRabeeh @tpimentelms @ReviewAcl @chrmanning @aclmeeting @LeonDerczynski For #COLING2018, we kept author identity hidden through the PCs (though that last bit required us 'not peeking'). Reviewers and ACs (in our case) needed to be able to communicate with each other, outside the software, if necessary. Details: https://t.co/fZpJYGGa9N
2021-12-02 17:52:31 RT @timnitGebru: I love how today has been reclaimed as a day of celebration :) Thank you all for your support. I'm gonna write a thread ri…
2021-12-02 17:52:21 RT @ruthstarkman: @timnitGebru @cynthiablee @WomeninAIEthics @DAIRInstitute Thank you &
2021-12-02 15:43:09 RT @rajiinio: I so much admire Timnit's courage, &
2021-12-02 15:40:30 Alright #NLProc tweeps: I *know* you have opinions about reviewing, because I see you sharing them on Twitter. Please take a few moments to share them in these surveys, where they can do some good :) https://t.co/nClPIHms5E
2021-12-02 15:03:37 RT @DAIRInstitute: Our Founder, @timnitGebru, will be speaking at the Women in AI Ethics Summit today! Tune in 8am-12pm PST! #AIChangemaker…
2021-12-02 14:54:43 RT @mer__edith: Oh YES!
Timnit is brilliant and real and exactly the right person to be shaping this research, and the new organizational f…
2021-12-02 14:54:38 RT @dinabass: Timnit Gebru is marking the 1 year anniversary of her dismissal from Google by announcing her new AI research institute -- a…
2021-12-02 14:45:50 @timnitGebru @DAIRInstitute Thank you, @timnitGebru for persevering and for your tireless work making this vision a reality. This is going to be amazing! (And it's going to be amazing at an appropriate, humane, livable pace.)
2021-12-02 14:42:57 When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.” -- @timnitGebru, on the founding of @DAIRInstitute https://t.co/OmUsaO2Yzd
2021-12-02 14:42:20 “AI needs to be brought back down to earth,” said Gebru, founder of DAIR. “It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. >
2021-12-02 14:41:17 @timnitGebru @DAIRInstitute “how to make a large corporation the most amount of money possible and how do we kill more people more efficiently,” Gebru said. “Those are […] goals under which we’ve organized all of the funding for AI research. So can we actually have an alternative?” https://t.co/B5dx7f4Z1Z
2021-12-02 14:39:15 “I’ve been frustrated for a long time about the incentive structures that we have in place and how none of them seem to be appropriate for the kind of work I want to do,” -- @timnitGebru on the founding of @DAIRInstitute https://t.co/O5AAZ5pE5H
2021-12-02 13:38:33 RT @emilymbender: A few thoughts on citational practice and scams in the #ethicalAI space, inspired by something we discovered during my #e…
2021-12-02 04:13:25 In particular, if someone working in this space (and selling services no less!)
appears to be unaware of the work of: @LatanyaSweeney @jovialjoy @rajiinio @timnitGebru @merbroussard @mer__edith @ruha9 @mmitchell_ai or @safiyanoble ... I will approach their wares w/skepticism.
2021-12-02 04:04:40 It's really easy to see that Henry Dobson is not someone to take seriously---he's flat out plagiarized Friedman & >
2021-12-02 04:03:47 I guess it's in vogue to be a "tech ethicist" these days, and of course anyone can hang out their shingle. But someone doing serious work in this space will have good citational practice, relating their work to others'. >
2021-12-02 04:02:14 And from there, the Medium page, apparently by the same person: https://t.co/nwaotICcUY (with the tagline "tech ethicist")
2021-12-02 04:01:24 Another student turned up this old tweet (from March 2020) advertising a keynote by Henry, whose last name appears to be Dobson: https://t.co/GeuJHkTKqK >
2021-12-02 04:00:08 And the listed partners include @MonashUni (who might prefer not to be associated with such a shady organization). >
2021-12-02 03:59:05 On the other hand, "Henry" and colleagues are apparently selling ethics seminars, for $495-$1295, depending on the length of the seminar. >
2021-12-02 03:57:58 We poked around the website a bit more, trying to figure out who is behind it, and could only find a first name ("Henry") as well as this helpful invitation to call ... but no phone number: >
2021-12-02 03:52:34 Some of that text looked oddly familiar!
But a thorough search of the site turned up exactly zero citations to Nissenbaum & >
2021-12-02 03:51:50 In parallel, one of the students did a web search for the term, and landed on this page: https://t.co/zigMrWP7lp
2021-12-02 03:50:52 Anyway, the definition of the term "emergent bias" was highly relevant, of course, so I was pulling up Friedman & https://t.co/DSqrB0u3J2 >
2021-12-02 03:49:52 Week by week, we've been setting our reading questions/discussion points for the following week as we go, so that's where the questions listed for this week come from. >
2021-12-02 03:49:21 Today's topic was "language variation and emergent bias", i.e. what happens when the training data isn't representative of the language varieties the system will be used with. The course syllabus is here, for those following along: https://t.co/ZCIyV1zGrr >
2021-12-02 03:47:47 A few thoughts on citational practice and scams in the #ethicalAI space, inspired by something we discovered during my #ethNLP class today: >
2021-12-01 16:57:10 Here's the "why" for the Curation Rationale element, from the Guide https://t.co/Vr0PE1QLnX https://t.co/skMCJSzeZC
2021-12-01 16:55:45 Yes! Throughout our Guide to Writing Data Statements, we have info on how the writing is useful to both dataset creators &
2021-12-01 14:42:02 This is poetry, indeed! It feels like there's need for a zine of wisdom from author responses, because gems like this deserve a broader audience. https://t.co/hDi2gKbR0M
2021-12-01 04:28:31 [our institution] is of course a big name school. I was sorely tempted to say "top 5%" to the first question and "top 1%" to the second, but of course didn't, because that wouldn't have helped the student I was recommending...
2021-12-01 04:15:29 Recommendation letter season is upon us, along with the absurd questions on the forms.
Best one yet, was a series of two questions: * Compare this student to PhD students you have worked with * Compare this student to PhD students at [our institution]
2021-11-30 15:30:06 RT @evanmiltenburg: Does anyone know when the @NeurIPSConf proceedings for the Datasets and Benchmarks track will be published? The pre-pr…
2021-11-30 13:45:55 Listening to the professional musicians rehearse &
2021-11-30 03:24:21 @foaadk My main concerns about an online matchmaking resource would be protecting the student volunteers from exploitation --- it really can't be "AMT but free". There has to be some kind of commitment from the mentor side...
2021-11-30 03:00:03 @_vsmenon @tobysmenon With this music, I am definitely impressed! I'm afraid I'm not professionally qualified in this arena, but I believe the professionals are also impressed :)
2021-11-30 02:57:19 Beautiful piece by @tobysmenon ... took a little while for the recording to happen because 2020 but definitely worth the wait! https://t.co/EolhvI2aJi
2021-11-30 02:52:12 RT @tobysmenon: Happy to finally be able to share the piece I composed for the Seattle Symphony's Merriman Young Composer's Workshop in 202…
2021-11-30 02:43:56 @billmdev @ml_collective Well, yes, for people working on ML. But (again) "research" in my tweet doesn't refer to only ML research.
2021-11-30 01:21:59 @MishraAmogh What makes you assume that someone contacting me would be interested in AI? I do computational linguistics, TYVM.
2021-11-29 21:23:20 @m__vaisakh @savvyRL @ml_collective Thanks &
2021-11-29 19:29:44 @savvyRL @ml_collective It seems like the pointer to @ml_collective was right on track, just very very weirdly phrased.
2021-11-29 19:29:18 @savvyRL @ml_collective I'm really glad you've created a structure where high-schoolers can be involved! And even better if you've found ways to reach high schoolers beyond those at already well-resourced institutions.
2021-11-29 19:10:11 @savvyRL @ml_collective Not a comment directed at you!
@ml_collective seems like a great idea. I was reacting to "is all you need" (which I realize is a reference to a paper) and the general context of erasure of non-ML work in computational linguistics. For example: https://t.co/T0797itrDm
2021-11-29 17:53:12 @IbrahimSharraf @barbara_plank @robvanderg Your strategy of attending conferences and introducing yourself with info about your experience &
2021-11-29 17:52:24 @IbrahimSharraf @barbara_plank @robvanderg I'm not going to dump unpaid work on PhD students either! Part of the problem is that it's impossible to tell a priori if the "help" will actually be valuable. I think what's missing is a structure that allows people to build relationships first.
2021-11-29 17:42:33 @IbrahimSharraf @barbara_plank @robvanderg Yeah, speaking as someone on the receiving end of lots of those offers of volunteer help, I really don't know what to do with them. The folks offering frequently don't seem to realize that they are also asking for something (my time).
2021-11-29 17:41:34 @DouglasKGAraujo So here's the thing, the person who cold emailed me this morning asking to "get involved" didn't even provide their full name. I was on the verge of inviting them to join our lab's talks mailing list, but then decided I couldn't do that without knowing who I was inviting.
2021-11-29 17:40:20 @m__vaisakh @ml_collective Believe it or not, there is research (even in #NLProc! ) that doesn't involve ML.
2021-11-29 16:37:59 @ArjumandYounus "folks outside of academia/research labs"
2021-11-29 16:06:40 For the high school students, I tend to write back with links to things that I think might be of interest to them (including pointing out that NAACL 2022 will be local to us next year, but also things like NACLO if they haven't found it yet).
2021-11-29 16:05:47 But I also don't see how having university faculty take on the job of mentoring/supervising random volunteers is workable either.
(And it's generally not folks from underrepresented communities who are contacting me, either. The latest is a local tech employee.)
2021-11-29 16:04:31 I definitely value research as a public activity and don't believe that participation should be restricted to people with current university affiliations.
2021-11-29 16:03:47 I also get, roughly once a month, requests from high school students who want to "intern" or otherwise be involved. I respect the hustle, but also that system simply doesn't scale: I don't have the time to supervise such interns, nor do I see it as my job.
2021-11-29 16:02:43 What are the best ways for folks outside of academia/research labs to get involved in research? (Asking for, but not only for, the person in my inbox this morning, who wanted to "get involved" but not do a PhD.)
2021-11-29 15:23:49 @zehavoc My guess is that that first list of skills is their boilerplate description for the position type and that no one noticed it should be edited out for this ad, because of typical ML blinders.
2021-11-29 14:59:22 @zehavoc I didn't think private companies bothered with that? Can't they just hire Dr. Moustache if they want them?
2021-11-29 14:54:14 "Expertise in AI such as Deep Learning" is not, in fact, a general qualification. Indeed, someone who has spent their time building up that expertise has less time to build up other kinds of expertise. #ThingsThatShouldGoWithoutSaying https://t.co/T0797itrDm
2021-11-29 14:48:43 Also, perhaps there's some good reason why this is called a "visiting" position? Are they trying to recruit faculty on sabbatical? Because it looks like they think they only need a linguist's expertise temporarily...
2021-11-29 14:47:56 I wonder if the "We're also looking for" clause is actually even there in the job ads for deep learning researchers?
2021-11-29 14:44:10 When "expertise in AI such as Deep Learning" is hard-coded into your job description, even when you're advertising for expertise in other areas.
https://t.co/ie6U7fZ7Cn
2021-11-28 04:39:51 @SashaMTL Aw thanks! That would be awesome :)
2021-11-27 16:35:03 RT @__femb0t: https://t.co/XS7psrT5t7
2021-11-27 16:06:31 RT @emilymbender: So I read the US Defense Innovation Unit's "Responsible AI Guidelines in Practice", released earlier this month, with gre…
2021-11-26 23:11:12 7. Budget for documentation, because without it, a system cannot be used confidently. (See also: Bender et al 2018 (data statements), Mitchell et al 2019 (model cards), Gebru et al 2021 (datasheets) and Bender, Gebru et al 2021 (Stochastic Parrots, on "documentation debt")) https://t.co/XHWgfABPnX
2021-11-26 23:10:49 6. #AIhype is harmful and should be avoided: https://t.co/HNcpHDHX43
2021-11-26 23:10:25 5. Harms modeling requires diverse expertise: https://t.co/ARVtcN7B5d
2021-11-26 23:09:51 4. Again, it's not a trade-off between "performance" and "ethics". The due diligence required by these guidelines leads to better technology: https://t.co/h9x6UC5Um3
2021-11-26 23:08:50 3. The RAI Guidelines are not meant to be a checklist or a hurdle, but an aid, and furthermore one which leads not only to more ethical practice but better tech. https://t.co/IDEbtU05Sj
2021-11-26 23:08:19 2. AI systems need to be not only transparent and equitable, but also reliable and governable: https://t.co/Ks0tPgcZL0
2021-11-26 23:06:18 1. Guidelines aren't a 100% fix + "let's not build it" is explicitly called out as a possible outcome: https://t.co/1eDMimB4J9
2021-11-26 23:05:42 More broadly, I do not trust the US DoD to be only pursuing socially beneficial technology. That said, there are really excellent points in this document, which seem to be the result of it being written by people used to thinking about accountability for making sure things work.
2021-11-26 23:05:18 The 2nd case study is closer to what I expected: It involves a surveillance application, isn't clearly presented at all, &
2021-11-26 23:04:38 So I read the US Defense Innovation Unit's "Responsible AI Guidelines in Practice", released earlier this month, with great skepticism ... and was quite surprised at how many excellent ideas were in the document! https://t.co/7HdTBTusSu
2021-11-26 22:26:41 @Abebab Ugh, I'm so sorry @Abebab and so grateful that you are persisting nonetheless! Also: typical ML folks thinking they can "predict" things...
2021-11-26 19:36:18 RT @rajiinio: This was the most fun paper to write. I'm so happy we could leverage this analogy all the way to the end, aha.
2021-11-26 17:54:24 @MelMitchell1 @rajiinio @amandalynneP @alexhanna @cephaloponderer It was a childhood favorite of mine (to the extent that I got a copy to read to my kids). I don't think anyone else in the group had heard of it before (@rajiinio certainly had it), but everyone was entertained and Deb RAN with it.
2021-11-26 17:53:27 @MelMitchell1 Thank you! Working with @rajiinio @amandalynneP @alexhanna and @cephaloponderer has been a delight! We found the Grover analogy fairly early on, when what we were talking about one day reminded me of that book. >
2021-11-26 16:25:15 @MelMitchell1 Thanks :) To appear in #NeurIPS2021 in the datasets and benchmarks track.
2021-11-26 02:51:56 @joavanschoren @rajiinio @amandalynneP @cephaloponderer @alexhanna I'm so glad to hear it -- and thank you again for your work on this track.
2021-11-23 17:14:45 @mdlhx Congratulations
2021-11-23 14:38:45 @rajiinio @amandalynneP @cephaloponderer @alexhanna Research designing &
2021-11-23 14:37:46 @rajiinio @amandalynneP @cephaloponderer @alexhanna This from the blog post announcing the track was ...
surprising: "We were pleasantly surprised by the quality and breadth of these submissions," I mean, I guess I knew that ML folks look down on data work, but I wouldn't have thought that extended to the chairs of this track?
2021-11-23 14:36:12 I'm glad to see this track at #NeurIPS2021 and of course pleased to have a paper with @rajiinio @amandalynneP @cephaloponderer and @alexhanna (dream team!) in it. However >
2021-11-23 14:08:30 RT @rajiinio: There's finally a place at @NeurIPSConf for discussions on data and evaluation. It's worth everyone's time to check this o…
2021-11-22 21:11:58 @annargrs @boydgraber @tallinzen Which conference was the 2020 iteration associated with? EMNLP lands at a time that makes sustained virtual participation almost impossible for me (during the teaching quarter), whereas ACL I can more easily make time for...
2021-11-22 21:10:49 @boydgraber @annargrs @tallinzen Also: this format has interesting possibilities for raising the profile of newer/lesser-known researchers, if folks come to a session for a big name but the session doesn't have a defined time slot for that paper.
2021-11-22 20:55:37 @boydgraber @annargrs @tallinzen (What I didn't like about FAccT was that the set up made the audience invisible to the presenters, with questions through slido or similar only. But that seems independent.)
2021-11-22 20:55:03 @boydgraber @annargrs @tallinzen I really liked the format at FAccT 2021, where the Q&
2021-11-22 20:53:45 @boydgraber @annargrs @tallinzen I think there's some value in showing up to Q&
2021-11-22 20:44:22 @shubham_stark @nytimes @CadeMetz @rctatman From the article: "Still, [UCSD philosopher Patricia Churchland] was somewhat impressed by its ability to respond, though she knew a human ethicist would ask for more information before making such pronouncements."
2021-11-22 20:42:53 @shubham_stark @nytimes @CadeMetz @rctatman But also, I don't mean to stake out a relativist position here: I never said it's "societal".
Rather, that questions of ethics should be approached in context (which delphi very much doesn't). >
2021-11-22 20:41:17 @shubham_stark @nytimes @CadeMetz @rctatman I don't think linguistics is the right field of study for this question: you're looking for work on comparative ethics, anthropology, &
2021-11-22 20:24:33 RT @JesseDodge: The computational budget used for NLP research has grown tremendously in recent years. @royschwartzNLP, @strubell, @IGurevy…
2021-11-22 20:18:10 @shubham_stark @nytimes @CadeMetz As @rctatman said: Why are you looking for a general way to answer that question? That seems to presuppose automation/desire to decontextualize things that should definitely remain in their social context.
2021-11-22 19:22:00 Sifting through internship ads for students and was struck by this location description: "The internship will be in-person (i.e., not remote) at the [PLACE] office" There's a study to be done here on the linguistic reflexes of large-scale changes in shared presuppositions…
2021-11-22 17:25:27 @nytimes @CadeMetz To be very clear: Delphi's response there is not "contentious", it's wrong. (And it reflects societal biases, sure, but that doesn't mean it's just "contentious".) To see it framed as "contentious" (i.e. maybe right) was jarring, to say the least.
2021-11-22 17:24:17 Reading the @nytimes piece on Delphi ... and stopped in my tracks by this part. @CadeMetz what inspired you to throw in a completely gratuitous comment denigrating sex workers? Who is your editor and why didn't they catch this? https://t.co/x6qtFbuZX7
2021-11-22 14:29:53 RT @emilymbender: This is a long, difficult and important read. Excellent reporting by @_KarenHao https://t.co/Xz4puenLYZ The upshot: wi…
2021-11-21 19:03:18 @megandfigueroa @sonoranloom Gorgeous Sonoragram!
2021-11-21 16:08:52 @asayeed As late as 2019, when WeCNLP was hosted at Facebook, that was still their password for their guest wireless...
2021-11-21 16:01:32 Facebook's MO seems to be: throw a universal fire accelerant around, see what really catches, and then go hire expertise in the relevant languages, cultures &
2021-11-21 15:59:42 Facebook's reactive approach is never going be sufficient (and suggests that as a company, they aren't really interested in solving the underlying problem). I found this particularly damning: https://t.co/7ZcUFm8K5X
2021-11-21 15:56:42 This is a long, difficult and important read. Excellent reporting by @_KarenHao https://t.co/Xz4puenLYZ The upshot: without well-crafted regulation, ads-driven platforms create terrible externalities.
2021-11-20 21:51:39 RT @Abebab: This is not AI or science but absolute racist trash that should just be banned https://t.co/dKrPJ6ORbU
2021-11-19 19:40:20 Still not there, still wondering. @aclanthology is this an Underline problem, somehow? https://t.co/NeA8Sl9j4u
2021-11-19 19:33:11 @RWerpachowski @IrisVanRooij "garbage", probably.
2021-11-19 15:21:51 RT @emilymbender: "AI" can NOT: * Predict who will commit a crime "AI" can: * Make biased policing look "objective" https://t.co/bwGjDnr6…
2021-11-19 04:35:26 I can't guess what this quote was supposed to mean. The only reading I can find is: "We plant data there for the police, to tell the story they want told, that will allow them to harass and arrest marginalized people." https://t.co/DovpJEP7qX
2021-11-19 04:33:46 "AI" can NOT: * Predict who will commit a crime "AI" can: * Make biased policing look "objective" https://t.co/bwGjDnr6MT
2021-11-19 03:32:48 @biolinguist The pointing or the flying? /ducks
2021-11-18 21:11:31 RT @schock: @emilymbender @RadicalAIPod Ty! Check out https://t.co/a1eO2S333T
2021-11-18 21:09:49 @RadicalAIPod @schock And it turns out that "I have read and understood the terms of service" is a particularly useless search term, even when paired with "Pinocchio" and "cartoon".
2021-11-18 21:09:19 @RadicalAIPod @schock There's an excellent and very apropos comic involving Pinocchio clicking a button that says "I have read and understood the terms of service" that I'd love to share, but I can't find the original (only versions that don't credit the artist).
2021-11-18 21:03:08 A little slow on the report-back this time, but a few days ago I enjoyed this episode of @RadicalAIPod with @schock. One thing that has particularly stuck with me is the q of what would enthusiastic, as opposed to annoyed, consent look like, wrt to tos? https://t.co/PK9ZsI22Rg
2021-11-18 16:41:09 @zoe2ks IKR?? I mean, I assume that's a genuine photo....
2021-11-18 16:01:11 "iel" -- chouette! [French: "neat!"] Also, the photo at the top of this article is a chef's kiss image of "linguistic prescriptivism" https://t.co/95H4gBC04u
2021-11-18 15:34:31 @yuvalpi @SeeTedTalk @haldaume3 @boydgraber To feel any better about anything? No. To resolve to encourage ACL to find a better provider? Yes.
2021-11-18 14:43:02 @SeeTedTalk @haldaume3 @boydgraber The switch to Underline really ... underlined ... just how important this is for making online events cohesive.
2021-11-18 14:42:31 @SeeTedTalk @haldaume3 @boydgraber With a responsive online platform + good chat/text facilities, it can be super engaging to "be there" with a large group of people all experiencing the same keynote, etc. (Also the async commentary on papers worked well at ACL 2020.)
2021-11-18 14:41:36 @haldaume3 @SeeTedTalk @boydgraber I think one really key piece of making the virtual experience valuable is the responsiveness of the virtual platform. Underline is TERRIBLE. Miniconf + RocketChat worked really well (at the cost of LOTS of volunteer hours, I know).
2021-11-18 14:41:31 @haldaume3 @SeeTedTalk @boydgraber Yes!!
2021-11-18 14:39:59 @SeeTedTalk @haldaume3 @boydgraber On the flip side, that is "expensive" in time, while attending virtual conferences is cheap in money (compared to in person).
There's no way I could do the cancel-everything approach for as many virtual conferences as I've attended....
2021-11-18 14:38:45 @SeeTedTalk @haldaume3 @boydgraber Yeah, making time to really attend makes a huge difference. I was able to do that for ACL 2020 (because summer) and got way more out of it than most virtual conferences.
2021-11-18 13:42:10 Totally agree that it's worth working out how to make hybrid conferences work! https://t.co/y5phiA5AnQ
2021-11-17 19:58:01 "We need to think about what happens to people when the only communication they have with their families and loved ones comes with a big surveillance target on their backs" USians: we can do something about this. Call your reps (local/state/fed) and say NO to surveillance. https://t.co/HJFCfW332S
2021-11-17 19:56:32 RT @rctatman: If you work in NLP (or want to) please read this. If, like I do, you think that this is a clear misuse of technology and yo…
2021-11-17 19:55:40 @rctatman @AASchapiro @khia_johnson @theintercept @jonschuppe Agree with what @rctatman said. Thank you for this!
2021-11-17 19:28:08 RT @naaclmeeting: Now that #EMNLP2021 is over, we want to learn from your experiences on how we can improve hybrid conferences for #NLProc.…
2021-11-17 04:41:44 @TaliaRinger It seems like it should be reasonably feasible to create a filter for the most egregious slurs...
2021-11-17 03:46:17 @LeonDerczynski
2021-11-16 17:10:23 @BecomingDataSci Possibly --- though for bitexts, that would be a lot of work. Possibly in the LM component though?
2021-11-16 17:05:57 Inspired by @mer__edith 's paper, I wonder: How would we see the current landscape differently if we replaced every instance of "AI" with "pattern matching by data monopolies driven by uncurated training data &
2021-11-16 17:04:59 A must read, especially those concerned with academic freedom and its purpose. https://t.co/vvDLGspgDG
2021-11-16 16:57:42 RT @mer__edith: New paper!
In which I work through a lot of my uncomfortable observations since joining academia, examining the alarming-b…
2021-11-16 14:26:19 @teemu_roos But also random requests to review someone's course syllabus and the like. At least I've stopped getting so many requests to "jump on the phone" with start-up founders.
2021-11-16 14:24:55 @teemu_roos It's not actually speaking invitations that I'm complaining about --- I get more of those than I can do and just turn down the ones I can't. It's things like this: https://t.co/U4q7xxCTfl >
2021-11-16 05:10:28 I need to get better at just deleting email from random people that contains demands for my uncompensated labor. Replied to one such with a brief helpful comment
2021-11-15 21:56:13 @questoph @dirk_hovy Thanks!
2021-11-15 21:42:12 @questoph @dirk_hovy I'm not looking for technical solutions so much as any discussion of the problem...
2021-11-15 20:33:49 @Ricardo_Joseh_L Cool!
2021-11-15 19:01:08 Q for #NLProc and #socioling tweeps: Has anyone written about the interaction between grammar checkers and stigmatization of language varieties?
2021-11-15 18:58:45 @haleyhaala @Wikimedia Yay!! Congrats!!
2021-11-15 17:51:32 @BakerBdb There are other options: "This tweet contains racist language that our service will not translate" or, with less certainty "This tweet may contain..."
2021-11-15 17:44:56 Actually, not so much "do some good" as mitigate harm. One wonders how such a filter could get rolled out but only partially. Did they just forget about third party uses like Twitter? Or is this a decision that someone made?
2021-11-15 17:42:00 I doubt this problem can be completely solved by filtering for slurs, but it seems like a first step that would at least do some good.
2021-11-15 17:40:42 @Google @Twitter As @CT_Bergstrom has noted (and I've also confirmed) this 'bug' doesn't appear at https://t.co/anWOBCP245. So, addressing this with some filter for slurs that for whatever reason isn't there in the version @Twitter uses?
https://t.co/oseQK5fL3i
2021-11-15 17:38:45 @Google 4) Those harmed by the reinforcement of stereotypes & 5) The company providing the translation (here, @Google and @Twitter) >
2021-11-15 17:37:26 Re @google's most recent #AIfail, let's think some on who the stakeholders of MT services are: 1) The person who is trying to understand content in a language they can't read on their own 2) The person whose words are being translated and also >
2021-11-15 16:59:37 RT @naacl: NAACL2022 Election - Just wanted to clarify that there are *three* openings for the NAACL board, and you can choose 1-3 candidat…
2021-11-15 03:35:46 RT @ChanceyFleet: My absolute joy about Google Translate's new Transcribe feature for iOS withered and died as I failed to escape a bias-dr…
2021-11-14 20:52:50 @alienelf @GretchenAMcC I think the canonical reference there is Hockett's work on "design features of language": https://t.co/ImBYFSkAg7
2021-11-13 18:03:21 @haspelmath 1200 examples paired with a carefully written grammar sounds great. But no, I'm not prepared to go digging around in MSWord files. If you had a toolbox database on the other hand...
2021-11-13 13:45:46 RT @emilymbender: This collaboration is one of my favorite things to come out of Twitter. Such interesting discussions with @amandalynneP @…
2021-11-13 01:30:29 RT @rajiinio: We just published an extended version of the Data & Data has always been a c…
2021-11-13 00:03:49 @JoeyLovestrand Wow -- that's awesome! We aren't set up to easily use ELAN or FLEx data (yet), but hope to be in the future.
2021-11-12 23:19:53 @JoeyLovestrand Yep!
2021-11-12 21:45:09 Hey #linguists—I'm once again looking for folks who do primary descriptive work &
2021-11-12 21:22:33 @myrthereuver @amandalynneP @rajiinio @cephaloponderer @alexhanna This isn't so much a Part II as an updated version of the paper. It is closely related to "AI and the Everything in the Whole Wide World Benchmark", to appear in the #NeurIPS2021 Datasets and Benchmarks track...
2021-11-12 21:13:09 RT @alexhanna: Super happy to see this updated paper out! This paper surveys and encompasses a lot of our thinking on the problems in machi… 2021-11-12 21:07:23 This collaboration is one of my favorite things to come out of Twitter. Such interesting discussions with @amandalynneP @rajiinio @cephaloponderer and @alexhanna and we even manage to write some papers :) https://t.co/1bV0FzrEFI 2021-11-12 21:06:06 RT @amandalynneP: The new and improved version of "Data and its (dis)contents" is published at @Patterns_CP today! Co-authored with @rajiin… 2021-11-12 16:41:03 RT @Abebab: when you hear the term "foundation" you think of something: 2021-11-12 05:37:50 RT @CaroRowland: If you use the ELAN annotation tool in your work, please help us with a new, exciting initiative by answering a few questi… 2021-11-11 18:30:31 @VerbingNouns Yes, this is fine!! Arguably, you have given them a higher-value talk, since you are letting them see part of the process that you aren't ready to make archival. 2021-11-11 13:19:40 @haldaume3 @rajiinio @cephaloponderer @alexhanna @amandalynneP Camera ready for that one will be available soon! 2021-11-10 18:24:21 At the #SustaiNLP2021 panel, and reflecting that CS people are actually really used to thinking about efficiency --- just efficiency in terms of human effort involved. Time to change that optimization objective, I think. #EMNLP2021 2021-11-10 18:04:08 @jacobeisenstein @aclmeeting If you have feedback for the authors of this document, please send it via the form at the link above! 2021-11-10 17:53:55 Attn #NLProc folks: check out this message on efficient NLP that just came through via @aclmeeting and note that they're soliciting community input through Friday Nov 19: https://t.co/9tcdbCudzV 2021-11-10 17:29:22 My hobby: Getting invited to panels and then taking issue with the framing of questions. (Though I think it wasn't unwelcome this time.)
2021-11-10 14:06:59 @timnitGebru @tomsimonite @WIRED Watching the accelerating race to scale (bigger datasets, bigger models, bigger spheres of influence and of course bigger profits) has been exhausting and alarming. All respect to @timnitGebru who has been trying to do something about it and KEPT GOING even after being fired. 2021-11-10 14:04:43 .@timnitGebru to @tomsimonite at @WIRED: "AI needs to slow down. “We haven’t had the time to think about how it should even be built because we’re always just putting out fires,” she said." https://t.co/2valfl0CAJ 2021-11-10 13:45:55 @anoushnajarian @mmitchell_ai @mohitban47 Thanks! 2021-11-10 13:33:55 @BayesForDays @mdlhx @zehavoc @ryandcotterell @gchrupala @cohenrap @jack Un-fucking-believable? (To also account for the putain) 2021-11-10 01:03:33 @malihealikhani @mohitban47 @banazir @aclmeeting Thank you! 2021-11-09 22:38:52 @myrthereuver Me too! It was just such exquisite shade to the parts of the world (waves in USA) without high speed rail... 2021-11-09 22:37:17 @dhumchikdish Ugh, no. There's more to Seattle than Amazon (and Microsoft, etc). Come and see! 2021-11-09 22:28:33 Lol @ "adequate transportation" as the caption on a video of high speed rail. #EMNLP2021 #aacl-ijcnlp2022 2021-11-09 21:50:49 @ruthstarkman @mmitchell_ai @mohitban47 Thank you :) 2021-11-09 21:44:16 @mmitchell_ai @mohitban47 Thank you, Meg! 2021-11-09 20:32:53 @DiverseInAI @mohitban47 Wow -- thank you :) 2021-11-09 20:29:39 @banazir @mohitban47 @aclmeeting Thanks :) 2021-11-09 20:11:59 @TorrentTiago @soldni @aclmeeting Thank you 2021-11-09 19:12:19 @ptullochott @EvpokPadding Thank you! 2021-11-09 17:47:54 @SashaMTL @EvpokPadding Thank you 2021-11-09 17:38:10 @EvpokPadding Thank you! 2021-11-09 17:09:07 @mdlhx @soldni @aclmeeting Thank you! 2021-11-09 17:01:11 @soldni @aclmeeting Thanks 2021-11-09 16:54:52 RT @SustaiNLP2021: And... 
now it’s the time for SustaiNLP 2021 We are very much looking forward to seeing you *tomorrow* Please check… 2021-11-09 14:13:46 My answer (which doesn't fully answer the question, but still): The difference between "being interested" and "doing research on" includes committing to doing enough reading to actually know how to fit your work into the conversation on that topic. https://t.co/RcFeDgsgSr 2021-11-09 14:11:42 Got to be part of a really fun panel last week for our PhD student pro-seminar. Panel topic: the life cycle of a research project. One of the questions was: "How do you manage being interested in too many different topics?" > 2021-11-09 14:05:34 RT @aclmeeting: Note 5: How do I indicate that my submission is for the Special Theme track? Write “ACL 2022 Theme Track” in the Preferred… 2021-11-09 14:01:55 When I clicked "mark as spam" Gmail offered me the option to "mark as spam and unsubscribe". So I tried that and got back something from https://t.co/WWaH3f5g7F saying they've taken me off the list for that type of update, anyway. 2021-11-09 13:55:01 So I deleted my https://t.co/WWaH3f5g7F account last week (sick of their spam) and today I *still* got email from them telling me someone had left "a reason to download" my paper. I'm going to start actually marking these as spam, I think. Seems like the appropriate response. 2021-11-09 13:40:27 @cabitzaf @alexhanna @mmitchell_ai @makedatahealthy @DrValerioBasile Also Datasheets (@timnitGebru et al) and Model Cards (@mmitchell_ai et al). I'm not at the same talk Meg is at, but if Ng isn't citing *all* this work, there are two options: 1) Really poor scholarship 2) Stealing ideas without credit 2021-11-09 13:19:59 RT @KaporCenter: What is algorithmic bias? How does it negatively impact communities of color + how do we protect their civil rights? Do no… 2021-11-09 05:40:48 @AndyPerfors Hooray! Congratulations 2021-11-08 14:21:54 @haleyhaala @WiNLPWorkshop So, uh, would you count UW Linguistics here?
We definitely are working on all of those things, but I hesitate to stake a strong claim, being cognisant of the need to keep doing better... 2021-11-08 13:30:24 RT @haleyhaala: Which #NLProc PhD programs have a strong focus on radical ethics, diversity, and inclusion? Tag them here! @WiNLPWorkshop a… 2021-11-07 20:16:49 @myrthereuver Also -- I don't know that I've ever thought "Well that was a dumb question" at a conference. OTOH: "Who is this person who is giving their own talk in the discussion time" is a pretty frequent one... 2021-11-07 19:57:40 @myrthereuver This is part of why I encourage senior/well-known people to do it too, because it becomes normalized. 2021-11-07 19:53:12 @myrthereuver Asking questions is scary! But one of the main purposes of conferences is networking, and that's an important part of it. Also, someone might want to come find you afterwards for asking such a great question! Own your ideas :) 2021-11-07 19:45:44 Hey #EMNLP2021, esp folks who are there in person: When you ask a question, please introduce yourself! (Even if you think everyone already knows you.) Zoom makes us lazy about this, because it shows our names... 2021-11-07 13:56:43 #EMNLP2021 keynote questions... moderator takes first question from remote audience. Good move! 2021-08-21 21:31:54 @BrilliantBlkGrl I am up-front with students --- I'd say: I can write this letter, but I don't have a lot of details for it and so it won't be as effective as a letter coming from someone who knows you better. Do you have other options?
2021-08-21 13:27:37 RT @mdekstrand: @mark_riedl @Miles_Brundage I’m here for the Second Foundation models, where we learn that the whole thing is held together… 2021-08-20 15:30:30 @GretchenAMcC @jessgrieser @lanegreene @duolingo Also, I'm currently doing German on Duolingo (and completely beholden to my streak), and it definitely helps that I have a pretty solid idea of German grammar from reading lots of HPSG papers over the years.... 2021-08-20 15:29:52 @GretchenAMcC @jessgrieser @lanegreene @duolingo A distinct memory from grad school: another ling grad was taking first year Japanese and complaining about all these disparate confusing things. I said: "strictly head-final language" and he went "OHHHHH!". 2021-08-20 15:22:11 @betsysneller Yes!! 2021-08-20 15:21:53 RT @twimlai: Friend of the show Euclid with a lesson in consistency https://t.co/pG0i9ZxHIt 2021-08-20 15:15:11 @betsysneller Not a photo, but here's a video of my cats being consistently ridiculous: https://t.co/et2LiAtWBg 2021-08-20 00:56:21 @anggarrgoon @megandfigueroa Same! 2021-08-19 18:28:23 @kirbyconrod And some of the most painful discussions for me were around what counts for what in the grad curriculum. I never tried re UG syntax, but do welcome undergrads in 566. 2021-08-19 18:21:32 @kirbyconrod Yeah: there was definitely a sense within the dept that "syntax" maps onto specific faculty, which is connected to conceptions of what "syntax" is that are connected to part of what makes people mad about "generative grammar"... 2021-08-19 18:18:35 @kirbyconrod Thanks, Kirby. 2021-08-19 18:17:42 @kirbyconrod Or maybe four to two, if you count HPSG & 2021-08-19 17:32:18 RT @megandfigueroa: folks who studied linguistics at university, when was the first time you heard about the field? (i was in my second yea… 2021-08-19 16:47:25 @MadamePratolung Not I: https://t.co/gXR4iBKsLj But @mmitchell_ai is one of the speakers, and I think @ruthstarkman will be there...
2021-08-19 16:34:14 And the completely unsupported notion that size is somehow leading to emergent intelligence seems to impress people too much---and take some of the most important options right off the table. /fin (for now) 2021-08-19 16:33:03 If we are actually going to address the harms of pattern recognition at scale (& 11/n 2021-08-19 16:30:36 See also: https://t.co/hB8nTprF2N 10/n 2021-08-19 16:29:40 This all fits, of course, with the way HAI is promoting their upcoming workshop associated with that report: https://t.co/7GiCWwRtJa 9/n 2021-08-19 16:27:42 What are we gaining by doing that? What are the questions we are trying to answer and why are they worth the resources we are/would be pouring into them? If the concern is understanding societal impact, why not use those resources to fund the humanities & 2021-08-19 16:25:57 What would an enormous, publicly funded "foundation model" help us peer into? What happens when you pile up so much text and stir it around? https://t.co/rMDUJUGlO9 7/n 2021-08-19 16:24:36 (Astro)physics is not my field, but my understanding is that particle colliders and telescopes are built because they help humanity peer into the infinitesimally small and the astronomically far. Places/scales where we have much to learn but which **we know exist**. 6/n 2021-08-19 16:23:28 I think part of my issue with the appeal to (astro)physics is the connection to that assumed inevitability. That so-called "foundation models" are actually a real step towards some interesting scientific discoveries. 5/n 2021-08-19 16:22:23 So: they're too big, but they're also inevitable/the only interesting way to do this science, so let's make an even bigger one!
But make it public, that'll fix everything 4/n 2021-08-19 16:21:20 From the report: “While some meaningful research can still be done with training smaller models or studying preexisting large models, neither will be sufficient to make adequate progress on the difficult sociotechnical challenges,” 3/n 2021-08-19 16:20:17 First, there's the fact that this is the suggested remedy to the problems of "foundational models" (they're ill-understood, deployed too quickly, reproduce & 2/n 2021-08-19 16:18:54 I'm finding this analogy to large-scale, collaborative projects in (astro)physics particularly galling, and I'm trying to put my finger on exactly why, so, thinking aloud in a thread... Source: https://t.co/PETmMx2L8w https://t.co/QovgsLsJq5 2021-08-18 23:07:30 RT @michigan_AI: ICYMI @radamihalcea's Presidential Address at #ACL2021NLP, you can now watch it online: https://t.co/V4LLhtGALi 2021-08-18 22:06:05 RT @mer__edith: This is an impressive achievement! Congratulations to the authors. My critique is structural, not personal. That said, I… 2021-08-18 20:55:45 @_alialkhatib But regardless, I think that both framings are problematic in the ways you note... 2021-08-18 20:55:18 @_alialkhatib I don't know which version is older, actually, and both links are still live: https://t.co/xpl9yfWiMV https://t.co/AxJpuYx6Vu Meanwhile this huge report uses "foundation models": https://t.co/HnoPxGVyw7 2021-08-18 19:40:31 @mer__edith Thanks, @mer__edith ! I wonder what's behind the alternation between "universal models" and "foundational models". (There seem to still be two versions of this website up...) 2021-08-17 22:02:51 RT @gradientpub: Machine translation has its origins in Cold War-era defense research, when it was meant to bolster national security and a… 2021-08-17 14:52:03 RT @mixedlinguist: We’re back with a new episode of Spectacular Vernacular, @Slate’s Language podcast. 
Tune in to hear @bgzimmer and I talk… 2021-08-16 22:42:44 @ruthstarkman @timnitGebru IOW, training experiences that help those whose primary academic training is in ML (or CS more generally) see that there is also complementary value in other fields. 2021-08-16 22:42:08 @ruthstarkman @timnitGebru I'm beginning to think that a key component of training will be collaborative projects which show the value of different kinds of expertise: coursework in interdisciplinary contexts where success depends on input from all the disciplines involved. > 2021-08-16 19:23:51 RT @cfiesler: On my wish list: - technical interviews including questions about ethics, because CS departments don't want their graduates t… 2021-08-16 19:22:40 @osazuwa Hard disagree. In many cases can study how the algorithm is affecting the world by looking at how the task is defined and studying the algo's failure modes. Reasoning about that requires skills outside of coding and doesn't necessarily require coding... 2021-08-16 19:21:02 RT @cfiesler: Pretty telling when you're hiring for a "machine learning fairness" researcher position and your job qualifications mention c… 2021-08-16 19:20:45 This isn't unique to ML fairness jobs, but it's particularly galling there... https://t.co/aI2uHed8i8 2021-08-16 19:19:34 Hierarchies of knowledge again (cc @timnitGebru ) How do we break free of this idea that ML expertise is the most difficult & 2021-08-16 18:32:17 RT @jessgrieser: Sociolx twitter I'm doing the thing again. Hit me up with your awesome recent scholarship! My sociolx students (advanced… 2021-08-14 21:48:18 @WomeninAIEthics Some thoughts on that New Yorker piece: https://t.co/h14B4BYgCB 2021-08-14 12:24:42 RT @SashaMTL: I'm digging deep on the #carbonfootprint of manufacturing computing hardware for #MachineLearning (namely GPUs/TPUs). If anyo… 2021-08-13 18:56:00 RT @emilymbender: There's a metaphor in here involving teleportation that I'm particularly pleased with. 
What is a teleportation a metaphor… 2021-08-13 14:18:02 @athundt @mmitchell_ai @_KarenHao Sort of like: "We thought we'd try doing/building X and then we did Y research into the problem and learned that X is a bad idea because Z" ... sounds like a useful thing to have in the research record!! 2021-08-13 14:16:27 @athundt @mmitchell_ai @_KarenHao That's a really interesting question! I'd imagine that FAccT and AIES would already be amenable to such write ups, but perhaps they could be also framed as a kind of negative result for conferences not directly focused on societal impacts? 2021-08-13 13:24:26 “It’s really important to think through the various values that a data set encodes, as well as the values that having a data set available encodes,” -- @mmitchell_ai in this great piece by @_KarenHao https://t.co/kEBH8ojQM3 2021-08-12 22:08:54 @stivits Thanks, @stivits and !! 2021-08-12 19:33:42 @BramVanroy @aclmeeting You can reach the ACL secretariat at the email address listed on the bottom of this site, to check: https://t.co/DrBqU0Xvrz 2021-08-12 19:14:19 @ian_soboroff @mmitchell_ai @aclmeeting Why, exactly, though? I really appreciate how this article centers the perspective of the community whose language it is. 2021-08-12 15:08:21 Speaking of #AIhype https://t.co/YJAubgVE4D 2021-08-12 12:15:49 @RajaswaPatil @shaily99 @danielleboccell Thanks, @RajaswaPatil ! @shaily99, if you are currently a student, check your library to see if they subscribe. If so: you can download a copy for free to keep :) Also, there's a vol 2 now, too on semantics & https://t.co/7fSWxKPNd6 2021-08-12 02:58:10 @alexhanna @schock I'm irregular esteem 2021-08-11 21:36:28 Q for #linguistics Twitter: What are your favorite go-to resources for explaining language ideologies to non-specialists? (As in, what is a language ideology, how do you spot one, what harm can they do?) 
2021-08-11 16:37:31 @internetdaniel @NeurIPSConf I think so, yes, because we are in a phase where we all need to level-up together... 2021-08-11 16:35:13 @internetdaniel @NeurIPSConf For #NAACL2021 we did this because that feedback is valuable! 2021-08-11 16:01:19 @mmitchell_ai @DataSciBae @timnitGebru Argg!! And the quote they use from her is terrible, too: "what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field." 2021-08-11 15:24:46 Also, I'd like to think this would also be an excellent venue for Indigenous authors working on (or with) language technology to publish in. @ICLDC_HI can you help spread the word? 2021-08-11 15:21:43 @aclmeeting https://t.co/Di3SPzEDyi 2021-08-11 15:21:30 One key reference for starters: Wesley Leonard (2017) Producing language reclamation by decolonising ‘language’ https://t.co/pSBFK7baJk 2021-08-11 15:20:56 Is there already a good list of works by Indigenous scholars on lg technology/#nlproc that we could point folks to, to help folks get oriented? #linguistics folks, any pointers? https://t.co/OohLU3RoLB 2021-08-11 15:17:39 I am super excited about the @aclmeeting Theme Track for #acl2022nlp: https://t.co/Fo4wSd0PhH At the same time, I'm a bit worried about #NLProc types suddenly getting excited about Indigenous languages and swooping in with parachute science (or worse). 2021-08-11 00:21:21 Thanks @JulesPolonetsky for the solution :) https://t.co/LAesXiCZKr 2021-08-11 00:15:53 @JulesPolonetsky THANK YOU!!! 2021-08-11 00:09:29 Does anyone know how to disable the "feature" where Google calendar automatically adds a "join with Google meet" to any calendar even with more than one participant? It's annoying to have to remember to delete each time.... 
2021-07-11 15:25:53 RT @csdoctorsister: In accepting that BBC interview now titled “Science in the Time of Cancel Culture” including @emilybender, I asked for… 2021-07-11 12:48:30 @asayeed @LeonDerczynski Lesson learned: always check for previous work by the producer (and the journalist) first. 2021-07-10 23:03:55 @csdoctorsister Speaking of really shoddy fact checking, they couldn't even manage to spell check their fake "warning" sign in the graphic. (I missed this before 2021-07-10 20:08:20 @csdoctorsister Also unsettling was being thanked for such nuanced answers ... as if they expected otherwise? 2021-07-10 20:06:17 @csdoctorsister I forget what question that was a preface to, but somehow I doubt that the likes of P*dro and St*v*n would follow such advice, even if it was also given to them... 2021-07-10 20:05:33 @csdoctorsister I don't remember if they used the phrase "cancel culture" in talking to me, but I do remember being admonished that they like their guests to imagine "steelman arguments" and try to address the most charitable interpretation of what the "other side" has to say. > 2021-07-10 19:34:05 @mmitchell_ai Sorry to hear that Meg. I guess I wanted to share this info by way of warning others off, if not the BBC entirely at least this programme (Analysis) and its channel (Radio 4). 2021-07-10 19:31:28 @csdoctorsister You'd think they'd at least still hold enough journalistic standards to get such simple facts right as name and title (also wrong for me, I'm a professor of linguistics not in computational linguistics). 2021-07-10 19:16:52 Also really gross here is the suggestion that "social justice" and "cancel culture" are different framings of the same thing. 2021-07-10 19:13:59 @csdoctorsister And I honestly had expected better of the BBC than that kind of bait and switch! Because if someone had approached me to talk about "cancel culture" that would have been a clear NO.
2021-07-10 19:12:53 @csdoctorsister I'm fairly confident in what I said, but now quite unsettled as to what the framing will be. 2021-07-10 19:11:36 It's now set to air in a couple of days, with the title "Science in the Time of Cancel Culture" and listing among the experts interviewed not just me and @csdoctorsister but also St*v*n P**nker and P*dro D*ming*s among others. Sheesh. > 2021-07-10 19:10:07 A few weeks ago, I did an interview with the BBC for a documentary then described to me as "investigating the influence of modern social justice movements on the scientific community and scientific research". > 2021-07-10 01:00:54 @djnavarro Congratulations!! 2021-07-09 21:27:56 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister @mapmeld When talking about language varieties belonging to minoritized populations, there are also important questions about balancing access and mitigating surveillance, which I think can only be answered by the communities impacted and wrt specific use cases and technologies. 2021-07-09 21:26:21 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister @mapmeld Where "user-ready" here, inter alia, means ready to be deployed in a particular context without inducing allocation harms. 
2021-07-09 21:25:31 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister @mapmeld The whole data documentation movement (datasheets, model cards, data statements, etc) is in large part about understanding and making clear what language varieties are in NLP datasets so as to allow for reasoning about whether systems trained on a given dataset are user-ready > 2021-07-09 21:23:42 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister @mapmeld And the framing of "standard English" training sets as "error free" in opposition to these other varieties (in the blog post) is super cringe. https://t.co/GYU5j3fCi6 2021-07-09 21:22:50 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister That said, I'm with @mapmeld that the approach in the QT blog post/paper isn't it. The differences across varieties of English (and in particular, between MUSE and African-American Language) are not well-modeled by throwing in random permutations of inflectional paradigms. 2021-07-09 21:21:56 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 @csdoctorsister And it's pretty clear that a large part of the problem is that varieties of English other than MUSE (mainstream US English) are underrepresented, to varying degrees, in the training data. E.g. https://t.co/HyvRScj8SX 2021-07-09 21:20:57 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 Likewise, @csdoctorsister has explored how off-the-shelf sentiment analysis systems just completely fall down on data from Black Twitter. 
2021-07-09 21:19:55 @jay__cunningham @denaefordrobin @writingprincess @adapperprof @DocDre @iogburu @Martez_Mott @LaurenDThomas1 So, yes, it's pretty clear that various #NLProc (including speech) technologies do not work equally well across varieties of English. See, for example for ASR: Koenecke et al 2020: https://t.co/uANR9HATNn Wassink 2021: https://t.co/Pj0rMkep4q 2021-07-09 19:28:02 RT @vinodkpg: Excited to finally announce this effort, set off by follow up discussions around our ACL2020 mentorship sessions, & 2021-07-09 19:26:45 [Caveats: Small sample, non-random, likely non-representative] If representative, this is consistent w/ the notion that #NLProc reviewers overestimate other reviewers' interest in SOTA. I have some thoughts about how to make this more visible to each other … watch this space! https://t.co/e3Q6mzxpBA 2021-07-09 18:28:01 RT @aclmentorship: Glad to launch the ACL Year-Round Mentorship program, open to all students worldwide! You're welcome to apply as a men… 2021-07-09 18:27:57 RT @radamihalcea: A new year-round mentorship opportunity for students across the world interested in #NLProc Students from underreprese… 2021-07-09 14:10:51 RT @emilymbender: Retweeting for reach (I think the platform just doesn't promote things that are mid-thread otherwise). Note that there ar… 2021-07-09 12:09:48 @cfiesler It looks like one of those was from my personal troll, likely because I retweeted your insightful thread. Sorry about that… 2021-07-09 04:49:30 @datingdecisions Hey! There’s more to the West coast of the US than California!! 2021-07-09 04:07:04 @datingdecisions Hah! Caught red-handed by those of us in the pokey time zone out West!!! 2021-07-08 23:57:28 RT @cfiesler: There has been a very upset/angry reaction to a paper published using tweets about mental health. I'm not RTing because I'd l… 2021-07-08 22:21:01 Retweeting for reach (I think the platform just doesn't promote things that are mid-thread otherwise).
Note that there are three polls linked here and I hope folks will answer them all! #NLProc https://t.co/jjaqLQvtGo 2021-07-08 20:06:37 For all those interested in #linguistics, this is a great podcast. Please check out @VocalFriesPod and consider becoming a patron! https://t.co/FH6V6TPUbp 2021-07-08 20:01:53 RT @LingSocAm: The LSA is excited to announce the following distinguished plenary speakers for the 2022 LSA Annual Meeting: Dr. Michel DeGr… 2021-07-08 19:21:42 RT @emilymbender: Poll 1/3: What do you think reviewers in #NLProc weight most heavily in their evaluations of papers? (Where 'hypothesis t… 2021-07-08 18:59:28 @timnitGebru @adrian_weller @mmitchell_ai @anncopestake @ZacKenton1 It was! I don't know the details about availability, though. 2021-07-08 18:55:00 So add that to the usual caveats about unscientific Twitter polls -- I'm curious to see the results anyway but more curious for further discussion about how to shift incentives. /fin 2021-07-08 18:54:19 I realize there are many more things that make papers valuable, and not all papers have to be valuable for the same reason! And also, char limit on poll options makes it nigh impossible to make this kind of question clear enough to be worthwhile. > 2021-07-08 18:53:06 Poll 3/3: Of those same four categories, where would you personally rate SOTA results in terms of how valuable they are? > 2021-07-08 18:52:08 Poll 2/3: What do you think is actually most important in research, in terms of advancing the science and its ability to serve the public? (Same definitions as above): 2021-07-08 18:50:59 Poll 1/3: What do you think reviewers in #NLProc weight most heavily in their evaluations of papers? 
(Where 'hypothesis testing' is short for setting up experiments to test hypotheses & 2021-07-08 18:48:47 @turinginst @adrian_weller @anncopestake @ZacKenton1 One thing that came up in our discussion was how to change research incentives towards the kind of grounded, specific, human-focused work we all (I think) agreed is a valuable direction. In that light, two quick Twitter polls: > 2021-07-08 18:48:08 This was loads of fun! Thank you again @turinginst and @adrian_weller for the invitation and @anncopestake, @ZacKenton1 and Anjali for the really interesting discussion afterwards. > 2021-07-08 16:46:05 @bhecht This key point about governance reminds me of this @RadicalAIPod episode with @divyasiddarth https://t.co/IK2K1dBaz3 2021-07-08 16:43:49 @bhecht "In addition to profits, it’s also important to share the governance of AI technologies with data laborers. With shared governance, we can likely make much more progress on key AI challenges such as privacy, sustainability, and even performance." https://t.co/N82VHjYK9m 2021-07-08 16:43:16 "we in the tech industry should realize that Copilot is merely a taste of our own medicine @bhecht, Hanlin Li, and Nicholas Vincent https://t.co/4HJvNhVK3Y 2021-07-08 16:26:13 @IAugenstein Congratulations!! 2021-07-08 16:18:55 @leahhenrickson @turinginst Thank you for attending! 2021-07-08 16:18:49 @anoushnajarian @MathWorks @turinginst Thank you for attending! 2021-07-08 12:45:10 RT @naacl: If you have attended NAACL-HLT2021, please take a few minutes to fill in the post-conference survey: https://t.co/482BZE5urz You… 2021-07-08 03:53:06 RT @heidi_harley: I would like to alert you to a systemic language bias in US university system. It's complicated so buckle up. 
US colleg… 2021-05-22 19:57:47 RT @mmitchell_ai: Check out @_KarenHao's great piece on LLMs to better understand the scientific discussions around the firing of me & 2021-05-22 18:14:30 @vinayprabhu My main concern is that I want any corrections I make to be taken up by the conference organizers for the hosted video. I'm not inclined to do the work until I know it will have value. 2021-05-22 14:18:01 TFW the conference didn't provide any means to hand-correct the transcript of your prerecorded talk & 2021-05-21 23:17:13 RT @rctatman: sometimes i think about how people in my field are building systems that directly harm people and how they either don't care… 2021-05-21 22:08:18 @SomethingCoward Not asking for a solution, thanks. Just venting about how completely terrible the Springer review system is. (& 2021-05-21 21:59:26 And in case it isn't obvious: this screen cap is *after* I clicked on a different value (since N/A was the default) and then tried N/A again, and no I couldn't submit the review without picking a number here. 2021-05-21 21:57:21 Today's stupidity from "editorial manager" software. N/A apparently isn't an answer? Grrr. https://t.co/36Xq8gSwwS 2021-05-21 21:26:38 Better presentation: https://t.co/dGXfPGMDb2 2021-05-21 20:57:28 In fact, from the blog post: "Wav2vec-U learns purely from recorded speech audio and unpaired text" ... which is cool enough as it is. It's not magic, and there's no need to pretend that it is. #AIhype 2021-05-21 20:57:16 I had to go check the blog post because "just the audio" is clearly impossible. What's it going to do? Invent writing systems all on its own? https://t.co/u7o4OP793h 2021-05-21 16:38:03 RT @farbandish: I feel like a corollary of “any sufficiently advanced technology is indistinguishable from magic” is: “anyone told they’re… 2021-05-21 15:53:17 @jessgrieser Recursion! 2021-05-21 15:29:08 #NAACL2021 Ethics review process report back, in which @KarnFort1 and I provide a window into the ethics review process for the conference. https://t.co/X1yT7YQcmt #NLProc #ethNLP 2021-05-20 20:59:59 @CosmicInglewood @_KarenHao Maybe you misthreaded? I looked at the whole thing before replying and did not see any links. 2021-05-20 20:56:14 @CosmicInglewood This looks like a summary/excerpts of @_KarenHao 's recent article: https://t.co/QQSnQcms6d Not sure what your point is of posting this without credit? 2021-05-20 20:03:15 RT @jessgrieser: Before everybody else makes my cover go viral...formally introducing THE BLACK SIDE OF THE RIVER, forthcoming 2022 from @… 2021-05-20 19:45:33 I've signed. Will you? https://t.co/V3ZVATusp9 2021-05-20 17:48:45 RT @AJLUnited: Join AJL in telling @CBS to #CiteBlackWomen like @jovialjoy, @timnitGebru, @rajiinio & 2021-05-20 17:13:29 RT @_KarenHao: Ever since Google fired @timnitGebru & 2021-05-20 16:44:14 @SeeTedTalk @mwe_workshop 2021-05-20 13:46:50 In case someone prefers to link directly to the pdf, they can find a pdf link on that page :) 2021-05-20 13:46:25 Success! The answer in our local environment was an .htaccess file with RewriteEngine on. So, I've now redirected links to the preprint pdf I put up in my own webspace when we completed the camera ready to the ACM DL landing page for the paper: https://t.co/kwACyKdufD https://t.co/2iQN8SE2JA 2021-05-19 23:42:32 @NicolasFuRivero @qz & 2021-05-19 23:41:51 @NicolasFuRivero @qz Also: "This shift promises to reduce the amount of work it takes to find information through Google. But it’s not clear that this is a problem in need of a solution."
2021-05-19 23:41:04 From @NicolasFuRivero at @qz: AI has no ability to actually understand the words it is saying—but it has gotten quite good at parroting human speech, [& https://t.co/jRakLZrpFW

2021-05-19 22:47:26 RT @alvarombedoya: There is a "nothing-to-see-here" tone to this statement from @60Minutes that, combined with the failure to make any ment…

2021-05-19 22:12:05 RT @sanjrockz: We have a limited number of free student registration vouchers for Emoji2021 @icwsm workshop. If you are a student working o…

2021-05-19 21:07:34 RT @banazir: Something I greatly respect and admire about @timnitGebru is how she never lets her peers go unrecognized: not only co-authors…

2021-05-19 19:14:01 @jjvincent @TheVerge @timnitGebru @mmitchell_ai Though on seeing the "lens towards facts" comment repeated. (As @mer__edith says ... what a dog whistle!) https://t.co/pSIwwhvjiS

2021-05-19 19:12:50 @jjvincent @TheVerge Thank you, @jjvincent for this reporting on how Google's firing of @timnitGebru and @mmitchell_ai undermines their credibility in this area.

2021-05-19 19:11:41 “they fired two of the authors of that paper, nominally over the paper. If the issues we raise were ones they were facing head on, then they deliberately deprived themselves of highly relevant expertise towards that task.” me, to @jjvincent at @TheVerge https://t.co/FKYOuci1Yq

2021-05-19 16:16:26 @sandyasm Especially curious because the "rethinking search" paper read like vaporware but the Google IO announcement suggests near-term deployment.

2021-05-19 16:16:03 @sandyasm I am curious about this too. The one FastCompany article I read didn't clarify.

2021-05-19 15:59:23 @annaeveryday Yeah, I've seen it frequently, especially from reviewers.

2021-05-19 13:41:38 RT @Combsthepoet: The fact that they don't even recognize that they not only erased the research and organizing, but they chose to justify…

2021-05-19 13:32:00 @DingemanseMark Thanks!
2021-05-19 13:31:25 Answer seems to be: yes possible, but ask the folks who actually run the server. So: doing that. Thanks all!

2021-05-19 13:29:09 @dave_andersen Which tells me I really should just see if UW IT can help.

2021-05-19 13:28:36 @dave_andersen This is the university's web server, so I probably only have limited config possibilities...

2021-05-19 13:27:16 Q for Twitter hivemind: is it possible to redirect a URL that ends in .pdf rather than .html? (Folks keep using the link to the Stochastic Parrots preprint I put in my own webspace, and I'd love to have that URL redirect to the ACM DL version.)

2021-05-19 13:02:37 Shorter @CBSNews : That erasure you're all complaining about? We did it on purpose. #CiteBlackWomen https://t.co/UbHWvBd7Hx

2021-05-19 13:00:06 @TaliaRinger As far as I can tell, yes.

2021-05-19 12:47:44 RT @schock: CBS editors trying to rationalize why they erased @jovialjoy @rajiinio @timnitGebru @Combsthepoet in this note: https://t.co/9i…

2021-05-19 01:41:39 RT @rajiinio: Wow - @timnitGebru @iajunwa & No doubt that some serious truth is about to b…

2021-05-19 01:06:43 RT @conitzer: The AI, Ethics, and Society Conference starts tomorrow (Wednesday)! @AIESConf See the program here: https://t.co/Lfpbukygl2

2021-05-19 01:01:45 RT @worldofjem: Explanation on #TikTok https://t.co/Qe7cgmfRzG https://t.co/RNmtWY5A49

2021-05-19 00:59:05 RT @aclmeeting: ACL is looking for nominations for the 2021 Test-of-Time (ToT) Paper Awards with the deadline approaching (31 May 2021): h…

2021-05-19 00:01:14 Also, "a lens towards facts" reads to me as yet another attempt to discredit our work. cc @mmitchell_ai @timnitGebru

2021-05-18 23:59:34 @JeffDean IOW, how dare you criticize Google and also documenting problems isn't valuable if you can't in the same breath say how to solve them.
2021-05-18 23:59:11 So, according to @JeffDean it's okay to turn a critical eye towards Google products so long as you are sure to also point out everything Google is doing in the same space that's good AND propose solutions. https://t.co/cRv7P0WOBp https://t.co/YTM27so2YC

2021-05-18 23:56:49 RT @mmitchell_ai: Appreciate that @richardjnieva was able to communicate with Google/Jeff Dean for the first time about their handling of m…

2021-05-18 22:59:47 RT @AJLUnited: And the @60Minutes erasure just keeps going.

2021-05-18 22:22:43 @sarahshulist (This was an in person class, in the before times. Not a time zone thing.)

2021-05-18 22:22:15 @sarahshulist What detracted from your learning? Class at 8am. Reader, it was an afternoon class.

2021-05-18 21:21:13 RT @Combsthepoet: I don't have the energy to say all I want to say, but I also spoke to 60 mins about the organizing on the ground in Detro…

2021-05-18 20:31:37 RT @rajiinio: Bias in data is inevitable but there's nothing inevitable about deploying a product that doesn't work on a vulnerable populat…

2021-05-18 19:35:29 IIRC, the #FAcct2021 videos were going to be made public at some point. Have we gotten to that point yet? Anyone know?

2021-05-18 19:33:46 @twimlai @samcharrington @mmitchell_ai @_KarenHao @Google @timnitGebru On what @Google should have done, see this post from @GoogleWalkout https://t.co/wDiZsPBa6L

2021-05-18 19:32:22 @twimlai @samcharrington @mmitchell_ai Also hugely relevant is @_KarenHao 's coverage of how @Google fired @timnitGebru (and later @mmitchell_ai) supposedly over the paper. https://t.co/BtBJyiN8YF https://t.co/BJBf74pWFZ

2021-05-18 19:29:48 For those just learning about stochastic parrots, thanks to today's #GoogleIO2021, you can find our paper here: https://t.co/kwACyKdufD Or, you might enjoy this episode of @twimlai where @samcharrington interviewed @mmitchell_ai and me https://t.co/IcuRU4xt4j

2021-05-18 19:07:12 Finally, for anyone curious who hasn't found our paper about this yet, it's here: https://t.co/kwACyKdufD

2021-05-18 19:02:50 I feel like there's a whole discussion of evaluation practice in ML to be unpacked here... cc @rajiinio @amandalynneP @alexhanna @cephaloponderer https://t.co/ce6uEzqOtT

2021-05-18 18:53:06 RT @timnitGebru: "...And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use...Our hig…

2021-05-18 18:52:53 RT @timnitGebru: This is what is called ethics washing.

2021-05-18 18:48:28 RT @timnitGebru: This blogpost co-authored by @ZoubinGhahrama1, who's taken over for Megan Kacholia, says "Responsibility first." Its a jo…

2021-05-18 18:40:03 Goal 1: Be sensible Goal 2: Be specific Goal 3: Be interesting Goal 4: Be factual ... because THAT'S a reasonable ordering of expectations for artificial agents deployed in the world.

2021-05-18 18:33:21 RT @csdoctorsister: Parroting https://t.co/YzjrhoB651

2021-05-18 18:32:15 @mmitchell_ai @Google @timnitGebru You were doing so much excellent work there on the inside (I really loved this blog post giving a window into it), but at the same time, they were pushing forward on stuff like this. What would that have been like?? https://t.co/BiatwQzPJn

2021-05-18 18:31:36 @mmitchell_ai @Google @timnitGebru Absolutely surreal. I'm also finding myself wondering how this might have played out if they *hadn't* made a big deal out of our paper and fired you both.

2021-05-18 18:13:03 @timnitGebru Exactly.
2021-05-18 18:11:46 @Google @timnitGebru @mmitchell_ai "We're deeply familiar with issues involved with machine learning models, such as unfair bias, as we’ve been researching and developing these technologies for many years." Pay no attention to the huge pile of data behind the curtain....

2021-05-18 18:10:55 @Google @timnitGebru @mmitchell_ai "And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use." This seems to suggest via presupposition some "careful vetting" of the training data. Really? I'd love to see the documentation of that.

2021-05-18 18:10:17 @Google @timnitGebru @mmitchell_ai They "are investigating ways to ensure LaMDA’s responses aren’t just compelling but correct." ... but in the meantime, is the model in fact framed in such a way that people interacting with it know it's not grounded in any reality? Is @Google accountable for its words?

2021-05-18 18:09:11 @Google @timnitGebru @mmitchell_ai Source: https://t.co/dugBIkJva6

2021-05-18 18:09:00 And we should trust these claims of responsible development of language-model-based chatbots from @Google because, why, exactly? I mean, they couldn't stand to be associated with a paper carefully documenting the possible risks... #GoogleIO2021 cc @timnitGebru @mmitchell_ai https://t.co/cTS8hjnJbV

2021-05-18 18:05:42 @mark_riedl Wow, that's rich.

2021-05-18 15:32:44 RT @alexhanna: I swear I've seen some work on this, but: any folks working on inequalities brought on climate change and how data used in A…

2021-05-18 14:03:24 RT @anggarrgoon: I'm looking for papers that talk about the use of acoustic language models for filtering for video-conferencing (zoom and…

2021-05-18 13:22:20 .@evanmiltenburg were you the one thinking it might be interesting to do a study of ethical considerations sections? I haven't read this yet, but here's one doing that for NeurIPS 2020 papers: https://t.co/gAmiKiNGBR

2021-05-18 02:45:03 @checarina https://t.co/aI2uHed8i8

2021-05-18 01:51:51 RT @checarina: made a lil meme about the job search https://t.co/bkVzom7Rzg

2021-05-17 22:38:15 Many of us have said over and over again that @Google firing @timnitGebru and then @mmitchell_ai was an enormous self-own. This blog post tells part of the story of what Google lost in that hugely misguided decision. https://t.co/gxH1jBZNNq

2021-05-17 22:29:10 So while it seems worthwhile to have folks thinking through the implications of current rules of war for autonomous weapons, I still think we shouldn't give up moving the needle towards no autonomous weapons.

2021-05-17 22:27:23 I have a responsibility to push my government not to approve arms sales to countries that bomb civilian targets: https://t.co/Z5dKAJEsB7 (From that article it seems unlikely that we can effect change here, but I'm still going to call my MoCs.)

2021-05-17 22:21:57 As a citizen of one of those major powers, even one among 350+ million, I have a responsibility to push for a government that *can* contemplate such a total ban, that *can* contemplate approaches to peace (and security) building through means other than an arms race.

2021-05-17 22:20:43 This is definitely consonant with the value sensitive design "better not best/progress not perfection" idea, and yet, I cannot bring myself to agree.

2021-05-17 22:20:07 5. Also: some autonomous weapons are defensive

2021-05-17 22:19:55 1. We're stuck with war 2. Major powers (including the US) are investing in autonomous weapons 3. The major powers will never sign on to such a ban 4. So we're better off with something more nuanced that they can sign on to.
2021-05-17 22:19:00 Throughout the episode, Umbrello kept positioning himself in opposition to blanket bans on autonomous weapons, and when he finally got to explaining why, it seemed to come down to:

2021-05-17 22:17:19 @RadicalAIPod I was definitely with Jess and Dylan in thinking: what could it possibly mean to apply value sensitive design to autonomous weapons? My own value system is such that supporting human values in design means NOT designing more efficient ways to kill people.

2021-05-17 22:16:15 I have to admit, it took me a long time to work up to listening to this one, even though I know that @RadicalAIPod episodes are always rewarding. https://t.co/HyT5zCZhHV Some thoughts:

2021-05-17 20:23:12 RT @natschluter: Just incredible! @60Minutes and @andersoncooper seem to be demonstrating quite literally how to carry out the gender and r…

2021-05-17 20:18:51 RT @AJLUnited: Here are the "Face Queens" — @jovialjoy, @timnitGebru, and @rajiinio that @60Minutes erased during their episode on facial r…

2021-05-17 20:05:44 RT @UWGOV: @UW is hosting a walk-in vaccine clinic in Madrona Hall. Everyone who gets vaccinated will receive a Dick's burger! Open today u…

2021-05-17 19:52:07 RT @jovialjoy: The irony is I suggested they interview Patrick and include @NIST research. @60Minutes producers were not aware of the signi…

2021-05-17 19:51:54 RT @mattmay: Okay! Experts! Particularly white and/or male ones! We need to talk. If someone interviews you on issues of inclusion and bia…

2021-05-17 19:40:45 RT @mixedlinguist: Tweeps, I'm doing research! Spread the word? Professor Holliday (Penn) seeks volunteers for research on how people talk…

2021-05-17 19:33:44 @megandfigueroa @VocalFriesPod So much fun... thank you!!!
2021-05-17 17:33:15 Just found myself writing "NLP and other areas where ML is applied" and I liked that formulation enough that I had to share it :)

2021-05-17 17:11:02 A particular application or class of applications of ML is not doing what you're claiming and/or is unethical. https://t.co/TzX7TYL2Tw

2021-05-17 17:09:58 "Linguistics is relevant to #NLProc." https://t.co/TzX7TYL2Tw

2021-05-17 17:09:42 There are, in fact, infinitely many such things. But a few of them (surprisingly) seem to not be recognized as such, at least by certain segments of Twitter. It is those that I'll plan to document here.

2021-05-17 17:08:24 I feel like it's time to start a thread of things a person can say without meaning that there's no use for statistics/machine learning, even if she's a linguist.

2021-05-17 14:31:18 @EmmaSManning I'm not sure I can even find a non-sarcastic reading of your tweet ... which is why it was so discordant to see it quoted that way!!

2021-05-17 14:13:34 RT @rajiinio: By the way, if you're looking to see the absolute other side of the spectrum - a feature on algorithmic bias with Black peopl…

2021-05-17 14:11:53 RT @csdoctorsister: You’ve got your citations wrong @60Minutes. What are y’all doing. You’ve overlooked the seminal work of @jovialjoy @tim…

2021-05-17 13:58:49 This is infuriating. And if I were associated with @60Minutes or @andersoncooper I would be just mortified. Such blatant erasure. https://t.co/JagPJJCd5b

2021-05-17 13:56:25 RT @jovialjoy: @60Minutes producers spoke to me for many hours. I even spent additional time building a custom demo for @andersoncooper and…

2021-05-17 13:55:09 RT @natschluter: Very telling how @60Minutes chose to not #BelieveBlackWomen nor #CiteBlackWomen and wasted hours of @jovialjoy's time only…

2021-05-17 13:42:10 RT @rajiinio: I'm getting tired of this pattern. At this point, @jovialjoy has to spend almost as much time actively fighting erasure as…

2021-05-17 13:27:00 RT @timnitGebru: So strange to see this from @60Minutes. I was watching the episode and wondering how they didn't have @jovialjoy at least,…

2021-05-17 13:10:33 RT @_KarenHao: This work was not led by NIST. It was led by @jovialjoy, @rajiinio, and @timnitGebru. Really disappointed to see this @60Min…

2021-05-17 13:10:25 RT @mutalenkonde: Wait they actually spoke to @jovialjoy that is terrible her work with @timnitGebru was foundational to people discussing…

2021-05-17 13:07:57 @timnitGebru @_KarenHao @techreview Yeah, @_KarenHao sets a really high standard! Few can meet that bar, but it's surprising to see the same publication providing such credulous articles.

2021-05-17 13:03:37 @srchvrs From a user's perspective, there's a world of difference between extracted snippets and generated ones, even if the latter are accurate: with the extracted snippet I can go find THE SAME TEXT on the page it came from and read it in context.

2021-05-17 13:00:46 RT @emilymbender: From the headline, my first thought was: On top of all of the other problems we document in the Stochastic Parrots paper,…

2021-05-17 12:42:19 @jftaveira1993 Woah -- that was fast. Also, @EmmaSManning ... I think the author of that piece didn't understand the sarcasm in your tweet???

2021-05-17 12:34:03 @dayyansmith Excellent!

2021-05-17 05:19:48 @srchvrs Snippets largely suck but they at least come with actual links to web pages in the search results. Doing them with seq2seq is a terrible idea. Read my whole thread before mansplaining at me please. Bye.

2021-05-17 04:58:26 @srchvrs *generated* citations are not metadata.

2021-05-17 04:47:52 @srchvrs I read their paper and no they do not. They propose to train the model to generate citations, which is not the same.

2021-05-17 04:43:50 @heatherklus Hard agree! https://t.co/6MvKdBYmEz

2021-05-17 04:43:17 @srchvrs You could read my thread and get a little more info before jumping in with the assumption that I'm against "everything statistical". I'm not.

2021-05-17 04:37:34 I'm not sure how to end this thread, except to say that it's disturbing to see an org with the resources &

2021-05-17 04:36:10 With a claim like that, I'd expect to see AT LEAST some user study establishing that people a) would want that functionality and b) could use it reliably or a deep review of the relevant HCI and information science literature. The paper contains nothing of the sort.

2021-05-17 04:35:24 Finally, I'd like to observe how utterly flimsy the motivation of this paper is: https://t.co/70otlbz75z

2021-05-17 04:33:52 For more on that, see: https://t.co/z1F7fESFOn [Audio paper available: https://t.co/QzkCVPYOrk ]

2021-05-17 04:33:36 And just because someone claims a task tests NLU doesn't make it so.

2021-05-17 04:33:29 Just quickly: While useful scaffolding for compositional semantics, morphology and "grammar" aren't themselves meaning. Distributional information is a reflection of lexical semantics, but it also doesn't constitute NLU.

2021-05-17 04:33:06 The paper is also rife with the misleading claim that LMs are "understanding" anything, as well as astounding naivete about what understanding natural language entails: https://t.co/qOv8K4tlOj

2021-05-17 04:30:32 The paper actually points out this problem, and a host of others. But then instead of recognizing these are reasons NOT to do this, takes them as "research challenges" instead: https://t.co/ayGHVZZuVx

2021-05-17 04:29:16 Their proposed solutions to grounding answers in sources are various ways of learning associations and documents, and then prompting the system to output document IDs in the strings it produces: giving it license to make up docIDs but present them as the source of the information

2021-05-17 04:28:23 The paper is framed as presenting a research area: "a unified model-based approach to building IR systems". https://t.co/GzqdMZA9pj

2021-05-17 04:27:51 How might they be solving this problem? They aren't. They're basically assuming a solution and then talking about what *could* be done if it (and solutions to a bunch of other problems they identify) were to exist.

2021-05-17 04:27:36 So of course I was curious to see what the researchers (Metzler et al) proposed to do, to link the sources to the output of an LM, and went to read the arXiv paper in the hopes of finding out: https://t.co/Q0ff2p6cuX

2021-05-17 04:27:19 It was somewhat comforting, therefore, to see this in the Tech Review piece: https://t.co/kP06b0rn3h

2021-05-17 04:26:48 So the last thing we need is even more distance between the answer provided by the search engine/QA system and the ultimate sources of those answers. And that's even before considering that LMs generate text with no understanding nor any grounding in communicative intent.

2021-05-17 04:26:32 When search engines instead try to "answer questions" by presenting snippets of text removed from their context, confusion can ensue, and can be harder to recover from---even for tech savvy users: https://t.co/FzXvRlg6yi

2021-05-17 04:26:23 By clicking through to the underlying documents, the human is in a position to evaluate the trustworthiness of the information there. Is this a source that I trust? Can I trace back where it comes from? Is it from a context that is congruent with my query?
2021-05-17 04:25:35 I don't see that as "a problem" at all, let alone "the problem". Modern search engines are an excellent example of human-in-the-loop: The human crafts a query, gets a ranked list of candidate documents to peruse, either finding what they're looking for or issuing a new query.

2021-05-17 04:24:23 From the MIT Tech Review piece, this really stood out to me: https://t.co/QgXTB21lMG

2021-05-17 04:23:44 From the headline, my first thought was: On top of all of the other problems we document in the Stochastic Parrots paper, this will also be terrible for information literacy. So, I read the MIT Tech Review piece and then the arXiv paper it's based on. https://t.co/dYQc1H9VPY

2021-05-16 21:54:15 How do I get an emoji mash-up between and ?

2021-05-16 20:34:58 @JCornebise @timnitGebru @mmitchell_ai Reading the paper now. Strong "step 3: profit!" vibes so far.

2021-05-16 20:17:25 "Building such experts would likely require developing an artificial general intelligence, which is beyond the scope of this paper."

2021-05-15 19:21:11 @ssshanest Hooray!!

2021-05-15 18:08:01 Seattle! Come to Lumen Field and get vaccinated! 12 and up and walk ins welcome. I'll be doing data entry. If we get enough walk-ins we can get to 11,000 vaccinated today. https://t.co/nvKn8WNTWu

2021-05-15 04:26:07 RT @NAACLHLT: A reminder that early conference registration closes in 6 days (May 20th 11:59 EDT). The price will subsequently increase $50…

2021-05-15 03:23:30 RT @sigtyp_: Olga Zamaraeva on Natural Language Processing and Language Variation Does NLP care about the range of language variation?…

2021-05-15 02:18:13 @timnitGebru Same!!!

2021-05-14 19:33:29 I'm not sure what it means to rank a list like this, but it is wonderful to see @timnitGebru 's leadership and general awesomeness recognized! https://t.co/x1rQX2xFW2

2021-05-14 19:24:16 RT @sigtyp_: Thanks so much to @OlgaZamaraeva for giving the inaugural lecture in SIGTYP's Lecture series. Check it out here: https://t…

2021-05-14 15:49:08 @dialect @LothianLockdown @accentbias @EdinUniLEL @SchoolofPPLS @benmolineaux @anghyflawn @wataruu @jurafsky That looks like a really cool paper! Thank you for the flag :)

2021-05-14 15:43:27 RT @_alialkhatib: hi! i recorded a talk about my paper at #chi2021 - "To Live in Their Utopia: Why Algorithmic Systems Create Absurd Outcom…

2021-05-14 14:54:39 @TaliaRinger Hard same.

2021-05-14 00:37:30 RT @sigtyp_: #SIGTYP lecture series: May,14th 16:00 UTC Olga Zamaraeva (and her amazing cat!) will present her work "Assembling Syntax: Ty…

2021-05-14 00:02:21 @kirbyconrod Spent *loads* of time at elementary chess tournaments when my kids were small. Eating pieces that the other player was touching ... ew.

2021-05-13 19:23:45 RT @AllysonEttinger: Had a nice chat with Sam from the TWIML AI Podcast about my group’s research and themes from last week’s ICLR panel on…

2021-05-13 18:54:21 @LiljaOvrelid @NoDaLiDa @lspecia @adinamwilliams (Was kind of holding out hope for Reykjavik time....)

2021-05-13 18:51:08 @LiljaOvrelid @NoDaLiDa @lspecia @adinamwilliams I was afraid that might be the answer ... I might catch the tail end of a couple of days then.

2021-05-13 18:47:20 Congrats to @ananya__g @strubell @andrewmccallum !! https://t.co/ebk8e3XOvk

2021-05-13 13:11:36 RT @NoDaLiDa: Participation at #NoDaLiDa 2021 will be 100% free of charge! Register at https://t.co/L1froAgC6R and check out the list of…

2021-05-13 13:11:33 @NoDaLiDa @lspecia @adinamwilliams Trying to work out if I can attend anything live ... what's the time zone for the conference program?

2021-05-13 12:48:25 @LiamDING1 @_Sharraf They are two books in a series :) If your university library subscribes, you can download the ebooks (to keep!) for free, so check there first.
2021-05-13 00:34:22 @EmmaSManning @kirbyconrod @elinmccready Polite interruptions on Twitter are the best :)

2021-05-13 00:03:30 @AndyPerfors Any chance of sneaking in during the daytime while the kiddo is otherwise occupied?

2021-05-12 21:09:04 RT @TeachingNLP: 2 days of #TeachingNLP (June 10-11) !! 2 keynotes : @_inesmontani @adveisner 2 panels : @IAugenstein @emilymbender @yoavgo…

2021-05-12 18:43:25 RT @ZoeSchiffer: Google has *not* said it's doubling the size of the Ethical AI team. It's doubling the larger Responsible AI team, led by…

2021-05-12 18:05:21 RT @UWSchoolofLaw: TOMORROW: Join @uwcip and @TechPolicyLab for a book talk with @katecrawford, a leading scholar of the social implication…

2021-05-11 23:16:44 RT @stanfordnlp: We're excited to host Angelina McMillan-Major (@mcmillan_majora) from the University of Washington at this week's Stanford…

2021-05-11 19:22:37 RT @TechPolicyLab: Just TWO days away! On May 13, the Tech Policy Lab, along w/ @uwcip will co-host a livestreamed book talk with "Atlas o…

2021-05-11 17:39:32 @_KarenHao Oh and one more: Weizenbaum 1976 (_Computer Power and Human Reason_) has some good discussion starting on p203 including a critique of IQ, the notion that intelligence is one linear scale, and how that relates to stupid claims about AI.

2021-05-11 16:59:53 @yoavgo @GraemeHirst @dirk_hovy @mgalle Twitter is emphatically not the venue to discuss this type of policy. If you have a question, you know how to reach us via email.

2021-05-11 16:58:53 @_KarenHao Also: https://t.co/wKiNRN2AnW

2021-05-11 16:57:57 @_KarenHao This might be relevant: https://t.co/BXHliSAGz2

2021-05-11 16:32:13 @TaliaRinger Very small MPU (architecture tweak w/"SOTA" or application of someone else's architecture tweak to other task w/"SOTA") + disastrous culture of overwork from rushing to publish same before someone "scoops".

2021-05-11 00:06:08 @_Sharraf Enjoy!

2021-05-10 21:42:17 @TaliaRinger @IllinoisCS @plfmse Congrats, Talia!!
2021-05-07 21:07:48 Carlini setting up his talk as testing an @xkcd conjecture. #WELM

2021-05-07 18:32:25 @JesseDodge Possibly apropos: https://t.co/ILqdxzK5UL

2021-05-07 18:26:25 .@JesseDodge at #WELM panel #1 reflecting on how difficult it is to enact change at the community level, and asking for ideas on how to adapt incentive structures. Partly in response to: https://t.co/r6QWTiXsdX

2021-05-07 18:24:05 .@natschluter on #WELM panel #1 talking about the value of (presently devalued) work WITHIN #NLProc

2021-05-07 16:08:56 RT @WiNLPWorkshop: Three more weeks until our early (visa) deadline for the #WiNLP Workshop co-located with #EMNLP2021! Find out more about…

2021-05-07 04:26:28 @athundt @mmitchell_ai @timnitGebru @rajiinio @jovialjoy @mathbabedotorg These are all great questions, and I don't want to make light, but I'm also really enjoying the visual of papers driving around a desk (and running into things) because I keep misparsing that noun phrase....

2021-05-07 03:44:12 @k_ditya Normal and helpful context.

2021-05-06 14:00:05 @onurbabacan @Ozan__Caglayan If someone were seriously making that pt in the context of particular use cases, they would describe those use cases. The papers I'm complaining about (&

2021-05-06 13:02:49 @Ozan__Caglayan I didn't know about this -- thanks.

2021-05-06 05:04:04 What emerges as the most elementary insight is that, since we do not have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom."

2021-05-06 05:03:53 They cannot be settled by asking questions beginning with "can." The limits of the applicability of computers are ultimately statable only in terms of oughts. >

2021-05-06 05:03:31 What I conclude here is that the relevant issues are neither technological nor even mathematical >

2021-05-06 05:03:17 They may even be able to arrive at "correct" decisions in some cases---but always and necessarily on bases no human being should be willing to accept. There have been many debates on "Computers and Mind." >

2021-05-06 05:03:01 More Weizenbaum (1976, p.227): "Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they _ought_ not be given such tasks. >

2021-05-06 03:42:38 RT @TeachingNLP: ... the deadline to apply for this great #naacl2021 initiative is May 6 ... https://t.co/lOvvh9aK51

2021-05-06 00:22:51 I guess grandiose statements from AI bros are really nothing new.

2021-05-06 00:22:27 >

2021-05-06 00:22:14 As Professor John McCarthy, head of Stanford University's Artificial Intelligence Laboratory said, "The only reason we have not yet succeeded in formalizing every aspect of the real world is that we have been lacking a sufficiently powerful logical calculus. >

2021-05-06 00:21:44 I've been reading Weizenbaum's _Computer Power and Human Reason_ (1976) and it is absolutely full of gems. Making me lol just now, from pp200-201:

2021-05-05 19:43:01 @gradientpub h/t @ruthstarkman for the link!

2021-05-05 19:41:44 @gradientpub That is, asking students, for a given #NLProc product, to think about what specific uses they would list in such a clause & 1) Hard-to-decide corner cases 2) Missed undesirable uses 3) Prohibited good uses

2021-05-05 19:39:59 @gradientpub In particular, the points about making restrictions specific, to facilitate both compliance and enforcement suggested a possible #ethNLP exercise to me: https://t.co/un0olXJojw

2021-05-05 19:38:38 I found this essay by Christopher Moran in @gradientpub really thought-provoking. https://t.co/rgxySkJeYr >

2021-05-05 17:52:28 @RainKotut @UW_iSchool Welcome!!!

2021-05-05 13:01:48 @kirbyconrod @VasundharaNLP @VerbingNouns https://t.co/KtUgn6ONQ2

2021-05-05 00:51:07 @kirbyconrod @VerbingNouns Here's my shortest known path: https://t.co/RI6NjG6Hh9

2021-05-05 00:36:22 @kirbyconrod @VerbingNouns For some of us at least! My Erdős number is 4 :)

2021-05-05 00:02:11 @soldni Apparently this _Le Monde_ article translated it as "perroquets chanceux" aka "lucky parrots" (below the paywall fold)... so ask for a raise => https://t.co/SPeDPiMbgn

2021-05-04 22:16:08 @rcalo What about dad-joke immunity?

2021-05-04 20:01:28 @EmmaSManning Future work :)

2021-05-04 19:17:39 RT @inlgmeeting: We're delaying the INLG submission deadline and allowing submission via ACL Rolling Review (@ReviewAcl) as well! Updated…

2021-05-04 16:25:25 @brendan642 You quoted my tweet --- you were either talking *to* me or *about* me, and (apparently) complaining that I was pushing back against claims that linguistics is irrelevant. at-mentions are clearly discursive moves and it is a weird flex to pretend that you don't know that.

2021-05-04 15:40:54 @ggdupont @slippylolo @LightOnIO @InriaParisNLP We cite this &

2021-05-04 15:39:55 @ggdupont @slippylolo @LightOnIO @InriaParisNLP See also the proposal of data statements, which is a call for specific documentation of NLP datasets: https://t.co/9n9fXESIFg

2021-05-04 15:35:03 @brendan642 Perhaps I misread your first tweet then --- seemed like irritation at the discussion and so I wondered why you were directing that (only) at me.

2021-05-04 14:05:43 @meganmorrone Had about 36 hours of being sleepy and a little under the weather with Pfizer #2 during which time I knew I wasn't risking infecting anyone else bc the symptoms were just vaccine side effects. And then two weeks later: fully vaccinated

2021-05-04 13:55:34 @brendan642 Did you also reply to Chollet then, when he dumped all over linguistics, or just to me? And if so, why?

2021-05-04 13:04:54 @mervenoyann Uh, go tell that to all of the folks doing incredibly important work on the impacts of deploying CV?

2021-05-04 13:04:22 @brendan642 Not sure what you mean? I have nowhere said that statistical methods aren't relevant. All I've ever done is to push back against the claim that *linguistics* isn't.
2021-05-04 13:02:42 @mervenoyann Also, it's hardly gatekeeping to push back when the field with all the money says: we don't need any input from anyone else. This is all ours now. https://t.co/j7xbLJUP36

2021-05-04 13:02:14 @mervenoyann So I don't get to be at all disgruntled when a techbro with 200k+ followers says that my area of expertise is obsolete? I just have to be super polite and meek because CS has all the money? No thanks.

2021-05-04 03:36:26 https://t.co/9Kit58xmwI

2021-05-04 03:31:11 Also, I'd love to see ML/AI/CS folks in general get in the habit of finding &

2021-05-04 03:29:05 This take was infuriating enough, without adding the jab above. Techbros gonna techbro I suppose. Gah. https://t.co/fbkfc0Y2oh

2021-05-04 03:28:10 FFS. Just because most ML folks publishing in #NLProc conferences can't be bothered to actually learn about the application domain doesn't mean linguistics isn't relevant. https://t.co/HR8tULf3IR

2021-05-04 02:49:52 RT @WiNLPWorkshop: Reminder: the deadline for registration waivers for underrepresented countries and affinity groups is May 6! https://t.c…

2021-05-03 23:42:10 RT @begusgasper: We're hiring (@BerkeleyLing @UCBerkeley)! a two-semester appointment for one lecturer. Duties include teaching 2 courses:…

2021-05-03 20:46:37 @anne_e_currie IKR?

2021-05-03 19:45:21 @XandaSchofield You've done way too much in the first half. Perhaps they'll be impressed that you know how to say no and/or that you've done so much already.

2021-05-03 18:59:08 The article lays some of the blame at the feet of "sycophantic journalism", but I think that a big portion also goes to academics, other researchers, VCs, and entrepreneurs promoting #AIhype. https://t.co/PR55VcuMQz

2021-05-03 18:57:46 This contributes to: "Perhaps the greatest ethical issue is one that has received the least treatment from academics and corporations: Most people have no idea what AI really is. The public at large is AI ignorant, if you will."
2021-05-03 18:57:07 I think the term "AI" is obfuscating things here. I'd really like a more direct statement of what techniques are being used and how their outputs are being contextualized. Calling this stuff "AI" suggests an autonomy that isn't there.
2021-05-03 18:55:04 "Tim O'Reilly, the publisher of technical books used by multiple generations of programmers, believes problems such as climate change are too big to be solved without some use of AI."
2021-05-03 18:54:35 Finally, the article explores a few ways in which "AI" is being applied, in biomedical domains (looking for potential drug/disease combos to test), in mitigating climate change, and others.
2021-05-03 18:53:23 I think a better statement would be that one of the sources of risk in LLMs is that they are presented and perceived as engaging in meaningful discourse, when they don't. https://t.co/chapWn4gDt
2021-05-03 18:52:41 This makes it sound like we think if only we had AGI, there wouldn't be any problems. I couldn't disagree more. https://t.co/chapWn4gDt
2021-05-03 02:02:55 @histoftech Sorry for the false alarm. I, too, am glad that it's not an additional source....
2021-05-03 01:51:52 @histoftech Back cover is blank inside and the page facing that is the last page of the index! So, I think this is just an old print run.
2021-05-02 14:03:21 @histoftech Ugh I'm so sorry. I just bought a copy from Powells and it also has the wrong info (corrected by hand now). But maybe this really is an old print run still? https://t.co/tHdTEbLu7f
2021-05-01 14:52:38 "Yes there are still people doing HPSG" https://t.co/61eqM7Psqg
2021-04-30 23:50:32 RT @annaeveryday: interdisciplinarity is when you smoosh together two or more titles from different “types of papers” memes
2021-04-30 19:35:48 RT @NAACLHLT: Help make NAACL 2021 a success - apply to volunteer + get perks like free registration! Deadline: May 6. We need volunteers f…
2021-04-30 19:21:36 @kirbyconrod The face work fills up the space a bit and makes the absence of the "excuse" less obvious, but really: no explanation needed!
2021-04-30 19:21:10 @kirbyconrod The main thing, though, is that you don't actually have to say *why* you can't make it. Just that you can't. And I was just voicing what I read into your message about wishing you could...
2021-04-30 19:13:34 @kirbyconrod "I'm really sorry to have to miss this. I hope it turns out as amazing as it looks to be and that I'll recover from the FOMO."
2021-04-30 18:12:56 @amyjko @metageeky Reading to evaluate is so much harder for me than reading to learn. Maybe a strategy: read to learn and then evaluate what I learned?
2021-04-30 17:59:17 RT @rctatman: General question bc I've honestly reached semantic saturation on the term: how do you feel about a technical system being des…
2021-04-30 15:44:55 I've been doing some of this! Kind of a pandemic project, I guess, but likely one I'll continue (time permitting) https://t.co/4C8ZjOaA3i ... probably not papers of interest to the OP, but I thought I'd share anyway :) https://t.co/rJpT58pK0r
2021-04-30 13:27:01 RT @ReviewAcl: Want to be the face behind this account? We have an opening for a PR lead. Responsibilities: maintain website
2021-04-30 12:43:56 RT @ReviewAcl: We have added a dates page: https://t.co/8NFBhFC6Lz Authors, refer to this page for a list of ARR participating venues and…
2021-04-30 00:05:10 RT @KCPubHealth: Looking for a COVID-19 vaccine in King County? Look no further! With more vaccine supply and plenty of appointments, get…
2021-04-29 19:13:25 RT @QueerZoomer: NAACL 2021 will waive registration for all authors from underrepresented developing countries and affinity groups with acc…
2021-04-29 17:39:52 @VocalFriesPod This is making me think of synesthesia ... a very specific, food-based synesthesia.
2021-04-28 19:29:45 RT @mathbabedotorg: Whenever I come across a new technology I ask myself, for whom does this fail?
2021-04-28 19:02:04 RT @csdoctorsister: Did a guest blog on the critical infrastructure need: broadband. https://t.co/K8oLRukDi6
2021-04-28 05:51:04 RT @amandalynneP: Calling all ACL Student Research Workshop alums! Participate in our retrospective for the 30th anniversary of @acl_srw.…
2021-04-28 02:50:33 Starts in 10 minutes! Check out @tobysmenon 's microtonal composition. I can't wait to hear it :) https://t.co/y9a4NyAvWS
2021-04-27 21:14:33 RT @_vsmenon: @tobysmenon 's piece tonight will be in a 19-tone scale instead of the standard 12-tone one. https://t.co/vPxHlkY8Sf
2021-04-27 19:41:15 @ShannonVallor What I noticed about a year or so ago is that talking in terms of "ethics" seems to invite people to theorize how to handle the competing needs of groups that are fundamentally on an equal footing ... which rarely describes what we're trying to deal with.
2021-04-27 17:06:30 @alexhanna Awesome move that I'll have to remember. Also the replies to a query in response to that tweet about the Puget Sound are awesome https://t.co/EX5DSS8aYh
2021-04-27 16:48:06 RT @ReviewAcl: We are sending invitations out to action editors and reviewers. And we have *submissions*!! Thank you for volunteering, resp…
2021-04-27 15:48:05 #proudmama moment! Here's the info for @tobysmenon 's second concert as a composition major at @UCLAalpert Happening in just over 11 hours :) https://t.co/y9a4NyAvWS
2021-04-27 15:47:02 RT @tobysmenon: @UCLAalpert Spring Undergrad Composition Concert premieres tomorrow at 8pm pacific! https://t.co/wk2BVigSiH
2021-04-27 15:46:45 Got opinions about how virtual formats work & The link took me to a Twitter warning about spammy links, but then worked when I continued. #NLProc https://t.co/jp9vlqDeAh
2021-04-27 15:43:15 RT @NAACLHLT: You can help determine an effective format for the virtual NAACL 2021! You can provide quick feedback on options or volunteer…
2021-04-27 15:34:37 @_m_libby_ Oh heavens! A poor choice! Do you still fret about singular uses of you when it should be thou?
2021-04-27 15:30:33 @_m_libby_ @StefanoCoretta Oh: And I'm very much willing to bet that you already have singular 'they' in your usage. For example: "Oh no, look at that warm coat that someone left here. ___ must be chilly right now."
2021-04-27 15:28:55 @_m_libby_ @StefanoCoretta a) Singular 'they' in this kind of usage goes back to Chaucer. b) If you don't like that, you can pluralize the antecedent c) If you don't like that, you can use 's/he' or 'he or she' though these are less inclusive than 'they'. It's not actually hard.
2021-04-27 13:38:35 @monojitchou Interesting self-awareness on the part of the email writer! But yeah, assuming a newborn to be male seems like the most extreme possible case of "the default human is male".
2021-04-27 13:08:52 @jtmuehlberg Oh definitely. I always (well, almost always) include a section called "typos/stylistic suggestions" or similar (which obvs doesn't impact the accept/reject recommendation). It goes there.
2021-04-27 05:46:25 @StefanoCoretta I do plan to point it out constructively in the review. I’m putting the ranting on Twitter instead.
2021-04-27 05:32:47 @StefanoCoretta There's no excuse for a grown up who can write clear academic English and is a linguist no less to be naive about the problems of generic masculine in 2021.
2021-04-27 05:10:51 @TimoRoettger Wait -- aren't there both masculine and feminine forms of reader in German? (Leser/Leserin?) So what do you mean 'reader' is masculine?
2021-04-27 04:59:41 @TimoRoettger It's exactly analogous to accidentally using a swear word because of false friends or whatever ... in contexts where you really didn't want to swear. Your listeners will likely have a visceral reaction.
2021-04-27 04:58:32 @TimoRoettger There's all kinds of mistakes that one can make as an L2 speaker which are just grammar mistakes. But the ones that land on top of existing patterns of sexist language use ... land on top of existing patterns of sexist language use.
2021-04-27 04:54:13 @TimoRoettger Sorry, but my reaction to encountering "he" for generic humans is visceral. I'm sharing that reaction here. Learn something from it if you like.
2021-04-27 04:47:43 @TimoRoettger So avoiding this use of 'he' falls into the same category of avoiding words that you know to be rude. Because it is. If the rest of the writing weren't so fluent, I'd cut these authors some more slack. (Though I am now guessing German L1, from the abstract.)
2021-04-27 04:46:23 @TimoRoettger And not masculine in the sense of grammatical gender of the antecedent, because English doesn't have grammatical gender like German/French/etc do. It means I perceive that person to be male.
2021-04-27 04:44:59 @TimoRoettger Just so you know, for certain English readers, it is EXTREMELY off-putting to encounter this, especially in modern text. If you're referring to a generic person, use they. The English word 'he' means *specifically* masculine. Always has, actually.
2021-04-27 04:43:59 @RPKarlinguist It's not ... that would have been slightly more palatable. (And the actual example wasn't in reference to the reader of the abstract, but rather talking about speaker/hearer/reader as interlocutors.)
2021-04-27 04:36:07 @RPKarlinguist It's particularly striking because it's been a while since I've seen this in any recent writing. (I'm currently reading Weizenbaum's book and it's EVERYWHERE there, and really annoying, but that was written in 1970-something.)
2021-04-27 04:35:08 @RPKarlinguist It is. Dunno which one because anonymous review.
2021-04-27 04:34:43 @RPKarlinguist It's the not being aware of the discourse that I'm really skeptical of ... what I'm currently reading appears to be written by established academics (it's an abstract I'm reviewing).
2021-04-27 04:28:26 @RPKarlinguist Not realize or not care, I guess.
2021-04-27 04:23:41 @RPKarlinguist For someone to make it to 2021 and not realize that it's flat out rude to assume generic people are male...
2021-04-26 22:31:09 @AlvinGrissomII That's a super clear statement, but yeah, the people who need the info might not know to look for it on the syllabus. I guess the question is: would your .sig be any more salient?
2021-04-26 21:41:15 @AlvinGrissomII I said good idea, but I think it might be more effective to just say this directly to e.g. students and then leave it as an unspoken rule with others who email.
2021-04-26 20:25:03 It is appalling to watch as Google not only gave up their chance to continue to benefit from this talent, but also continues to try to denigrate the work, achievements, values of these two amazing scholars in the media.
2021-04-26 20:23:57 .@mmitchell_ai and @timnitGebru are not only amazing scholars but also fantastic leaders. This thread from Meg sheds some light on what they were able to achieve. https://t.co/dSqOne2KkM
2021-04-26 19:43:29 @Etyma1010 @adelegoldberg1 I think this one only works if "synonymous expressions" means "sharing exactly all their meanings". Otherwise, it's quickly taken down by the ways in which composition can draw out different senses.
2021-04-26 19:38:38 @mer__edith Meaning they probably got away with it lots of times. All the more reason to make some noise! And I'm sorry this happened to you.
2021-04-26 19:36:59 @mer__edith Ugh. I live in fear of doing this by accident (thinking something was my idea when actually I'd learned it from someone else), but giving credit once it's been pointed out is the easy part!! And not doing so suggests it wasn't any kind of honest mistake.
2021-04-26 19:35:58 @mer__edith @profhoff You shouldn't have to be cagey! This is all on "sr scholar". I'm glad you have receipts.
2021-04-26 17:39:41 RT @NAACLHLT: NAACL 2021 is calling for organizations/individuals to host affinity groups socials. Socials are a great way to build communi…
2021-04-26 17:39:29 RT @NAACLHLT: If you considered attending NAACL but are not sure if you can because of financial hardship, caring duties, or because you ne…
2021-04-26 15:52:45 RT @dialect: We are hiring a sociolinguist!!! ('Lecturer' is like Asst Prof, but *with* tenure.) Go to: "Click here for a copy of the fu…
2021-04-26 14:48:30 @AutoArtMachine Plurality might be a good word for what I'm thinking of, true. The point is that accessibility requires thinking about scale but in a different way.
2021-04-26 14:17:00 Inclusive generalization makes room for &
2021-04-26 14:15:21 I think the difference is in the approach to/reason for generalization: extractive generalization tries to look for "one clever trick" that scales so as to maximize the possibilities of extraction.
2021-04-26 14:13:49 But accessibility first means keeping focus on another kind of generality: designing something so that it is accessible to all.
2021-04-26 14:12:57 I have been advocating for a turn away from "general" and towards specificity: understanding the particular problem to hand, understanding the particular context of deployment etc. (/waves to @rajiinio @amandalynneP @alexhanna @cephaloponderer ...)
2021-04-26 14:10:58 I really enjoyed this @RadicalAIPod episode with @clb5590 Sticking with me most is @clb5590 's discussion of her work on conflicting needs in deploying ML for accessibility. https://t.co/sCHjdPlsVb
2021-04-26 13:10:18 Full talk can be found here: https://t.co/BHwE2pmOjm
2021-04-26 13:09:26 I'm reminded of what @mer__edith said at her talk at BU back in Feb: https://t.co/397t85flHB
2021-04-26 13:08:03 Link for NEH numbers: https://t.co/GyBx7Ji0gE
2021-04-26 13:07:42 FY2020 NSF budget for Social, Behavioral & FY2020 NSF total research budget: $6720m FY2020 NEH total research budget: $162m https://t.co/LpOMAmy0Ke I see a massive underinvestment in the research that will help us understand the social world. https://t.co/O3pIArMpzT
2021-04-25 20:46:37 RT @multilingual_s: On that note: If you considered attending NAACL but are not sure if you can afford it because of financial hardship, ca…
2021-04-25 14:47:54 @cainesap Not in the least --- though I did get to use my language skills yesterday :) And I probably do better than average at name pronunciation.
2021-04-25 03:10:34 Attn @dirk_hovy: the link I had to remove was to the recording of your #ACL2016 paper with Spruit.
2021-04-25 03:09:36 Apparently 2016 is affected as well ... thanks to @LucianaBenotti for alerting me to a link on one of my old course web pages. Worth searching for the domain (techtalks dot tv) on all pages you maintain, if you've ever linked to @aclmeeting videos. cc @ArneKoehn https://t.co/1UL7BnT3td
2021-04-25 03:03:47 RT @niais: Big bunch of vaccines headed Seattle way - if you aren't vaccinated yet, sign up to get notified when our big sites have appoint…
2021-04-24 21:53:29 Volunteer role: data entry
2021-04-24 21:53:03 King County is in the midst of a 4th wave. So grateful to have a way to help out (in addition to masking up and minimizing contacts) https://t.co/Uq4syvyASX
2021-04-24 18:40:38 @strubell Thanks for the heads up. I'm relieved to note that I don't seem to have been recorded at an ACL event in 2013 or 2015.
2021-04-24 18:16:23 RT @strubell: PSA: this means that if your personal website links to one of these old videos, your website now links to porn. Probably a mo…
2021-04-24 17:47:34 RT @emilymbender: As a follow up, what do you think the ratio of NEH funding to NSF behavior &
2021-04-24 17:47:29 RT @emilymbender: Fellow USians, at a guess, what % of the NSF's research budget do you think goes to behavioral &
2021-04-24 03:25:55 @setlinger Kind of an evergreen tweet, I'd think, but usually prompted when I hear someone talking about (or quoting others talking about) "AI Winter" as if it were the main worry...
2021-04-24 03:18:28 RT @KathyReid: Great interview with @emilymbender and @mmitchell_ai with the @twimlai podcast talking #StochasticParrots - and the themes t…
2021-04-23 22:09:54 As a follow up, what do you think the ratio of NEH funding to NSF behavior &
2021-04-23 22:08:02 Fellow USians, at a guess, what % of the NSF's research budget do you think goes to behavioral &
2021-04-23 21:38:48 @SAB0920 @Mixed_jpg @ProRoMo @NYT_first_said "The poster" here = a bot that catches words that appear for the first time in the NYT.
2021-04-23 19:57:31 @yuvalpi Looks like @ojahnn would use sublivetweeting for what I was actually doing though: https://t.co/7BRSPagHhi
2021-04-23 17:55:43 @vinayprabhu How is that claim substantiated? How could it be substantiated? How many AI researchers, looking to show off what their system can do, would even know to look at it critically?
2021-04-23 17:54:50 @vinayprabhu Clicking through, btw, I see no mention of the language (presumably English?) and this wishfully mnemonic statement: "By necessity, correctly answering all questions requires understanding the meaning of the text, common sense reasoning, and a human-like emotional intelligence."
2021-04-23 17:53:34 @vinayprabhu I don't have time right now to look too deeply, but on a quick skim, I have to say I'm skeptical. The framing of this task, for example, suggests that an "AI" *might* be "capable of making the right ethical decisions." https://t.co/c5i5ZJKSFJ
2021-04-23 17:36:52 @MelMitchell1 (I guess until the techbros get ahold of @emanlapponi 's idea and manage to convince themselves/the media/the VCs that they are actually predicting the future....)
2021-04-23 17:35:30 This is such a good idea. Picking up on @MelMitchell1 's #EACL2021 keynote today and her call out to McDermott's (1976) points about "wishful mnemonics", using seems just right. It doesn't draw "wishful" comparisons to human brains/intelligence and is also appropriately silly. https://t.co/nT6KM3gmnq
2021-04-23 17:26:48 @vinayprabhu @MelMitchell1 I guess what I'm looking for is: how do the tasks change if the framing isn't "measure progress" but "deflate hype"? I think a lot of our current tasks are designed around what seems almost possible, and then overinterpreted due to "wishful mnemonics".
2021-04-23 16:36:40 @elinmccready @ToddTheLinguist @FeoUltima Seems like a similar phrase to "diverse person"
2021-04-23 15:43:31 @MelMitchell1 Alternatively: Can we set up experiments that attempt to show lack of e.g. common sense rather than progress towards it?
2021-04-23 15:43:19 @MelMitchell1 Re "how to measure progress towards general intelligence": it seems that any such evaluations presuppose the possibility of such progress. Does avoiding the fallacy of wishful mnemonics require breaking that presupposition, and if so, what ideas do you have about how to do so?
2021-04-23 15:42:21 Generally I like the set up of questions in chat/moderator brings them to the speaker, but there's definitely an art to asking the questions effectively in this medium, and maybe limits on subtlety. @MelMitchell1, here's the question I was asking:
2021-04-23 15:40:55 @_rabiulawal The transcription system can't "know" how right it is sometimes...
2021-04-23 15:35:55 RT @_roryturnbull: Yeah I guess technically English has reduplication but it's not reduplication reduplication
2021-04-23 15:14:24 https://t.co/dO8ICwqWIV
2021-04-23 15:12:46 To be clear, I'm #livesubtweeting because Melanie Mitchell's #EACL2021 keynote is coming across as a giant subtweet of "AI". Love this:
2021-04-23 15:06:54 #livesubtweeting
2021-04-23 15:06:47 Also, do people making claims about projected "AI" achievements not watch the talks / read the articles gathering all of the earlier predictions &
2021-04-23 15:04:55 The risk of another "AI winter" is far, far down the list of risks raised by AI hype.
2021-04-23 02:58:11 @yuvalpi @TwitterSupport I haven't tried to request verification...
2021-04-23 02:26:41 @TwitterSupport Not gonna click on anything here, because it looks sus, though the URL under that button does start with https://t.co/IevsIJkNJT. OTOH, this bit of the header info is odd: Received: from https://t.co/quzHrJx5Rm (https://t.co/quzHrJx5Rm. [199.16.156.145])
2021-04-23 02:17:30 Uhh @TwitterSupport any reason I should be getting an email like this now, for an account I created in (checks notes) 2010? https://t.co/wlphykoXNC
2021-04-23 02:05:33 @alexhanna :(
2021-04-23 01:12:16 @yoavgo @Miles_Brundage @timnitGebru Oh, yes, environmental racism is CLEARLY orthogonal to any discussion of climate impacts. Seriously, Yoav -- if you're going to gas on about any of this, could you at least do me the courtesy of untagging me in the discussion?
2021-04-22 20:12:39 Or rather, that's the damage I imagine Google might care about in terms of PR-style "damage control". It doesn't even touch the damage of firing researchers: to those fired and to the people on their team.
2021-04-22 19:40:35 If this was "damage control" it seems completely mis-aimed. The damage of the fallout from how Google handled our paper was to their reputation as a place where people can do good research (and as a trustworthy company). https://t.co/Bpg2KjGXOE
2021-04-22 19:02:47 RT @gleemie: Exciting jobs alert! U Michigan is hiring a cluster of professors who work on racial justice in science and technology policy,…
2021-04-22 18:40:37 @colingorrie @mayhewsw I think there could/should be similar books for phonetics &
2021-04-22 18:40:07 @colingorrie @mayhewsw Thank you! Btw, I wrote (together with Alex Lascarides) a second volume: Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics https://t.co/7fSWxKPNd6
2021-04-22 15:53:18 The live captions at #EACL2021 have the odd property of displaying *before* the speaker has said (roughly) the words written. I guess we have a tiny tape delay of some sort?
2021-04-22 15:50:54 This doesn't seem like a necessary correlation to me. @ojahnn, Zhiyuan and I organized a large, multilingual live-tweeting effort for ACL last year... https://t.co/YY0H9eDB78
2021-04-22 15:23:11 @mdekstrand @ShlomoArgamon I get 3 (not 4, which is also what I think mine is) but one of the links looks likely to be a name disambiguation error.
2021-04-22 04:58:55 [Citation needed]
2021-04-22 04:55:53 RT @KLdivergence: @math_rachel While I disagree with the perspective that a solution must be offered in order to point out a problem, it’s…
2021-04-22 04:42:14 @KristinHenry @timnitGebru Maybe I should have skipped over "engineering mentality" and gone straight for "tech solutionist mentality", actually.
2021-04-22 04:39:25 @KristinHenry @timnitGebru The leadership of a company that has amassed as much power as it has, can't possibly be a force for good if it's primarily constituted of folks focused on a) amassing $$ and/or b) gee-whiz tech "solutions".
2021-04-22 04:38:23 @KristinHenry @timnitGebru I picked "engineering" as the term because I was thinking in particular to contrast with the kinds of scholarship that are required to understand how technology impacts people (through interaction with the social world).
2021-04-22 04:37:33 @KristinHenry @timnitGebru Yeah, "engineering" might not be the right word here: maybe it's just CS. The only thing worth attention is "solutions" and the only point of a problem is to motivate a solution.
2021-04-22 04:05:29 @LeonDerczynski @Miles_Brundage @timnitGebru I'm really struck by the juxtaposition of "Remarkably, the choice of DNN, datacenter, and processor can reduce the carbon footprint up to 100-1000X." and "the most sustainable energy is the energy you don’t use" As if ML (incl LLMs) is some necessary activity?
2021-04-22 03:53:00 @Miles_Brundage @timnitGebru Also surprising that they don't cite Henderson et al 2020 https://t.co/0zWhlQclmj
2021-04-22 03:51:28 @Miles_Brundage @timnitGebru Well, our paper is not on arXiv, only in the ACM Digital Library, so I guess it doesn't count? ¯\_(ツ)_/¯
2021-04-21 22:32:49 @TaliaRinger But *still* there's a difference between holding that opinion and telling it to the media, with apparently no concern for how it reflects on UW (CSE &
2021-04-21 22:31:47 @TaliaRinger I think the problem here though is the jump from "acknowledging that someone made a mistake" to "seeing that person as flawed", which is probably exactly the hero worship you're talking about. Not: he can't be flawed, but he can't possibly do anything wrong.
2021-04-21 22:29:03 @TaliaRinger Based on previous (email) interactions I've had with Ed, I really doubt it. At any rate, I'm not interested in giving it a try.
2021-04-21 21:10:20 @mervenoyann Yay! (For some reason, the second one seems to be finding less of an audience than the first, even though semantics &
2021-04-21 21:07:02 @mervenoyann Thanks! Did you also find the second volume in the series? Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics https://t.co/7fSWxKPNd6
2021-04-21 21:06:06 @sreekotay @timnitGebru @mmitchell_ai @UpFromTheCracks (I think you meant me, though, and not @\EmilyBender who is probably thoroughly sick of being confused for me.)
2021-04-21 21:04:53 @cynthiablee We also have a whole section on "paths forward" in the paper (Sec 7). But, guess what, those are about process solutions, not "technical" solutions, so I guess they don't count?
2021-04-21 21:03:40 @cynthiablee
2021-04-21 20:45:56 I just have to add: I'd have assumed that a more pragmatic approach to the AI Ethics group would involve, you know, listening to what they have to say? That seems *pragmatic* to me. https://t.co/GNNnip7w3Q
2021-04-21 20:44:39 @timnitGebru @mmitchell_ai That means so much, coming from you two --- who are amazing advocates!
2021-04-21 20:41:12 @GooberPyle5 @timnitGebru @mmitchell_ai
2021-04-21 20:06:08 RT @emilymbender: You can't know to look for solutions if the problems aren't well explored. If you tell those with the expertise to surfac…
2021-04-21 20:05:03 To finish this unwieldy thread: go read the whole article. @NicoAGrant @dinabass and @josheidelson did excellent reporting https://t.co/6zyQewijYp
2021-04-21 20:03:10 But the Lazowska quote comes off as centering Dean as if he is somehow the victim in this story who needs defense against attacks on his reputation. Classic DARVO.
2021-04-21 20:01:22 @timnitGebru @mmitchell_ai Any given individual's personal qualities are only relevant insofar as the story is about how the incentive structures (at Google, at other tech cos, at big companies) can lead "even good people" to make the kinds of harmful decisions that management at Google made.
2021-04-21 15:22:45 @SlidesLive And to @eaclmeeting or whomever is designing the virtual conference, please have a look at this. This is what I see when I click on "plenary" in the schedule page. Which "join" link do you think is most apparent to a user in this case? https://t.co/JkQdLpErVp
2021-04-21 15:19:57 But serious suggestion for @SlidesLive there is NO REASON for the recording of a live stream to include 15 minutes of "starting soon" at the beginning. Who does that serve? Can you really not manage to avoid recording that as part of the live stream recording?
2021-04-21 15:18:49 No good clues that I should scroll down further rather than clicking "join livestream". And it's before breakfast here so I'm a little short on patience....
2021-04-21 15:18:14 Okay, sorted. The problem is that the "plenary" block on the schedule page has no information about what this plenary is, and when I click on it, it takes me to a page that is scrolled down to center Marco Baroni's talk (which was actually in the middle of the night, here).
2021-04-21 15:17:13 @maury_green @SlidesLive Thank you.
2021-04-21 15:10:25 @maury_green All I can find there is the stupid @SlidesLive page which seems to start with 16 MINUTES of padding and NO indication that it isn't *LIVE*.
2021-04-21 15:06:35 Hey folks at #EACL2021 --- does the plenary not start until 15 past or am I in the wrong place? Signed, grumpy in Pacific Time
2021-04-21 14:53:00 RT @ZoeSchiffer: I'm working on a larger project about how NDAs are used to encourage a code of silence in Silicon Valley. If you work in t…
2021-04-21 12:49:19 RT @barbara_plank: NoDaLiDa (the nordic #NLProc conference) is 100% free this year - Registration is open now: https://t.co/9m9VhLTNtQ K…
2021-04-20 18:13:22 @tttthomasssss @ojahnn Hmm, tried but I'm not sure I did it right, because it's not clear how to post a photo there...
2021-04-20 18:08:15 Is there no Pets-of-EACL channel in the #EACL2021 chat yet, or do I just not know how to find it?
2021-04-20 18:01:58 #EACL2021 panel on language diversity: @cigilt shouts out the Universal Dependencies project as infrastructure that has provided support to people working on language resources across many languages.
2021-04-20 17:52:01 @natschluter ... despite (what's perceived at least as) the prevailing attitude that primarily values model development. @natschluter calls on those with larger platforms to use them to push for such culture change.
2021-04-20 17:49:30 At the #EACL2021 panel on language diversity, @natschluter has all kinds of excellent things to say about the importance of culture shift towards valuing work on resource development, esp for non-English. This is difficult, rich, interesting work >
2021-04-20 17:40:52 @ojahnn Meanwhile I was too impatient to search for the to be my avatar.
2021-04-20 17:40:09 @ojahnn Also you hardly misidentified anything!
2021-04-20 17:30:56 At the #EACL panel on language diversity -- a factoid from an ACL anthology search. In some time period (?2020) BERT was mentioned more in paper titles than English.
2021-04-20 15:59:21 Dispatch from the #EACL2021 reviewing tutorial: Is feedback a sandwich? #linguistsgonnaling
2021-04-20 14:17:40 RT @KarnFort1: The EACL reviewing for NLP tutorial (2nd session) is starting NOW! #eacl2021
2021-04-20 14:10:21 At the #EACL2021 tutorial on reviewing (T5) -- looks to be really interesting and interactive!
2021-04-20 13:54:47 @LingMuelller So that's why I had so many new email messages this morning!
2021-04-20 13:49:50 Because clearly that's what I'm about: "Enhanced language models --- now with ethics!™" In other words, it wasn't even my expertise they wanted for free, but rather my newsworthiness. What vultures.
2021-04-20 13:48:53 Only just now really saw what they wanted me to speak on at their "private conference on language models, computer vision and societal and ethical concerns about AI innovations": my "specific work on enhancing language models". https://t.co/lcTQ9GXcZx
2021-04-20 05:08:15 RT @kalikabali: The EACL 2021 Diversity and Inclusion Committee is organising an event dedicated to language diversity on Tuesday, 20 April…
2021-04-20 00:39:46 @XandaSchofield
2021-04-20 00:34:03 @MarcWigan Yes full professor -- and TT since 2004. I finished my PhD in 2000. I'm willing to bet that I'm older than the average VC. And I lived in the Bay Area from 1991-2003 (and worked at two start ups), so I know something of Silicon Valley culture.
2021-04-20 00:13:02 @MarcWigan Just curious: how old do you think I am / how long do you think I've been faculty? What do you think my academic rank is?
2021-04-19 23:58:23 @MarcWigan I wondered how long it would take someone to ask me to please see the poor VC's point of view here Look, dude, I do lots of outreach. I do it where there is value to the community. And talking to a room full of people who could pay me for my expertise ain't it.
2021-04-19 23:34:53 @ilkedemir But yeah, I see your point, and I bet you're right.
2021-04-19 23:31:53 @ilkedemir Well, I wasn't actually inclined to do this one for money, either...
2021-04-19 23:24:29 @zehavoc Yeah, I would, except I'm not replying in that thread anymore.
2021-04-19 22:59:59 There's more: "I can understand your desire to maximize your profits, but I do not understand or accept your enthusiasm for insulting other experts who choose a different approach to life and scholarship."
2021-04-19 22:56:40 Predictably, he didn't like that and said "You are entitled to your pecuniary values and to your pursuit of maximum your earnings" and pointed out that he had never shirked from doing "pro bono work, including work for NGOS and universities in East Africa, and West Africa"
2021-04-19 22:36:14 That's all hypothetical. I just said no. And then when they wrote back to ask me to recommend someone else, told them a) it's not my job to organize their conference b) it's incredibly insulting to ask academics to speak for free to an audience of VCs.
2021-04-19 22:29:58 The time slot was apparently 20 min talk + 20 min Q&
2021-04-19 22:28:58 Just turned down an invitation to do a "pro bono" (their words!) talk to a private conference for VCs and CTOs. I'm all for making academic conferences widely accessible, including keeping costs low, but for an industry gig? No way.
2021-04-19 20:24:04 @ruchowdh Same :) on a Commodore 64 networked to a tape drive.
2021-04-19 19:09:22 @swabhz @CSatUSC @nlp_usc Congrats!
2021-04-19 16:20:35 What are the benefits? What are the costs/harms? To whom does each accrue? /fin
2021-04-19 16:19:31 But I think we should instead ask: why should we agree to the use of large-scale knowledge graphs of the quality that automation can produce?
2021-04-19 16:19:09 The article quotes Heiko Paulheim as saying: “Automation is the only way to build large-scale knowledge graphs.” ... which seems to presuppose that large-scale knowledge graphs are necessary and/or beneficial.
2021-04-19 16:17:33 But that's what DiffBot seems to be trafficking in: "Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. [...] the AI adds 100-150 million entities each month as new people pop up online, companies are created, and products are launched."
2021-04-19 16:16:38 And here we are with the typical IE use of the word "fact" which actually doesn't match the way the rest of the world talks about "facts". A set of words which, to a person who knows the linguistic system they belong to, expresses a proposition ... is not a "fact".
2021-04-19 16:15:42 Reading the rest of the para, it seems to be mostly about the mechanics of accessing text via a web browser, not about "reading", until the end: "... and uses NLP to extract facts from any text."
2021-04-19 16:14:15 "Reads... as a human would" suggests taking in the words, using internalized linguistic knowledge to map words + structure to standing meaning, linking that to hypothesized communicative intent, and furthermore locating the info with contextual cues to veracity, etc.
2021-04-19 16:13:02 We really, really, really need to get more precise in our language when talking about what "AI"s do. Exhibit #7613 from https://t.co/pvhh7A3dEf https://t.co/Yu5DLEEhwZ
2021-04-19 16:11:34 @KyleMorgenstein @timnitGebru Thanks for the ping, though reading that was rather infuriating... thread incoming.
2021-04-19 15:50:36 RT @ojahnn: I also like to create lists of papers by members of underrepresented groups in the #nlproc community. Here's that list for #eac…
2021-04-19 12:30:40 RT @timnitGebru: Again relevant reading for those who want to understand Google: by @KatlynMTurner @kanarinka and @space_enabled https:/…
2021-04-19 04:42:31 RT @rajiinio: This tweet & Not every internal…
2021-04-19 03:49:23 @timnitGebru You (and @mmitchell_ai and your team) have had it much worse than me --- I'm sorry for all you've had to go through. As awful as the harassment has been, the chance to work with you has been amazing!
2021-04-19 03:31:32 @elinmccready Read @timnitGebru 's thread for updates.
2021-04-19 03:28:56 @timnitGebru @JeffDean Me either. It's like: he has all the power and yet still can't be satisfied.
2021-04-19 03:24:12 This is so infuriating. @JeffDean somehow isn't satisfied with firing two top researchers, publicly trashing their (our) work, and sending trolls and harassers after all of us + members of their team who are still at Google. Nope, that's not enough.
https://t.co/x3gKJOciaM 2021-04-18 16:57:30 RT @aclanthology: The proceedings of EACL 2021 and its workshops are now available in the ACL Anthology: https://t.co/I2l9jD6CsK 2021-04-18 14:53:57 RT @LucianaBenotti: @emilymbender @NikhilKrishnasw @eaclmeeting #EACL2021 posted news on how to attend the conference in their website: htt… 2021-04-17 19:28:00 RT @chiara_sabelli: Per la newsletter @scinet_it di questa settimana ho intervistato @dirk_hovy linguista computazionale @Unibocconi sull'a… 2021-04-17 18:12:23 @zehavoc 2021-04-17 18:02:25 RT @hadyelsahar: @emilymbender @NikhilKrishnasw @eaclmeeting Don't know about EACL videos but our AfricaNLP workshop videos are up already,… 2021-04-17 16:15:02 @zehavoc Aha -- success! The "login with Congrex" button worked. 2021-04-17 16:14:15 @MadamePratolung @timnitGebru Thanks, @MadamePratolung "the scrapable internet" is such a great phrase, too. 2021-04-17 15:37:32 @zehavoc I don't have a password yet, it seems. Have you succeeded there? 2021-04-17 13:16:17 @zehavoc You might be one step ahead of me --- I've received no information about how to connect to the virtual conf. (And I'm 100% sure I did register.) 2021-04-17 02:09:43 RT @ComputerHistory: As a computer scientist and activist, @rajiinio has worked closely with @AJLUnited on several award-winning projects t… 2021-04-16 21:33:00 RT @arjunsubgraph: .@uclanlp is researching the harms of treating gender as binary in NLP tasks, as seen and experienced by non-binary folk… 2021-04-16 19:58:49 @em_zanoli @linguistiche "Joint attention is all you need": 2021-04-16 19:53:03 @NikhilKrishnasw I, too, am interested in the answer to this question. @eaclmeeting will we be hearing over email maybe? Will the videos be available soon? https://t.co/NA1KVkSlgd? 
2021-04-16 18:43:58 RT @mariadearteaga: The use of "ethical approach" in this description is a perfect example of why AI ethics cannot and should not be reduce…
2021-04-16 18:43:28 @alexhanna @linguangst "an ethical approach to verification" ....
2021-04-16 13:56:27 @marc_schulder When I first saw this in my mentions this morning I missed which account you were retweeting ... and the sad thing is the tweet wasn't entirely implausible on the other interpretation.
2021-04-15 23:03:15 @kadarakos @evanmiltenburg @KarnFort1 I have noted your suggestion for future work in this area, but no, I am not doing that work right now.
2021-04-15 22:57:37 @kadarakos @evanmiltenburg @KarnFort1 How you managed to collect your dataset while staying in compliance with local privacy laws, who your work might adversely impact, etc. ARE part of the research and should be part of what is shared.
2021-04-15 22:55:55 @kadarakos @evanmiltenburg @KarnFort1 If you're doing this just to stir up trouble then you are out here either to waste my time or to cast doubt on the project of the ACL (as a community) trying to grapple with the societal impact of our work. If that's your goal, I'd appreciate your saying it outright.
2021-04-15 22:54:33 @kadarakos @evanmiltenburg @KarnFort1 Academic papers are not educational material? Of course they are. When we write about research we are teaching others about what we learned in doing that research.
2021-04-15 22:53:51 @kadarakos @evanmiltenburg @KarnFort1 You don't need to include the full document. You could, for example, say that the research followed a protocol designed by the company to ensure the privacy of the people whose data is collected by doing thus &
2021-04-15 22:53:04 @kadarakos @evanmiltenburg @KarnFort1 I honestly can't tell if you are arguing in good faith here or just feeling super defensive for some reason or trying to stir up trouble.
2021-04-15 22:51:39 @kadarakos @evanmiltenburg @KarnFort1 This isn't a judgment of individuals' ethics (or "criminal background") but rather about bringing considerations of privacy etc. into the foreground of our own discussions about our research, because it matters and because our work has implications in these areas.
2021-04-15 22:43:33 @kadarakos @evanmiltenburg @KarnFort1 There's more to it than just review: If you affirm in writing something that isn't true, you are accountable for it.
2021-04-15 22:39:59 @kadarakos @evanmiltenburg @KarnFort1 I have agreed that it would be useful to more fully articulate the connection to the ACM CoE, but I don't believe anything in those guidelines strayed from it. On top of that, it is worthwhile to affirm in the writing that (&
2021-04-15 22:38:25 @kadarakos @evanmiltenburg @KarnFort1 Nor should the fact that some folks want to dismiss as "political" any work that challenges oppression mean that orgs like the ACL can't (and shouldn't) tend to the ethical implications of our work.
2021-04-15 22:38:04 @kadarakos @evanmiltenburg @KarnFort1 Surely when we talk about consideration for marginalized populations, we are in an area that gets politicized. But I am not aware of the term itself being a debated term.
2021-04-15 22:35:24 RT @mcmillan_majora: Hey #AcademicTwitter, does anyone know of any references in HCI and related fields for documentation design and the te…
2021-04-15 22:33:54 @evanmiltenburg @kadarakos @KarnFort1 Yes, absolutely a work in progress! I hope we'll have our blog post out about the NAACL 2021 iteration soon, which lays out our goals and provides some suggestions for what comes next. Just as with all other areas of scholarship, this will continually develop.
2021-04-15 22:32:32 @kadarakos @evanmiltenburg @KarnFort1 So you are both saying that this is unnecessary because it is already subsumed by your own local rules &
2021-04-15 22:30:13 @kadarakos @evanmiltenburg @KarnFort1 I appreciate the feedback that more explicit connections from the ACL author & https://t.co/rYSqqALQzE
2021-04-15 15:36:47 @evanmiltenburg My take: we (as a community) aren't necessarily yet well practiced in looking for these things, so having specific instructions (&
2021-04-15 15:31:36 RT @timnitGebru: Exactly. Our paper was approved through the internal approval process. Then we were mysteriously told to retract When I…
2021-04-15 15:24:22 @evanmiltenburg @KarnFort1 +1 to that! I find that ethical practice and research quality are almost always aligned, despite discourse that suggests that ethical considerations are somehow orthogonal to "science".
2021-04-15 13:16:20 It's important to name English not because it isn't obvious, but because it shouldn't be. https://t.co/RjmifsrSzR
2021-04-14 21:48:39 @rajiinio Hooray!!
2021-04-14 20:24:37 Looking forward to this! https://t.co/trCNe5lplN
2021-04-14 20:13:22 @mdekstrand Excellent thread -- thanks! Also "feeling ranty might delete later" is *chef's kiss* but please don't delete :)
2021-04-14 20:13:03 RT @mdekstrand: Why is critical race theory relevant to my work as a data scientist? Let's look at this picture
2021-04-14 20:07:14 @timnitGebru @radical_ai_ Thanks! I know you're always great about citing &
2021-04-14 20:01:27 I always get a little nervous when I'm working on a thread online (rather than drafting first) and people I really really respect start retweeting the first tweet(s) before I've finished. Like, I hope they'll still feel okay about that choice by the last tweet!
2021-04-14 20:00:35 @timnitGebru (I believe and hope that these aren't new questions, and that folks have thought about them already, just new to me because I've been relatively insulated from them given my own privilege and the fields I belong to.)
2021-04-14 19:59:55 @timnitGebru What does it mean to present work about shifting power at a conference where some of the audience or even other presenters are among those power is shifted *from*?
2021-04-14 19:58:35 @timnitGebru How can professors learn to value, nurture, and support work that charts paths towards shifting power away from us (and all profs have some power, but here I mean especially profs with various kinds of privilege)? Is it enough to believe in and value a more equitable future?
2021-04-14 19:57:13 @timnitGebru But even beyond training folks to recognize such work for what it is, there are also interesting questions about how we structure and manage our academic interactions.
2021-04-14 19:55:06 @timnitGebru Earlier in her talk, @timnitGebru spoke about how there's lots of coding bootcamps and similar, but no social science bootcamps. I was wondering what would go into a social science bootcamp, and here is one idea: a guidemap to how rigorous qualitative work proceeds.
2021-04-14 19:53:25 @timnitGebru Part of the work that those of us trained to see science as "the view from nowhere" and esp those of us benefiting from the current power distribution have to do is to learn how to appreciate rigorous, passionate, academic work that challenges power and charts paths forward.
2021-04-14 19:52:00 @timnitGebru When work is honest about being about power (and in particular about *shifting power* as @timnitGebru said), it can't be dispassionate. But folks who equate not-dispassionate with not-rigorous are trying to hide behind "the view from nowhere".
2021-04-14 19:50:16 Still thinking about @timnitGebru 's brilliant talk in #FutureisIntersectional at Spelman this morning, especially what she had to say about talking about power rather than bias &
2021-04-14 19:28:23 RT @emilymbender: #NLProc authors who *still* don't specify what language their experiments are on (poll):
2021-04-14 16:50:55 Proud advisor moment!! Both of Lonny Alaskuk Strunk's MS work and especially of where he's going with it. https://t.co/uoubsopfa5
2021-04-14 16:49:15 RT @UWlinguistics: Check out where Linguistics grads go! https://t.co/HKtdQGkb31
2021-04-14 01:33:25 @katecrawford But also: Scale doesn't just have to be "lots of users" or "lots of data". It can also be "responsive to many different kinds of experiences". Though that kind of scale is much harder to achieve, and only actually achieved with thoughtful grounded work.
2021-04-14 01:32:21 @katecrawford When studying the effects of some technology, what do those effects look like at different scales? (This reminds me of the practices in value sensitive design that encourage us to think about what happens if some new tech becomes pervasive.)
2021-04-14 01:31:29 @katecrawford The pressure to scale technology usually means assuming everyone is happy with an experience designed for those already in power, but I really like the other ways @katecrawford encourages us to think about scale:
2021-04-14 01:29:53 They weren't fooling --- a great episode indeed! I particularly enjoyed @katecrawford 's points about scale (and shout out to the Powers of Ten movie ... middle school nostalgia ho!). https://t.co/otl8Ant68i
2021-04-13 23:51:05 #NLProc authors who *still* don't specify what language their experiments are on (poll):
2021-04-13 22:07:15 RT @gneubig: One important notice about ARR: if you are running an ACL-affiliated conference or workshop, you can use ARR to accept submiss…
2021-04-13 21:25:58 RT @sigtyp_acl: Check out the April 2021 edition of @sigtyp_acl's newsletter (https://t.co/NHUWU7XiQo) to stay up to date with the latest i…
2021-04-13 21:20:40 It's no secret nor surprise at this point that that happens, nor who it happens to.
2021-04-13 21:20:22 And always keep in mind: the whole point of marginalization/minoritization is differential treatment. My experience of a person, environment, institution isn't representative of everyone's. That shouldn't stop me from hearing &
2021-04-13 21:17:27 For folks closely beholden to the profit motive, listening might be a bigger lift (though often beneficial in the long run, even to the bottom line). For the rest of us, it seems like it would only cost a little --- to set aside one's ego and really listen. >
2021-04-13 21:14:35 Deb's pt here is important: The most important contributions are coming from minoritized voices. The same power systems that tech tends to uphold are the ones teaching us not to listen to those voices. When someone persists, despite everything, the world should listen. >
2021-04-13 18:17:57 @_KarenHao So it turns out @_KarenHao was way ahead of me and did thread (some of) them here: https://t.co/dgdL1UXVDa
2021-04-13 17:53:53 @eyspahn It likely is different for different groups of people (or maybe different parts of Twitter?) but I'm starting to see it now, so I think I'm near the threshold for whichever group is relevant...
2021-04-13 17:53:03 @alveselvis2 Fortunately, I've managed to mostly avoid that so far, but keep expecting it to show up...
2021-04-13 17:52:37 @TheOracleM I maintain that it's super disconcerting to see replies to my tweets that look like commentary. (As opposed to quote-tweets, with commentary, which make sense.)
2021-04-12 21:53:28 @AlvinGrissomII I'm trying to figure out how someone could find your Twitter handle and form that expectation --- unless maybe you were posting to show you'd been captured and needed rescue?
2021-04-12 21:52:41 Just created a new section on my "slides/blogs/etc" page, in honor of someone invoking the thread reader app (and because I kinda thought all along that one maybe should have been a blog post) https://t.co/vMH1TdnOn7
2021-04-12 20:44:54 OT from my usual, but important: Someone who mixes up a taser and a gun and fires the latter "by mistake" isn't qualified to carry a gun. Someone who can't see a Black man as a whole human being isn't qualified to carry a gun. Also, let's stop hiring people to carry guns.
2021-04-12 18:39:37 @rachelmetz Well, it *is* still March 2020, by some metrics, isn't it?
2021-04-12 18:35:53 @rachelmetz Just in case you somehow missed the epic totwaffle thread last year: https://t.co/xlqQ4xbe4m
2021-04-12 12:37:47 RT @ReviewAcl: ARR is now accepting submissions! Please see https://t.co/wiXNAvVhZO for an overview of the submission form and link to the…
2021-04-12 03:49:47 Our campus is prettier than your campus :) https://t.co/qWthYmNC0f
2021-04-11 14:23:59 @datingdecisions Yep. Added a link to a relevant piece, if you're interested.
2021-04-11 14:23:37 @datingdecisions https://t.co/55Gkacrj1x
2021-04-11 14:21:42 @datingdecisions Machine learning/computer vision seems to regularly turn up "maybe phrenology wasn't wrong, just not executed well enough"
2021-04-11 13:32:33 RT @ah__cl: Our paper on artificial diversity is finally out! Two parts: * a methodological point: we argue that NLP is in great need of a…
2021-04-10 21:13:17 "This work begins by recognizing and interrupting the tactics outlined in the playbook — along with the institutional apparatus — that works to disbelieve, dismiss, gaslight, discredit, silence, and erase the leadership of Black women." Something we can ALL contribute to https://t.co/vEvXSqBCQ9
2021-04-10 20:26:43 RT @timnitGebru: This is a must read as well as the Playbook written by @KatlynMTurner @space_enabled and @kanarinka It shows us that wha…
2021-04-10 20:00:05 If nothing else, I hope that the boycott can shed light where it's needed and help those not subject to such abuse understand it better---so that we don't let folks like Cornell Tech admin succeed with their attempts at gaslighting.
2021-04-10 19:58:25 I could spend some time talking about the ridiculous "logic" behind these posts, but they are quite minor in the grand scheme of things. I've received worse and others I know have received MUCH worse. Case in point is what happened to @UpFromTheCracks: https://t.co/egI0a4JuW3
2021-04-10 19:55:39 So I participated in the one-day boycott to encourage @Twitter to do something about online safety. Not sure if enough of us participated to make a noticeable dent. For my part, my post drew out some trolls who wanted to illustrate the issue. https://t.co/Oe7Q9OeMqa
2021-04-09 03:53:12 I'm in! No more from me until Saturday 4/10. https://t.co/2DAluV6CUH
2021-04-08 23:31:51 RT @kathrynbck: Being vaccinated is not a license to go to several department stores and ask all the employees for items you already know t…
2021-04-08 23:24:27 Picked 'white' there b/c that's how I identify, but really this should say: if you're a linguist who's not Black...
2021-04-08 23:22:31 Corollary: If you're a white linguist who's been asked by the media about this, point them to a Black colleague who knows AAL. https://t.co/gJDpQsB8be
2021-04-08 23:21:13 RT @mixedlinguist: If you’re a journalist who wants to talk to a linguist about whether or not George Floyd said “I ain’t do no drugs”, PLE…
2021-04-08 23:20:03 RT @emilymbender: On professional societies not giving academic awards to harassers, "problematic faves", or bigots, a thread: /1
2021-04-08 18:25:32 This will be amazing! https://t.co/u2Rsu4BKJ4
2021-04-08 16:32:45 @niloufar_s @TheOfficialACM Such key points, @niloufar_s --- thank you for the time and energy you are putting into this.
2021-04-08 16:27:22 6. Nobody's perfect II: This also isn't about creating and implementing a perfect process, just one good enough to e.g. not give the Turing Award to someone who for decades maintained a webpage explicitly denigrating Iranians and Indigenous people. /fin
2021-04-08 16:27:13 5. Nobody's perfect: The ask isn't to find awardees who have never made a mistake, nor ever angered anyone that they have power over. Better not best. /19
2021-04-08 16:27:03 4. As a corollary, if the culture of the field was just awful for a whole generation, and no awards are given to the 'leading lights' of that group, that's fine too! /18
2021-04-08 16:26:51 3. Keep in mind no one "has to" get any award. "Dr. XYZ is renowned for their work on (whatevs) but did far too much damage to the field to get the big award" is a perfectly sensible narrative. /17
2021-04-08 16:26:40 2. The process for considering candidate awardees should include a due diligence phase that answers the question: has this person engaged in activities which pushed others (esp. whole groups of people) out of the field? /16
2021-04-08 16:26:30 1. Before even creating a short list, examine both the purpose &
2021-04-08 16:26:23 I'm not currently involved in any award selection committees, etc, but I hope those who are (including future me) take some lessons from this: /14
2021-04-08 16:26:10 To serve as a way to lift up the achievements (and thus voices) of scholars within a field to those outside it? Yet another reason to think carefully about who the field wants to represent it. /13
2021-04-08 16:25:59 To serve as a carrot to inspire academics to work hard? Awards given regularly to harassers, racists and assholes aren't going to inspire hard work by community builders. /12
2021-04-08 16:25:45 Third, it's worth grounding any discussion in what the purpose of the award is. /11
2021-04-08 16:25:31 Someone who has systematically made the field hostile to a whole group of people has thereby harmed the advancement of the field. Those actions are extremely relevant. /10
2021-04-08 16:25:19 Second, to the idea that the awards are only about "advancements to the field" and thus personal opinions/actions aren't relevant, I say: /9
2021-04-08 16:25:07 Therefore, scholarly societies awarding honors have a responsibility to their membership to do their due diligence before selecting awardees. /8
2021-04-08 16:24:54 When the original harm impacted a lot of people, so does the new harm of giving the award. /7
2021-04-08 04:22:15 @Etyma1010 @kathrynbck Japanese does this too: the suffix -tachi (sometimes glossed as plural animate) can attach to a name or noun referring to an individual, meaning so-and-so et al.
2021-04-08 04:17:48 @dlowd @TaliaRinger .@dlowd you seem to be very effective at acting with care, and I appreciate it! From where I sit, it looks to me like you care a LOT &
2021-04-07 22:48:21 @EmmaSManning One (not as regular as yours) movie night a few weeks ago, I picked Enola Holmes on the grounds that men are boring (I forget what other option we were considering) and was not disappointed!
2021-04-07 22:22:18 What's the word, kinda like subtweet, for when someone breaks the fourth wall on Twitter and talks to the investigators they know are reviewing their tweets?
2021-04-07 21:34:59 @timnitGebru @mmitchell_ai Saddened, because, well, see: terrible circumstances. Google had SO MANY opportunities to do the right thing. @timnitGebru @mmitchell_ai and others had put so much effort into advocacy while they were still there... https://t.co/yHp8TmWXmn
2021-04-07 21:33:10 @timnitGebru @mmitchell_ai Heartened because I am glad to see that people aren't just turning back to business-as-usual (and all credit to @timnitGebru and @mmitchell_ai for speaking out so clearly &
2021-04-07 21:32:09 @timnitGebru @mmitchell_ai 2. An influential Alphabet/Google stakeholder is pushing for better whistleblower protections: https://t.co/Gz8UtqPOuO
2021-04-07 21:31:07 1. Samy Bengio leaving, seemingly at least in part over the treatment of @timnitGebru and @mmitchell_ai and their amazing team https://t.co/njnr88jwEp
2021-04-07 21:30:34 I am both heartened and saddened to see these developments wrt Google:
2021-04-07 19:50:31 RT @CodedBias: You do not have to be a tech expert to advocate for algorithmic justice. Learn how by watching #CodedBias, now streaming glo…
2021-04-07 18:03:03 @TaePhoenix I'm so sorry, Tae. I hope that people's tweets are bringing you joy! For your entertainment, here are my cats, walking in circles, before eating: https://t.co/et2LiAtWBg
2021-04-07 18:00:33 @gneubig You could probably leverage the Redwoods corpus to detect VP ellipsis, as these have a characteristic representation in the ERS.
2021-04-07 15:51:38 @LucianaBenotti https://t.co/TvbZF5FI9X
2021-04-07 15:50:08 @LucianaBenotti Happens to me too. Especially with people who live in Eastern Time in the US and seem to think it a universal reference point.
2021-04-07 15:29:47 @ArneKoehn I'm really enjoying reviewing for #AmericasNLP this year. Every paper foregrounds the language in question and shows deep familiarity with the data.
2021-04-07 14:23:54 RT @eaclmeeting: Check out the schedule for EACL 2021 at https://t.co/Rp0BbJFnVk. And remember, today is the last day to get early bird reg…
2021-04-06 18:27:51 @jovialjoy @CodedBias @netflix @AJLUnited @schock @ruha9 @rajiinio @timnitGebru @mmitchell_ai @merbroussard @math_rachel @safiyanoble @sandylocks @onekade @BrendaDardenW @EthanZ @mathbabedotorg I had to look under "Documentaries" to find it, and got the same image: https://t.co/zZW0RhAGpz
2021-04-06 18:23:55 @RadicalAIPod @luke_stark @morganklauss @timnitGebru @EthanZ That one caught me up a bit too. I mean, isn't the lesson from clothing makers that people come in many shapes &
2021-04-06 18:23:15 @RadicalAIPod @luke_stark @morganklauss @timnitGebru @EthanZ But it could probably have been written with more humility and DEFINITELY could have been written without the insistence on binary gender.
2021-04-06 18:22:35 @RadicalAIPod @luke_stark @morganklauss @timnitGebru @EthanZ To the extent that he has a platform that reaches people who don't otherwise participate in these discussions and wouldn't otherwise have heard of the scholars he's citing, it is a kind of lifting-up.
2021-04-06 18:21:48 @RadicalAIPod @luke_stark @morganklauss @timnitGebru @EthanZ These are good questions, Dylan! I think that it's tricky to thread the needle between using one's privilege &
2021-04-05 21:49:53 RT @ReviewAcl: Would you like to be a reviewer for ARR? Our reviewer invitation survey is here: https://t.co/p4gJKQhV0I
2021-04-05 21:39:53 RT @DataSciBae: If you read this thread and learned something, check out the full versions of each paper. I rounded them up here: https://t…
2021-04-05 18:29:36 @nsaphra @mmitchell_ai I suspect this is going to keep getting longer &
2021-04-05 18:27:41 @nsaphra @mmitchell_ai Wait, updated for Meg's addition:
2021-04-05 18:26:03 @nsaphra @mmitchell_ai
2021-04-05 18:22:10 @mmitchell_ai @nsaphra I should add: despite that mismatch, @_KarenHao did an excellent job summarizing the paper and putting it in context! (On super short turnaround, too.)
2021-04-05 18:17:55 @mmitchell_ai @nsaphra Yeah, when Karen Hao asked us for access to the submitted version of the paper, coming with the angle of "Let's see what got Google so upset", my reaction was: Good luck... You're not going to find it in the paper!
2021-04-04 22:42:01 @TaliaRinger My cats walking in circles before eating, for your entertainment: https://t.co/fCHtT47uMo
2021-04-04 16:37:31 RT @writingitreal: With My Grandsons, Trivoli Friheden, Aarhus Rainstorm Only one coat for three We share it like an awning The flowers'…
2021-04-04 12:58:55 RT @sjmielke: How do you write a good #NLProc paper? What is the lifecycle of a conference submission? And who is this Reviewer 2? @Vas…
2021-04-03 18:13:45 RT @chris_brockett: I rarely listen to podcasts, because I can read faster than I can listen, but this interview with @sulin_blodgett is a…
2021-04-03 00:58:30 RT @twimlai: “That might've just been one mistake, but it's one mistake that's the same mistake all over the world... You have the same sor…
2021-04-02 17:16:31 Some thoughts this morning on #ethnlp inspired by the latest @RadicalAIPod episode, with @sulin_blodgett : https://t.co/Q0Mn2zfAzI
2021-04-02 17:15:01 And finally, I think we now know enough to know that we should never look to #NLProc as a quick fix to get around problems that arise when something else has been dramatically scaled up, and always think critically about how that #NLProc fits in &
2021-04-02 17:13:46 Not just, as @sulin_blodgett points out, at the point of classification or machine decision making, but also e.g. in the design stage (whose language are we choosing to accommodate in this system?) etc.
2021-04-02 17:12:55 All of this means that when we build &
2021-04-02 17:11:33 Other times, it's the #NLProc system itself that is infrastructure: e.g. conversational agents mediating access to commercial (&
2021-04-02 17:09:56 Sometimes it's not language tech that's the infrastructure, but something like social media, where the activity is happening *in language* and at a scale where usual models of interaction (handling disruptive participants) can't apply ... &
2021-04-02 17:08:30 So I think my answer to the original question is that the new problems that #NLProc raises (in addition to existing problems around language &
2021-04-02 17:07:21 But conversely, technical choices can exacerbate societal problems. And when tech is always striving for scale, this happens at scale: to many people and quickly.
2021-04-02 17:06:43 Basically: societal problems don't have technical fixes.
2021-04-02 17:06:15 The question is definitely sticking with me, too! One of the things Su Lin pointed out (and @_jessiejsmith_ echoed in the outro) is that we ultimately can't fix all of the issues with #NLProc without fixing the underlying problems (oppression) in society.
2021-04-02 17:04:31 The @RadicalAIPod hosts also posed a really interesting question: to what extent does #NLProc raise different issues around language &
2021-04-02 16:08:47 Got to listen this morning, and this was every bit as amazing as I'd guessed! I particularly enjoyed what @sulin_blodgett had to say about the importance of specificity and precision: in what we mean by 'bias' and in thinking about algorithms in their deployment contexts. https://t.co/OyPP9sUszc
2021-04-02 15:42:21 @kathrynbck I'm sorry. I hope you can reschedule easily.
2021-04-02 00:38:46 @eaclmeeting Thanks, @RShekhar_vision for pointing me in the right direction! https://t.co/3MVCO6AwYj
2021-04-01 23:24:49 @RShekhar_vision @eaclmeeting And, I'm happy to see that 2/3 of the keynotes are at an accessible time for me! So, off to register...
2021-04-01 23:23:50 @RShekhar_vision @eaclmeeting Ah yes -- thanks! I guess I clicked on "program" and expected to find the info there....
2021-04-01 22:36:39 Hey @eaclmeeting --- trying to decide whether to register for #EACL2021. The keynotes look great, but I'd like to know if they're going to happen in the middle of the night in my time zone. Will the schedule be published before the early reg deadline? Thx! #NLProc
2021-04-01 20:51:50 @ZeerakW @annargrs @LeonDerczynski Ah, I see. So you could either reveal in review or ask the venue. Still safest to ask the venue first whether they mind if you reveal in the review. If not, you can just put your contact info there and leave it up to the author(s)!
2021-04-01 18:56:54 @ZeerakW @annargrs @LeonDerczynski I'm assuming you already know who the author is and are just asking about breaking your own anonymity. The venue might want you to wait until their decisions are finalized, though.
2021-04-01 18:55:47 @ZeerakW @annargrs @LeonDerczynski It's probably best to check with the venue that you were reviewing for, first, but if they see no impediment then reaching out to the author with enthusiasm and an offer of future feedback is probably fine!

2021-04-01 16:57:29 @GretchenAMcC I LOLed at this one, too, but then: it would be interesting to see what kind of local idiosyncratic vocab develops during those long stays and to what extent it persists across research groups... #linguistsgonnaling

2021-04-01 16:50:07 @RadicalAIPod @sulin_blodgett @MSFTResearch I didn't get a chance to listen yet, but bookmarked it SO FAST when I saw your tweet announcing an episode with @sulin_blodgett !

2021-04-01 16:45:37 @_KarenHao @techreview Congratulations!!

2021-04-01 16:07:21 RT @emilymbender: This is terribly important: I encourage all USians to write to their representatives drawing their attention to it. (I ju…

2021-04-01 14:43:07 My hobby today: Reading tweets announcing new #NLProc papers as if they are April Fool's pranks (often where I'm not quite in on the joke)

2021-04-01 14:38:08 @Kamhawy Hi! I'm on sabbatical this quarter. If you emailed me, my 'away' message should have pointed you to my colleague who is acting as CLMS faculty director while I'm away.

2021-04-01 13:18:51 RT @twimlai: "This is a cost-benefit analysis where the people paying the costs and the people getting the benefits are not the same people…

2021-03-31 22:27:30 @ian_soboroff @TaliaRinger @LongFormMath That might make some sense in math, but it makes NO SENSE AT ALL in papers explaining specific things that the author(s) themselves did in the actual world.

2021-03-31 21:42:02 @ThomasScialom @TaliaRinger It could be, but then it should be edited to "I" for publication. But also, that is not relevant here. Look at the tweet I'm replying to.

2021-03-31 21:39:08 @GretchenAMcC @queerterpreter Yep! My audio papers so far can be found here: https://t.co/4C8ZjOaA3i Not sure if those are topics you're interested in, but it's about 2h40 total...

2021-03-31 21:36:04 Those of us who have a front-row seat to the failure modes of "AI" (really, pattern recognition at scale) have a duty here to share that expertise with our elected officials.

2021-03-31 21:35:35 Perhaps most frightening: "On this basis and despite growing calls, the Commission argues that it would not be in the US interest to support a global prohibition on lethal autonomous weapon systems." >

2021-03-31 21:34:47 This is terribly important: I encourage all USians to write to their representatives drawing their attention to it. (I just did.) https://t.co/CKKuKqeD6r

2021-03-31 21:12:36 @alexanderklew @TaliaRinger I think it's fine to use 'we' in that case if the referents of the pronoun are made clear. "This work was done in collaboration with..."

2021-03-31 20:49:42 @chris_brockett @twichtendahl @TaliaRinger Well yes, that's fine -- but it's not the situation under discussion (scroll up to first tweet).

2021-03-31 20:36:31 @Combsthepoet Oh, that seems fine! The one I just saw was different: a cold-call email from someone I don't know...

2021-03-31 20:21:43 @twichtendahl @TaliaRinger Yeah, there's mathematicians 'we', which might be appropriate in Talia's case. It's generally *not* appropriate in my field ... like when people use 'we' to describe the methodology that they (as single-authors) used to carry out an experiment.

2021-03-31 20:20:24 Is there ever any good reason to start an email subject header with 'Re:'? (It always looks to me like I'm getting added to some email conversation in the middle. Maybe that's just because I started using email in 1991?)

2021-03-31 19:58:58 @TaliaRinger I find we in singly-authored documents immensely off-putting. It sounds like someone is reaching for a view from nowhere and obfuscating their own role as the scientist. Own your work!
2021-03-30 19:09:54 I mean, I sympathize with trying to get a large enough sample, and can understand sending one invitation + one chaser (maybe a week or two apart), but six copies?? They arrived 3/9, 3/11, 3/17, 3/19, 3/24 &

2021-03-30 19:08:38 Has anyone else been receiving repeated copies of an invitation from Texas A&

2021-03-30 17:36:11 RT @rachelmetz: hello! did you have to fill out a captcha to make a vaccine appointment? if so, what kind of captcha was it, and which vax…

2021-03-30 16:34:36 A Google alert I set a decade ago on the phrase "Natural Language Processing" delivered me a super cringey article on the #NLProc market incl. awful takes like describing Tay as "Some Redditors figured out the bot would repeat what they said. Hilariousness ensued." Should I:

2021-03-30 16:12:33 @kharijohnson @WIRED @ScottThurm @tsimonite @willknight Congratulations!

2021-03-30 15:29:06 @cocoweixu Thanks, Wei!

2021-03-30 15:28:59 RT @cocoweixu: This is my favorite set of advice I pass on to my students for writing author responses — https://t.co/MG79PiLGpp

2021-03-30 14:31:58 @JohnCGeorge2 @mmitchell_ai @compbiobryan Thanks, @JohnCGeorge2 &

2021-03-30 14:31:10 @mathbabedotorg There's also the issue of #longcovid, too --- and even if there's some DB tracking that, there's an inherent delay in those numbers. So for that reason and to protect the populations with less access, let's keep worrying.

2021-03-30 14:28:14 @rikvannoord @LeonDerczynski It's been a few years now, but I don't remember the ones that sounded particularly angry being more likely to also sound like they were written in L2. As for less experienced researchers: That's why we wrote the blog post!

2021-03-30 12:58:47 @evanmiltenburg @LeonDerczynski In fact, I wouldn't be surprised if there are different norms about code-switching in different communities, the modeling of which could also improve performance...

2021-03-30 12:57:50 @evanmiltenburg @LeonDerczynski Right. And code-switching in the world doesn't involve all possible language pairs. It's a way of speaking which belongs to specific communities with specific sets of languages.

2021-03-30 12:56:25 @zehavoc Lol. Thanks for this gloss because otherwise I would have been here a long time too, wondering.

2021-03-30 12:26:35 @evanmiltenburg @LeonDerczynski I think LMs built for code-switching trained on corpora with code-switching would be more like monolingual LMs than multilingual LMs, in that they're built for a specific variety...

2021-03-30 04:28:24 @mmitchell_ai You could possibly leverage Twitter search and/or a hashtag for this. e.g. in a separate tweet something like LLM = large language model #techterms. Then folks could search the unfamiliar term + that hashtag + your handle to see if you'd defined it.... Need a better hashtag tho.

2021-03-30 03:35:53 @Borillion @mariaruizv Thanks!

2021-03-30 02:22:29 Esp as in cases like this where someone makes that ask (for time &

2021-03-30 02:21:29 @mariaruizv Well, nothing stops you from doing it, but there are people on the other end of that, getting notifications from everything they are tagged in (speaking of, I've untagged the others now). So just like off-line, it's worth considering how those interactions feel to the others!

2021-03-30 02:20:26 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan And part of my own personal context is that I get bombarded with requests for my time &

2021-03-30 02:18:38 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan I don't think that's necessary. I do think it is valuable to learn how to learn on Twitter from conversations that aren't 'for' us (as I've written about here): https://t.co/czeL2UvmFW

2021-03-30 02:09:11 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan I'm happy to answer questions on here, and do so on a regular basis. What I was objecting to was the tone of the request, which seemed to suggest that there was something wrong with Meg using an abbreviation in the first place (in a medium with character limits, no less).

2021-03-30 01:53:54 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan Also, "acronyms and company slang" in front of new people isn't really a good analogy here. This isn't a company, there's no group that we all belong to, just someone walking up to someone else in a public place, who wasn't talking to them in the first place.

2021-03-30 01:49:04 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan "Intended audience" = the people the OP was talking to in writing the tweet. It isn't necessarily about being exclusive, just naturally how one writes in the context of a particular discourse.

2021-03-30 01:47:19 @mariaruizv @JohnCGeorge2 @mmitchell_ai @compbiobryan There's a difference between politely asking for clarification (after doing the bare minimum to figure it out for oneself) and demanding to be accommodated. Not everything we say on here has to be interpretable by everyone.

2021-03-29 23:06:39 @resistredaction @JohnCGeorge2 @mmitchell_ai @compbiobryan It's not your fault: it's fine to retweet or quote tweet as you see fit. I would, however, expect someone who's been on twitter for 5 years to understand enough about the platform to know who they are replying to.

2021-03-29 22:53:13 @JohnCGeorge2 @mmitchell_ai @compbiobryan My dude, if a tweet includes an acronym you are not familiar with, perhaps consider that you aren't in the intended audience, rather than demanding a definition? Or, you know, look into the feeds of the people in the conversation to see what field it's coming from?

2021-03-29 21:13:20 RT @mmitchell_ai: And for specific details on how to do "responsible dataset construction", check out our paper at FAccT! https://t.co/bo9g…

2021-03-29 20:31:05 RT @rcalo: Excited to do a TEC Talk with @mutalenkonde on STS, policy, and disinformation, hosted by @techethicsnd @techethicslab. April 5,…

2021-03-29 18:58:23 Just RSVPed and super excited for this!! https://t.co/GVj5Avkpn7

2021-03-29 18:29:50 RT @AJLUnited: You’re Invited! Join @AJLUnited &

2021-03-29 18:11:20 RT @sara_ibrah: #Google says it cares about #AIEthics but then fires those who deal with it critically. The consequences for users can be s…

2021-03-29 18:07:38 @jessgrieser @nancyf Totally understand why you'd want to. But possible counterpoint: the book was written while you were still an Assistant Prof, right?

2021-03-29 18:05:56 RT @LeonDerczynski: This piece is important - @haydenfield paints a clear and sharp image of how large language models are changing tech in…

2021-03-29 18:04:15 RT @mmitchell_ai: Love this article from @haydenfield!! Clear explanation of what's been up with "Large Language Models", with neat quotes…

2021-03-29 18:04:08 RT @twimlai: "This is a cost-benefit analysis where the people paying the costs and the people getting the benefits are not the same people…

2021-03-29 17:35:12 @geomblog @struthious @mmitchell_ai As in: it's not about assigning blame but about fixing the problem. And we all have responsibility for that.

2021-03-29 17:34:48 @geomblog @struthious @mmitchell_ai Yes! And more agreement: https://t.co/nCRwt5GzEh

2021-03-29 17:33:28 Great overview by @haydenfield in @MorningBrew ! I particularly like the quotes from @mmitchell_ai and @LeonDerczynski https://t.co/vjDrKOGKPn

2021-03-29 17:23:40 RT @shalinikantayya: I'm so incredibly thrilled to announce that @CodedBias will be available to stream globally on @netflix April 5th! It’…

2021-03-29 15:59:15 RT @struthious: good points made in this twiml podcast with @emilymbender and @mmitchell_ai about the 'bias is in the data not the model' a…

2021-03-28 22:22:00 @EmmaSManning I answered based on how I would pronounce them if xe wasn't around to tell me how, but otherwise the answer is: like xe says!

2021-03-28 22:21:17 RT @EmmaSManning: How do you pronounce the 'x' in English neopronouns that start with it, like xe/xem/xyr? (Choices are in the Internation…

2021-03-28 02:09:23 @djg98115 Lol

2021-03-28 01:24:58 RT @rajiinio: These are the four most popular misconceptions people have about race & I'm wary of wading into t…

2021-03-27 22:09:57 RT @anoushnajarian: "If you train on data from white websites, you're going to call white people 'people' &

2021-03-27 20:08:37 @marty_with_an_e

2021-03-27 19:32:23 @emanlapponi Oh yes

2021-03-27 19:05:13 RT @lousylinguist: Linguist Twitter Assemble! Linguists looking to move to industry, help is coming. Fill out this survey.  Meant for stud…

2021-03-27 19:04:39 .@davidschlangen says he's being provocative in this thread, but in fact it's spot on! Here's a tweet from the middle, but do read the whole thing, especially if you are interested in building, using or dealing with the effects of people deploying "open domain" #NLProc. https://t.co/5BWWUxprMY

2021-03-27 18:07:09 @tobysmenon Attn @linguisticats

2021-03-27 18:06:38 @tobysmenon Alttext: This video includes multiple different clips of the cats walking in circles around me as they wait for me to open the can of catfood and put it into their dishes. After the first two clips, there are four panes, each showing a different day. The video has no sound.

2021-03-27 18:05:29 Ingredients: 1 can of catfood/day 1 Euclid, walking counterclockwise 1 Euler, walking clockwise 1 GoPro 1 @tobysmenon interested in learning film editing software Result: https://t.co/T38VdwP3eH

2021-03-27 18:00:16 @AlexBaria The authors are as shown on the paper. You can find bib entries (with multiple solutions for including the ) linked from my publications page: https://t.co/GCIxuGpRbp

2021-03-27 16:27:46 This is frankly terrifying. Also another clear illustration of how words matter. @Tesla has no business calling this (marketing it as!) "full self driving". #AIHype does real harm in the world ... and we're about to see a whole lot more. https://t.co/5Aaa3XVhE5

2021-03-27 13:46:33 RT @emilymbender: For #NLProc friend working on author response for #acl2021nlp ... these notes from #coling2018 might be helpful: https:…

2021-03-27 05:29:45 @TaliaRinger Never did find out if that was on purpose or not, but we all loved it.

2021-03-27 05:29:25 @TaliaRinger I went to the Berkeley Hillel Seder in ~1992. Part way through the evening, this older gentleman showed up, joined our table, drank some wine and left. It was *awesome*. A couple of years later, I discovered he was a prof emeritus in the linguistics department.

2021-03-27 04:54:43 For #NLProc friend working on author response for #acl2021nlp ... these notes from #coling2018 might be helpful: https://t.co/7zL8lJJ3c7

2021-03-27 03:48:39 RT @TheMikeChase: Good news everyone. https://t.co/V1jL6zP1x2

2021-03-27 03:40:34 @JWchronicle Wow, I'm really sorry to hear that. I hope you will be done with this program soon.

2021-03-27 03:36:29 @JWchronicle I'm not sure why research reports would even need grading, but that does sound like a total communication fail.

2021-03-27 03:31:51 @JWchronicle

2021-03-27 02:27:14 RT @twimlai: “That might've just been one mistake, but it's one mistake that's the same mistake all over the world... You have the same sor…

2021-03-26 23:49:47 Time it takes me to actually get to the task if it involves a .docx file > World, please don't send me at .docx files. kthxbye.
2021-03-26 23:44:40 Yes, we're an interdisciplinary field, and yes it's good to define technical terms, but calling technical terms from one of the constituent disciplines that is just beyond the pale. It's not the "Association for proving that your Computation works on Language". Sheesh. https://t.co/g0onaUQVEF

2021-03-26 22:06:04 For those interested in the book, if you are at an institution with a library, check there first. You may be able to download a copy for free! Otherwise, it can be found here: https://t.co/7fSWxKPNd6

2021-03-26 19:20:11 3. I can't say I've read everything there is to read, but impressionistically, those who talk in terms of "charges", "allegations", and "blame" never show any signs of actually being interested in fixing the problem.

2021-03-26 19:19:20 2. Discussions of who is to "blame" for sexism/racism/etc (e.g. as of "AI" tools perpetuating such patterns) are a distraction. What's needed is an end to the systems of oppression and thus work towards redress, not allocations of fault. >

2021-03-25 20:55:39 So not only do I meet the students when they matriculate with a clean slate (no preconceptions based on their applications), I also don't remember who wrote objectionable letters....

2021-03-25 20:54:51 I don't hold any of this against the applicants --- just make notes of the info I'm looking for that I could find in the letters. And one upside to reviewing 155 applications in a week is that I can't possibly retain any specifics once we're done with the decisions.

2021-03-25 20:53:45 Oh, and keep it brief, please. I don't need the technical details of the work. The worst offenders there were all literature profs, btw, especially in English departments. Admissions for their program must be a nightmare!

2021-03-25 20:52:41 The best letters describe the applicant's strengths in non-comparative terms, and provide context for research/classes/etc the applicant was involved in. What was their contribution? How was that activity relevant to our program?

2021-03-25 20:50:54 Or work supervisors applauding a work ethic that has the applicant "voluntarily" working weekends to meet deadlines. What an indictment of the workplace!

2021-03-25 20:50:16 Even worse than that, though, are some of the things applicants get praised for. One letter described a student as (inter alia) "honest", leaving me to wonder what kind of opinion they hold of students in general.

2021-03-25 20:49:18 I want to know what this particular applicant's strengths are, not how they compare to their classmates. "They were always the best prepared in class": maybe they had the most time to dedicate to that class? "Where most students struggle with...": and so?

2021-03-25 20:47:50 I realize that this stems partially from the form our University uses (and maybe we can change it?) asking who the writer is comparing the applicant to, but I really wish the letters weren't so focused on comparison.

2021-03-25 20:47:05 So I'll just rant a little on Twitter instead :)

2021-03-25 20:46:53 I wish we could do the same for letter writers, but I know (as a letter writer) that having different requirements for different addressees is unwelcome.

2021-03-25 20:46:18 This is the first time I've been on admissions since we provided guidance to applicants on what should go into the statement of purpose, and that seems to have helped focus things really well!

2021-03-25 20:45:47 Just finished my pass through 155 applications to the CLMS program. (We have three faculty on admissions this year, each reading 2/3s of the overall pile.) Yet again, it's a great batch of applicants, making the decisions hard. A good problem to have, but a problem nonetheless.

2021-03-25 20:11:53 RT @twimlai: “So how the data was collected, how the data was documented? Can you actually tell what's in the data, the model itself? At ev…

2021-03-25 19:05:26 RT @roger_p_levy: The MIT Computational Psycholinguistics Lab seeks to fill an open postdoc position for an @MITIBMLab supported multi-PI p…

2021-03-25 18:58:42 @SashaMTL @samcharrington @mmitchell_ai Thanks :)

2021-03-25 18:05:02 @MadamePratolung @timnitGebru Thank you, @MadamePratolung -- well put & https://t.co/1JnDJoehEQ

2021-03-25 17:38:23 RT @mmitchell_ai: @emilymbender and I discussed our paper with @timnitGebru &

2021-03-25 17:37:54 This was lots of fun to do with @mmitchell_ai and @samcharrington --- thank you @twimlai ! https://t.co/IcuRU4xt4j

2021-03-25 15:24:36 @thisblacklady Shameless plug of a family member, but perhaps @writingitreal might have something of interest? https://t.co/9NdAIjZIaI

2021-03-25 14:34:39 @SashaMTL < Sorry to use your tweet as the starting point for a little rant this morning...

2021-03-25 14:33:48 @SashaMTL And furthermore funding agencies should support research in the humanities and social sciences way more than they are. Sometimes it feels like we're trying to "educate computer scientists" as if they are the only ones who can/should have decision making power.

2021-03-25 14:31:10 @SashaMTL IOW: Yes, #NLProc people should study linguistics (that's why I wrote those books), but also companies should HIRE LINGUISTS.

2021-03-25 14:30:43 @SashaMTL Which, BTW, involves teaching computational linguistics to MS students, who end up very well qualified to bring linguistic insight into industry. >

2021-03-25 14:29:49 @SashaMTL Well, the concept there is that it is in bite-sized chunks. If kids these days can't handle that via reading and need video instead ... I'll leave that to someone else, though. I already have a teaching job.
2021-03-25 14:06:08 @SashaMTL @MorganClaypool There really should be two more books in this series (at least): phonetics &

2021-03-25 14:05:32 @SashaMTL The @MorganClaypool publishing model includes selling subscriptions to institutions, so if you belong to an institution with a library, check there first: you may be able to download both (to keep!) for free.

2021-03-25 14:04:48 @SashaMTL Here are the books: Morphology & https://t.co/7266PLcuE3 Semantics & https://t.co/7fSWxKPNd6

2021-03-25 14:03:58 @SashaMTL Well, I've written two books (one co-authored with Alex Lascarides) and also taught two tutorials (NAACL 2012, ACL 2018) based on early versions of each...

2021-03-25 00:08:25 RT @rachelmetz: Google wants to give researchers money, but not everyone wants to take it. ⁦@luke_stark⁩ turned down $60k. As he pointed ou…

2021-03-24 17:07:33 RT @WellsLucasSanto: In my mind, a messaging app is *not fully functional* if it doesn't have a block feature. It's like writing a RESTful…

2021-03-24 17:07:23 RT @WellsLucasSanto: Say it with me folks -- blocking should be a *core* feature of a messaging app. Just as sending and receiving messages…

2021-03-24 13:29:29 RT @cfiesler: There’s a current trend on TikTok to display comments on videos as inspirational quotes. These are all real, and last one is…

2021-03-24 01:19:15 @kfrostarnold @lizjosullivan Thanks :)

2021-03-24 01:01:53 @lizjosullivan Tapped out....

2021-03-24 01:01:33 @jonst0kes But to suggest that women *as a group*, trans people *as a group*, POC *as a group* etc, have (relative) power in society is to be willfully ignorant of the facts on the ground. /fin

2021-03-24 01:01:23 @jonst0kes Furthermore, to the extent that I have power (as a white person, etc), I am interested in finding ways to share it with others who have less privilege than I do. /8

2021-03-24 01:01:15 @jonst0kes For one thing, racism etc *should* be shameful. For another, if that's not what is meant and people push back, it is fine and appropriate to listen, learn & /7

2021-03-24 01:01:04 @jonst0kes If my Twitter presence is such that people think twice before saying things that might be construed as racist, etc, well: I don't really see a problem with that. /6

2021-03-24 01:00:56 @jonst0kes Likewise, my Twitter reach (not particularly different to yours), means that I am mindful of what I say on here and who I engage with (and how). /5

2021-03-24 01:00:44 @jonst0kes I do not disagree that I personally have some power: I definitely have power over students in my classes, in admissions committees, when reviewing papers and I try to use it wisely and be mindful of power differentials. /4

2021-03-24 01:00:33 @jonst0kes As for this, I find it rather telling that the "free-speech absolutist" crowd is so terrified of other people ... speaking: https://t.co/6iablFtWfS /3

2021-03-24 01:00:19 @jonst0kes I'm glad to see you admitting that this is a conversation about power, and that you are happy with the status quo: https://t.co/0GuRk6wm4J /2

2021-03-24 01:00:10 @jonst0kes This will be my last contribution to this conversation. /1

2021-03-23 19:56:18 @lizjosullivan Thanks

2021-03-22 22:15:23 RT @BastingsJasmijn: Here it is! A script that updates your dear old .bib file with all the latest developments. It even replaces arxiv ent…

2021-03-22 22:13:21 RT @NAACLHLT: As you might have noticed, #NAACL2021 is going virtual this year. We expect to share more logistical details, including progr…

2021-03-22 18:03:42 RT @OlgaZamaraeva: (Please retweet): OK, history was easy

2021-03-21 22:03:51 RT @BastingsJasmijn: Thanks @QueerinAI! To make this a success we need everyone to help out, and to *proactively* help with making name cor…

2021-03-21 16:36:47 @fernandaedi Why are men?
2021-03-21 14:27:00 RT @sleepinyourhat: PSA for #ACL2021 reviewers: The schedule this year is extremely tight, and if your reviews are late, the authors of you…

2021-03-20 17:35:57 @radamihalcea Hard agree. Also more documentation debt.

2021-03-20 17:34:46 @raciolinguistic Congrats!!

2021-03-20 13:06:52 RT @mjmichellekim: Why the name thing is so triggering to me and so many other Asians. A thread. I spent my entire adult life in the U.S.…

2021-03-20 01:05:38 RT @OlgaZamaraeva: (Please retweet): What's your favorite reading on history of computational linguistics?

2021-03-20 00:46:42 @kirbyconrod @EmmaSManning Oh, not to worry! Keeping an eye on #BenderRule is totally opt-in afterall!

2021-03-20 00:44:29 @EmmaSManning @kirbyconrod Was just coming here to say this. In addition, the token "English" does appear, in a footnote on p10 and then in the main text on p30. *sigh*

2021-03-19 20:25:54 @rcalo Right --- it can (attempt to) only measure the effect of the apps. To answer the other question, you'd need to also measure/estimate costs &

2021-03-19 20:17:56 @rcalo # of positives tested COMPARED TO what would have happened with random notifications?

2021-03-19 19:51:36 @MattQuinn16 @_KarenHao @rachelmetz I wonder if the fonts need to be updated for the CMS or something. (I don't know when the was added to Unicode, but probably later than et al...?)

2021-03-19 19:48:58 @rachelmetz @MattQuinn16 @_KarenHao You do have other important uses for your time!!

2021-03-19 19:47:15 @mdekstrand @_KarenHao @rachelmetz @MattQuinn16 See my publications page for sample Overleaf files that include it in the bib (with ACM and ACL style files): https://t.co/GCIxuGpRbp (Just below the entry for the paper itself.)

2021-03-19 19:46:03 @MattQuinn16 @_KarenHao @rachelmetz Can the CMS handle other (less pictographic) "special" characters? That's where the issue really lies. Do you do okay with Turkish ğ, for example?
2021-03-19 19:43:06 @_KarenHao @rachelmetz @MattQuinn16 Meanwhile, don't get be started on "We can't accurately print your name because our style guide disallows middle initials." Wha?! 2021-03-19 19:42:36 @_KarenHao @rachelmetz @MattQuinn16 I mean, you'd think that a desire to be factually accurate (the *is* part of the title!) would outweigh whatever prescriptive aversion to emoji the copyeditors or whomever have, but I guess not. Anyway, @rachelmetz 's lower-ascii-text-based solution finessed the problem! 2021-03-19 19:41:04 @_KarenHao @rachelmetz @MattQuinn16 Hah -- that makes sense! I got the parrot through ACM's eRights system by pointing out, when it choked on it (and gave me mojibake), that their system wasn't Unicode compliant. Since then, I've been amused at how stuffy news publishers have been (not journalists). 2021-03-19 18:10:13 @ACharityHudley @ChrisHudley @Stanford @StanfordEd @stanfordccsre @AAASStanford1 Congrats to you and especially to Stanford :) 2021-03-19 18:06:33 This is fabulous news! Thank you @BastingsJasmijn for your leadership. #NLProc authors: please take heed. If someone asks you to update your bibliography to get their correct name, you should do so, even on arXiv papers. It does not affect the anonymity period. https://t.co/lV8pWTIp90 2021-03-19 18:01:10 Bravo, @luke_stark ! And so very well stated too. https://t.co/094qcHg3YN 2021-03-17 22:30:38 @kirbyconrod This looks awesome! 2021-03-17 22:29:59 RT @UWlinguistics: New for Spring Quarter! Grad students are welcome to enroll, and should contact the instructor (@kirbyconrod) for info o… 2021-03-17 04:53:24 @_KarenHao I'm so sorry Karen. 2021-03-16 20:02:34 RT @alexhanna: I am glad that @Abebab and @vinayprabhu have pointed out this pattern of citation erasure from ImageNet authors. 
There's a w… 2021-03-16 18:42:05 RT @timnitGebru: This statement by Yann is so wrong & There are many languages & 2021-03-16 18:06:52 When someone reports a %, always make sure you know what the denominator is. https://t.co/JxHab3fDni 2021-03-16 17:20:51 Can't recommend enough getting a "no buddy". Thanks again, @KarnFort1 :) 2021-03-16 17:15:21 @nsaphra Congrats, Dr. Saphra!!! 2021-03-16 16:57:20 This may feel a little "inside baseball" to those not in computer vision, but the heart of it is really important: academic integrity and accountability towards not just other scholars but also the people whose lives are affected by the tech we build. 2021-03-16 16:56:34 A must read! "the biggest shortcomings are the tactical abdication of responsibility for all the mess in ImageNet combined with systematic erasure of related critical work, that might well have led to these corrective measures being taken." https://t.co/SlbfefWWGW 2021-03-16 16:49:10 I'm glad to see this covered, and I'm glad to see folks taking a stand! https://t.co/8XFSJzlTne 2021-03-16 16:26:27 RT @cephaloponderer: There is still time to submit to the paper track of the first CVPR workshop on Ethical Considerations in Creative appl… 2021-03-16 16:20:54 @Joey__Schafer @midmagic It's now available (also Open Access) through the ACM Digital Library: https://t.co/kwACyKdufD 2021-03-16 14:29:24 RT @AINowInstitute: Are you a tech worker? Join us & Blowing the whistle safely, lawfully & 2021-03-16 04:30:55 RT @timnitGebru: they'll still get lots of $$$ and it won't harm them. So we have to make sure to continue the momentum towards real outcom… 2021-03-16 03:55:50 RT @alexhanna: Call for papers: Genealogies of Data Junior Scholars Workshop to be held at USF's Center for Applied Data Ethics. 
Deadline f… 2021-03-16 03:51:33 RT @_KarenHao: There have been several false rumors and misleading claims circulating on Twitter that have understandably generated confusi… 2021-03-15 22:14:25 RT @mmitchell_ai: This article is the first time I've weighed in on what Google may have been referring to in their public statements ab… 2021-03-15 22:07:21 @EvpokPadding Pacific Transformation (and) Binding? 2021-03-15 19:24:12 RT @timnitGebru: "As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account f… 2021-03-15 17:49:29 Update! It looks like the #facct21 Q& https://t.co/mEtNYCaz0t In addition to Stochastic Parrots, the session features one of the best student papers! 2021-03-15 14:59:03 .@FAccTConference any word on when we can expect to see these? If this is in fact relying on volunteer labor, I understand that folks are busy, but if this is part of the (paid!!) services of the event management company, WTH? 2021-03-15 14:58:00 I understand the point of this to be making discussions accessible across different time zones, but that only really makes sense if the recordings are posted *promptly* so that we can all be part of the same (temporally congruent) discussion. A week later doesn't cut it. 2021-03-15 14:56:32 So, when can we expect the #FAccT21 Q& 2021-03-15 14:53:14 Just a little #GenX erasure in the #FAccT21 post-conference survey.... (with some Boomer erasure, too, of course) https://t.co/fEefITSuKz 2021-03-15 14:12:31 RT @atbwebb: If you haven't read the @_KarenHao magnum opus on Facebook AI and misinformation, do find the time. There aren't many more imp… 2021-03-15 00:45:48 @leonpalafox @DiverseInAI @FAccTConference @JCCHerington @ZoeSchiffer You read a whole article documenting harassment and were disappointed because ... it say anything about an email that wasn't relevant to the story and which you want to call an "ultimatum" (which it wasn't)? 
Like I said: your contributions are not in good faith. /Emily out. 2021-03-14 23:32:47 @leonpalafox @DiverseInAI @FAccTConference @JCCHerington @ZoeSchiffer Nevermind, he's read it. And even shared it with some further harassment of his own. https://t.co/S56NZqBPTL I don't think his contributions here were ever in good faith. 2021-03-14 22:22:35 @leonpalafox @DiverseInAI @FAccTConference @JCCHerington In case you somehow missed the harassment that we've been subjected to, please read this piece by @ZoeSchiffer who carefully documented a portion of it: https://t.co/76YlmqmYv2 2021-03-14 22:21:00 @leonpalafox @DiverseInAI @FAccTConference @JCCHerington So, I repeat, I suggest NOT commenting on things that you do not actually have the inside information on. None of this is any of your business, and you are out here saying things that are FALSE about people who are continually being harassed about it. So how about just ... not? 2021-03-14 22:01:08 @athundt @DiverseInAI @leonpalafox @timnitGebru @mmitchell_ai @alexhanna After the fact = after it was approved, no less. 2021-03-14 22:00:37 @DiverseInAI @leonpalafox @FAccTConference @JCCHerington Baldly claiming uncredited reviewing on Twitter hardly seems discreet. 2021-03-14 21:55:20 @leonpalafox @DiverseInAI @FAccTConference @JCCHerington When you're going around making claims about things that have nothing to do with you (you don't work for Google, you weren't involved in the internal pub-approve process), I'd suggest being VERY CAREFUL about not over-claiming your relationship to the conference. 2021-03-14 21:53:25 @DiverseInAI @leonpalafox @FAccTConference @JCCHerington For reference here's the tweet where he makes the claim: https://t.co/bC9SMATRlZ 2021-03-14 21:52:25 @DiverseInAI @leonpalafox @FAccTConference @JCCHerington The 2020 edition of the conference is completely irrelevant to this discussion. Also, it took place in JANUARY 2020, whereas FAccT 2021 was in March. 
So the reviewing timelines would be completely different too. Furthermore your tweet clearly implied FAcct 2021 &
2021-03-14 21:35:22 @DiverseInAI @timnitGebru @mmitchell_ai @alexhanna https://t.co/JOtgvF1BD5
2021-03-14 21:34:53 This is clearly none of his business in any respect and I don't know why he's even bothering to chime in here, except possibly out of some urge to harass Timnit &
2021-03-14 21:34:13 Not only is he making up fantasies about what happened inside Google, but also the name "Leon Palafox" does not appear on the FAccT 2021 program committee page: https://t.co/w9BLZUSbfm https://t.co/bC9SMATRlZ
2021-03-14 21:11:39 RT @timnitGebru: I never said that, neither is it true &
2021-03-14 20:56:01 @leonpalafox @DiverseInAI @timnitGebru @mmitchell_ai Also, for the record, though the conference decisions were not out yet at the point where Google fired @timnitGebru , the reviews had been completed: https://t.co/94HRGT2kya
2021-03-14 20:52:41 @leonpalafox @DiverseInAI @timnitGebru @mmitchell_ai The paper was submitted for internal review AND APPROVED before we submitted it to the conference. Please don't make things up.
2021-03-14 02:01:07 RT @hypervisible: Feels like a year’s worth of tech news has happened in the last week.
Shout out to @kharijohnson for managing to fit so m…
2021-03-14 01:36:59 Ex rando is my new favorite phrase https://t.co/B7vEjYRbHY
2021-03-14 01:29:55 RT @histoftech: Google might ask questions about AI ethics, but it doesn't want answers | Google | The Guardian https://t.co/Xi3jS0hpPj
2021-03-13 21:59:43 RT @sjmielke: what is chart parsing but being inside the forest and seeing all the trees
2021-03-13 17:54:05 RT @roseveleth: It's very funny to me that FB thought they could pull the wool over @_KarenHao's eyes and get her to write some fluff piece…
2021-03-13 17:54:00 RT @glichfield: With all the pushback from Facebook against @_KarenHao's recent story, and as one of the editors on the piece, I thought it…
2021-03-13 13:12:38 Attn @becauselangpod https://t.co/5F09ICLJKQ
2021-03-13 13:00:56 RT @emilymbender: "The firings have also been an unmitigated PR disaster for the tech giant. From a distance, the story sounds like “Google…
2021-03-13 01:27:31 RT @BlackTIDES1: Today for #WHM2021 we celebrate Dr. Brandeis Marshall @csdoctorsister, founder of @BlkWomenInData. She represents one of t…
2021-03-13 01:20:22 As @mer__edith very eloquently pointed out, AI/ML is currently backed into a corner where what's valued, both tangibly and intangibly, relies on &
2021-03-13 01:18:30 RT @vj_chidambaram: Everything works on incentives. If academia is always incentivized to look for the next payout from Google, we can goin…
2021-03-13 01:18:00 @ZoeSchiffer Zoe's piece, specifically, documenting the campaign of harassment we (and especially @timnitGebru) endured.
2021-03-13 01:17:26 When the stochastic parrots paper gets linked from Hacker News, by now it's predictable that people will a) have some awful things to say and b) cite the troll who wrote a 'critique'. This time, though, someone else responded with a link to @ZoeSchiffer 's piece. Thanks, Zoe!
2021-03-13 01:15:33 @samsontmr I've been looking for them too. @FAccTConference do you expect them to be posted soon?
2021-03-13 01:00:14 RT @rachelmetz: If you haven’t already read this, take some time this weekend to do it!
2021-03-13 00:51:38 Come for the optimism, stay for the excellent analysis https://t.co/6Gzk2BfbqI
2021-03-13 00:05:27 @lauriedermer
2021-03-12 23:43:15 RT @RadicalAIPod: okay, the 24 hour grace period has ended. If you haven't yet read this absolute masterclass in journalism by @_KarenHao…
2021-03-12 21:07:03 @hiroara @EmtiyazKhan @anoushnajarian @sanhitamj @timnitGebru @LWH_Bos @hiroara https://t.co/BX1Jh4fJGZ
2021-03-12 20:35:29 @timnitGebru I'm so sorry, Timnit. Thank you for continuing to shed light on this. And, to confirm again for anyone missed it: While I have received some harassment in this whole scenario, it's nothing like the scale of what's been aimed at you. The role of racism here is striking.
2021-03-12 20:05:08 Really just one thing this excellent article gets wrong, though: "Facebook, it turns out, is providing a good model of how to move forward with this new form of oversight." I think @_KarenHao 's excellent reporting shows us otherwise: https://t.co/Ke6565pukG
2021-03-12 20:03:58 "AI ethics teams are an emerging necessity. [...] HR departments need them because decision-making about people needs human oversight. Vendors need them because it’s so easy to miss the sorts of problems that can cause serious harm to people and put companies out of business."
2021-03-12 20:02:36 : "Hiding from the problem and firing the team doesn’t work. It simply tells the world that you have an ethics problem you can’t manage."
2021-03-12 13:43:55 @Abebab Tried this
2021-03-11 23:14:48 RT @_KarenHao: People have been asking: did FB publish this in response to your story? No, let me clarify. They wanted this paper to be in…
2021-03-11 20:58:42 RT @timnitGebru: Do these companies realize that coming out so defensively actually makes them look worse than the article?
2021-03-11 20:58:32 RT @YaelEisenstat: This piece by @_KarenHao is a must read if you care about how Facebook effects democracy, and why the company has (inten…
2021-03-11 19:40:40 RT @timnitGebru: @schrep @_KarenHao I would not respond like this to Karen's extensive, well thought out piece. Those of use at the forefr…
2021-03-11 18:19:17 RT @timnitGebru: ""She responded to Bender that she was trying to get Google to consider the ethical implications of large language models.…
2021-03-11 18:17:26 @rachelmetz @timnitGebru @alexhanna @mmitchell_ai Thank you, Rachel for careful, in-depth reporting!
2021-03-11 18:16:50 RT @rachelmetz: new from me: an in-depth look at months of chaos in google's ethical ai group, and how it has reverberated throughout the A…
2021-03-11 18:13:28 This looks amazing! Looking forward to reading it :) https://t.co/qaif48jcrf
2021-03-11 17:57:25 @norquiben You've gotta start somewhere!
2021-03-11 17:45:23 RT @kharijohnson: @emilymbender @_KarenHao @rachelmetz This quote from @autopoietic et al has been ringing in my ear all week: "Technology…
2021-03-11 17:25:54 That's part of why this reporting (by @_KarenHao, @rachelmetz, @kharijohnson and others) is so important: We can't work towards better incentive structures (or decentralization of tech power) without the light that you all shed on these things!
2021-03-11 17:24:23 That said, I think there is definite value in having people work on these issues on the inside (and the people I know doing the work are fantastic researchers). So the question is: is it possible to align incentives so that their work can have the needed impact?
2021-03-11 17:23:15 @rachelmetz @_KarenHao Thinking again to the #FAccT "The Future is Up for Grabs" panel in this context, it seems pretty clear that if the future involves such concentrations of power in big tech, internal ethics teams alone will never suffice.
https://t.co/3fOlPZwkZc
2021-03-11 17:20:58 @rachelmetz Having it come out the same day as @_KarenHao's piece about Facebook and how the incentives there (also?) thwart any meaningful application of #AIEthics work gives lots of food for thought. https://t.co/Ke6565pukG
2021-03-11 17:19:08 Thank you @rachelmetz for this thorough reporting that highlights several key issues https://t.co/zteQU0Jz5z
2021-03-11 17:14:04 RT @timnitGebru: https://t.co/Fph8ukQSBG. In depth article by @rachelmetz "She responded to Bender that she was trying to get Google to c…
2021-03-11 17:11:35 RT @schock: The recording from yesterday's "The Future is Up for Grabs" panel discussion at #FAccT21 with me, @cori_crider, &
2021-03-11 16:37:40 RT @alexhanna: This piece by @_KarenHao hits the nail on the head with the problems of "AI bias" research as conceptualized by large tech c…
2021-03-11 16:09:14 RT @uwnews: Teaching a computer the gift of language may cause real environmental and social harm, says a new study co-led by @emilymbender…
2021-03-11 14:31:58 @csdoctorsister Yay!!
2021-03-11 14:28:09 @Wjrgo @timnitGebru @mcmillan_majora @mmitchell_ai Oh, thanks for flagging. I will look into that.
2021-03-11 14:26:54 At #FAccT21 ("The Future is Up for Grabs") there was a great sequence on reconceptualizing optimization. Here, the inimitable @_KarenHao clearly lays out how fb optimizing engagement (→growth→profit) selects for algorithms that necessarily optimize misinformation &
2021-03-11 00:11:57 @ruchowdh Even when you're just in the audience and no one can see?
2021-03-10 23:22:02 RT @dlowd: “On the Dangers of Stochastic Carrots : Can Vegetables Be Too Big?” https://t.co/qdUReVxLO7
2021-03-10 23:21:01 @dlowd
2021-03-10 22:24:03 I guess these were a bit subtle in the small Zoom boxes ...
not sure anyone noticed I was wearing mine (in the Q&
2021-03-10 22:23:09 RT @JesseDodge: @timnitGebru @mcmillan_majora @emilymbender @FAccTConference @mmitchell_ai @blahtino @vinodkpg @benhutchinson I completely…
2021-03-10 22:18:16 @KLdivergence Thank you, Kristian, for your hard work. I thought it all well really smoothly!
2021-03-10 22:14:09 @JCornebise @DiverseInAI @FAccTConference @timnitGebru @mmitchell_ai @mcmillan_majora Please do not share anything outside the conference. My understanding is that they will make things available eventually.
2021-03-10 22:13:31 @JCornebise @DiverseInAI @FAccTConference @timnitGebru @mmitchell_ai @mcmillan_majora Don't know and thank you! I think the Q&
2021-03-10 22:07:58 @DiverseInAI @FAccTConference @timnitGebru @mmitchell_ai @mcmillan_majora I wish I knew! It was helpful to have multiple authors in the panel so it felt like there was some audience at least, but the only signal of audience beyond that was the questions in SliDo and comments on Twitter...
2021-03-10 21:53:28 RT @gleemie: Please make a direct and sustaining donation -- any amount builds power! -- to support worker organizers from @turkopticon who…
2021-03-10 21:50:43 @timnitGebru @FAccTConference Thank you, @timnitGebru! It has been a joy to be on this journey with you (even though parts of the journey, well, you know).
2021-03-10 21:50:13 @Abebab @timnitGebru @FAccTConference Wow, thank you @Abebab! I'm really honored to hear that from you
2021-03-10 21:49:31 @SashaMTL @FAccTConference Thank you!
2021-03-10 21:49:07 RT @Abebab: "Bigger isn't always better" @emilymbender WORD!!! #facct21
2021-03-10 21:17:22 @Spkr2Managers @timnitGebru @mcmillan_majora @FAccTConference It's available, Open Access, here: https://t.co/kwACyKdufD
2021-03-10 18:07:49 RT @schock: Yes!
Over at https://t.co/ydcpsWwxSZ we are building the CRASH project (Community Reporting of Algorithmic System Harms) to tha…
2021-03-10 18:07:31 RT @schock: The #MakeAIEthical statement from @GoogleWalkout is available here: https://t.co/kbEV6r68O0 #facct2021 https://t.co/FZL2bvMTPn
2021-03-10 18:05:19 Super inspired by "The Future is Up for Grabs" panel just now at #facct21 and especially the point that a just future must include justice for the tech workers treated as disposable by big tech. Thanks to @schock for shouting out #TW4TW and @turkopticon https://t.co/t7NI1YHZAM
2021-03-10 18:01:37 Did anyone else just wave back at their screen at the end of that awesome #facct21 panel? Just me?
2021-03-10 14:09:05 @JCornebise If the "often hear" part isn't true, then don't waste their time with the argument.
2021-03-10 14:08:35 @JCornebise "So there's this terrible/disingenuous argument I often hear that I'm never quite satisfied with my response to. How do you handle ____?"
2021-03-10 14:07:29 @emanlapponi Hm, subject NPs are a bit hard. Can I make it a topic NP + start of subject NP? "Some of our best friends our cats chose for us"
2021-03-10 03:26:54 @dlowd @timnitGebru C'mon CA! They seem to be going about this backwards....
2021-03-10 03:12:19 @dlowd @timnitGebru WA passed a law to do this last year, but we need federal approval, which I think is possibly contingent on OR and CA doing the same. Are you in?
2021-03-10 03:04:26 @timnitGebru Where I live, I *really* love the DST schedule. If we are to give up the changes, I'd want to stay on DST. We don't need sunlight at 4:12am in June, but having sun until past 9pm is lovely :)
2021-03-10 01:54:07 @anoushnajarian @FAccTConference The presentations are prerecorded and then there is live Q&
2021-03-10 01:51:51 lol at "best friends our" typo ... that wasn't a direct quote anyway, but those vibes were there.
2021-03-10 01:40:50 It seems like anyone working with data that is sensitive in anyway should spend some time rehearsing how to receive feedback, so that they're ready when it happens in a public forum... https://t.co/DlT5Idsleh
2021-03-10 01:36:25 @anoushnajarian @FAccTConference Thank you for your support. I believe that the recordings will be available later. I'd prefer to be able to present this paper normally, without additional spotlight, and I'd guess the other presenters in our session would too.
2021-03-10 01:35:26 And then (different speaker) "Devil's advocate" ... not a phrase I expected to hear (unironically) at #facct21
2021-03-10 01:34:09 Yikes: "no intention to", "some of our best friends our", "sorry if anyone was offended"
2021-03-09 23:07:51 RT @HadasKotek: PSA: The US is switching to Daylight Saving Time this weekend but it'll be another two weeks before Europe does. Just in ca…
2021-03-09 22:03:44 "if you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people." https://t.co/FtJA8UMfRb
2021-03-09 20:17:12 @sjmielke Have you ever seen a paper that shows "not a fluke" while still presenting a black box?
2021-03-09 20:09:39 @sjmielke That final version of the paper types was the results of the discussion here, also interesting: https://t.co/UbF2maXU4G
2021-03-09 20:09:00 @sjmielke I don't think there's a one-size-fits-all answer, but ">
2021-03-09 19:26:04 @sjmielke I said "useless noise", but wanted to ask: What else did you learn about the task, modeling techniques, their connection to each other?
2021-03-09 16:39:13 @myrthereuver @BrownSarahM Another thing making it particularly difficult is that I wasn't able to fully clear my schedule for FAccT. This is the last week of classes. So I want to be able to 'drop by' something, rather than set up my own Zoom link and then feel obligated to be present there...
2021-03-09 16:10:10 @myrthereuver @BrownSarahM Especially at a conference where I'm pretty new to the community, I want to be able to scan the room, find someone I kinda know, and see if it feels okay to join their conversation.
2021-03-09 16:09:22 @myrthereuver @BrownSarahM Yeah, the worst thing about Zoom for this is that you can't see who's there before barging in. A big Zoom with move-yourself break-out rooms is better, I suppose, but still far from ideal...
2021-03-09 15:56:07 @BrownSarahM It's also interesting to me how much more friction circle seems to present than https://t.co/soFD9oXyPB for text-based conversations.
2021-03-09 15:55:29 @BrownSarahM I see -- thanks for letting me know! It's really too bad that Gather hasn't gotten their accessibility act together, because it's a much better concept for this kind of socializing than Zoom.
2021-03-09 15:47:00 Q for #facct21 attendees: How are folks using the 'social' time? Is there something like https://t.co/su22smN5zp than I'm just not in the loop about?
2021-03-09 14:12:16 @magentaroyle1 @mer__edith @GoogleWalkout I'd say it's definitely worth talking about in your networks!
2021-03-09 14:11:08 @histoftech @YESHICAN I'm not in the loop on #FAccT21 organization, so I don't know.
2021-03-09 14:03:32 @amitava_physics @timnitGebru Thank you!!
2021-03-09 13:53:15 RT @DingemanseMark: Counterpoint: language is the most robust and flexible brain-to-brain interface known, providing an infrastructure for…
2021-03-09 13:07:45 RT @timnitGebru: I honestly am blown away by the support we've gotten (forgetting haters &
2021-03-09 03:27:02 @dinabass @timnitGebru I had large, wooden, bright multi-colored parrot earrings in middle school, too!
2021-03-09 03:24:12 @dinabass @timnitGebru Yes. I originally thought of earrings, but actually didn't know who among the authors has pierced ears, so this seemed like a better choice :)
2021-03-09 03:20:55 @timnitGebru And +1 on who knew??
I got these because I thought it would be a fun way to commemorate working together on the paper. Got them in early December, but hatched the idea well before that.
2021-03-09 03:17:05 @timnitGebru I find myself really missing the in-person conference experience for FAccT. It would been really fun to all show up wearing these ... and get to see each other!!
2021-03-08 23:15:16 @Emil_Hvitfeldt Yeah, good idea. In this case, it's audio alerts (no one is sharing their screen), but the same hack would work!
2021-03-08 23:11:31 PSA to presenters at online conferences: Especially if you are not using a headset, PLEASE turn off notifications on your device before you're "on stage". (And silence your cell phone.)
2021-03-08 22:17:04 RT @kate_mckean: When I sign an email “Yours” it’s not a term of endearment— it means this email is now yours I’m done with it get it away…
2021-03-08 21:32:14 @CortexNihilo Posted! https://t.co/BX1Jh4fJGZ
2021-03-08 21:29:00 @CortexNihilo @threadreaderapp please unroll ... thank you!
2021-03-08 21:25:21 @CortexNihilo I'm interested in collecting and linking to translations &
2021-03-08 21:24:43 @CortexNihilo It's really great! I think the only thing I spotted that might not be 100% is this one, as the problem with reinforcing stereotypes extends to actions beyond speech acts. (But perhaps we weren't very clear about that!) https://t.co/BYX4PJd6eu
2021-03-08 21:21:24 Thank you, @CortexNihilo, for this great summary of our paper! To my French-speaking friends interested in the content of the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ", a thread: https://t.co/LTzNAHbbg7
2021-03-08 21:06:32 Really appreciating this summary by @CortexNihilo of our paper ... including this bit where French vocabulary makes the point very pithy! https://t.co/tx2sUb6ycZ
2021-03-08 18:30:35 @YESHICAN ...
and also so impressed with the long-term vision she expresses at each point, describing each project in terms of being part of a longer trajectory of building political power of marginalized people (esp. Black people).
2021-03-08 18:24:08 Listening to @YESHICAN 's powerful keynote at #facct21 and getting more and more furious.
2021-03-08 18:07:21 RT @SeeTedTalk: well this about sums it all up : we'll never achieve fairness, accountability, and transparency until we first demand just…
2021-03-08 17:54:53 RT @SeeTedTalk: takeaway from @MarzyehGhassemi consider what you are willing to trade for safety automated robotic dogs don't make you saf…
2021-03-08 17:52:21 RT @FAccTConference: "We are trying to replace public heath service with machine learning." @rajiinio "Don't do that." @MarzyehGhassemi r…
2021-03-08 17:51:59 @MarzyehGhassemi @rajiinio @sedyst .@sedyst points out that ML methods only seem cheap because so many resources are being thrown at them right now. What if those resources went instead to employing people (e.g. in public health) &
2021-03-08 17:50:00 Excellent panel at #FAccT21! So many highlights: @MarzyehGhassemi and @rajiinio calling out tech solutionism in the early days of COVID, @sedyst bring up "data PacMan" and the problem w/training monoculture at universities. All talking about artificial scarcity >
2021-03-08 17:42:53 RT @alexhanna: Google is not going to #MakeAIEthical, but we can. - Conferences + journals: require an explicit statement on paper review p…
2021-03-08 15:00:00 Calling folks in #NLProc, do we have the political will to get this done? https://t.co/UhCS9mkO6C
2021-03-08 14:59:31 Calling folks in #NLProc --- as we work on our *CL reviewing procedures, can we incorporate this? https://t.co/bFYlKFUtwr
2021-03-08 14:58:08 Have you been watching Google's attacks on their own Ethical AI team, worrying about the implications for AI research and society, but feeling helpless to do anything about it?
If you are a researcher (student or otherwise) in AI, @GoogleWalkout has some concrete suggestions: https://t.co/LWlLzqPACO
2021-03-08 14:05:55 TFW you wake up and first thing in the morning the conference hashtag is super busy. #LifeInAPokeyTimeZone
2021-03-08 05:16:54 @WellsLucasSanto :) I don't see any need to take the tweet down. I'm looking forward to the paper's session on Wednesday and hope you can join us then!
2021-03-08 05:07:21 @soldni I just noticed now, while reading the agreements. (Talk recordings were due a while back
2021-03-08 04:48:33 Getting ready for #FAccT21 main conference starting tomorrow and reviewing the agreements, as we were invited to do in the welcome message. Here's a key point, that I think all conferences should emulate: "Do not record or screenshot any session without notification and consent."
2021-03-08 01:20:59 @anoushnajarian @sanhitamj @timnitGebru @EmtiyazKhan @LWH_Bos *
2021-03-08 00:09:27 @anoushnajarian @sanhitamj @timnitGebru @EmtiyazKhan @LWH_Bos Thank you / / / / Спасибо !
2021-03-07 23:29:07 RT @RealAbril: Thank you @CiCiAdams_ and @aprilaser for giving so many of us a chance to highlight THIS part of the narrative. I have so mu…
2021-03-07 23:05:53 Still more of the story about what's been going on at Google. Thank you @aprilaser and @CiCiAdams_ for this reporting, and @RealAbril @timnitGebru and @mmitchell_ai for sharing your stories. No amount of counseling of victims can fix a hostile environment, indeed. https://t.co/ZT9NuY3PUZ
2021-03-07 22:31:13 @sanhitamj @anoushnajarian @timnitGebru Thank you!
2021-03-07 22:23:58 @sanhitamj @anoushnajarian @timnitGebru No hurry! And thank you very much for your effort.
2021-03-07 22:23:17 @anoushnajarian @timnitGebru @sanhitamj I'll gladly add links to translations here! @sanhitamj --- I've tried to include your piece in my list of media coverage. I don't read Marathi, however. If you have a minute, can you let me know if this looks okay?
https://t.co/BX1Jh4fJGZ
2021-03-07 22:14:01 @anoushnajarian @timnitGebru Wow, thank you @anoushnajarian and @sanhitamj ! What an honor. If you make translations, please let me know if they are summaries or full translations (i.e. roughly sentence by sentence). For a few languages, I'd be able to tell myself, but that number is small.
2021-03-07 18:23:39 RT @nitashatiku: My story‘s on the front page of the @washingtonpost‼ I talked to @RealAbril, Google workers, HBCU grads/faculty, &
2021-03-07 14:05:55 RT @roger_p_levy: Psycholinguists – our field has a new journal, in partnership with the #openaccess trailblazer @glossa_oa! We look forwar…
2021-03-07 04:12:26 Among things that apparently need repeating from time to time: https://t.co/oYZC1Irfky
2021-03-07 04:11:18 @mdekstrand @ian_soboroff Agreed. Also, it's important to keep the use case(s) at hand in focus, to the extent possible. What kind of bias matters &
2021-03-07 03:54:05 @ian_soboroff This is the point of proposals like model cards, datasheets, and data statements.
2021-03-07 03:53:24 @ian_soboroff Yes, it is tricky, and it has to be recognized from the start that there's no such thing as a perfectly unbiased dataset. That said, it is worth carefully considering where you're collecting the data and documenting it well, so that there is a hope of knowing the biases.
2021-03-07 03:09:37 RT @XandaSchofield: Harvey Mudd CS (my department!) is hiring a Visiting Assistant Professor: https://t.co/fqu6bFYENl
2021-03-06 16:47:46 I can't tell if this is just a sign that this sign-in page has been untended for a long time, or whether it's some really exquisite shade on the part of Microsoft. https://t.co/Od9A5GbfO7
2021-03-06 16:36:10 @BenjyRayMunson @timnitGebru Thanks, @BenjyRayMunson :) It's not the troll's claim that I'm "nobody" that's troubling (though I am enjoying everyone's kind words to the contrary!) but rather the idea that I would harm my co-authors with the goal of seeking fame.
2021-03-06 04:49:14 @yuvalpi @asayeed In grad school, I wrote a paper on the Mandarin construction &
2021-03-06 04:17:18 @kathrynbck Solipsism?
2021-03-06 01:27:46 @asayeed
2021-03-06 01:26:01 @lauriedermer Wowul!
2021-03-06 01:03:41 @_eddieantonio Ugh.
2021-03-05 23:17:08 @yoehanee That's this one: https://t.co/z1F7fESFOn
2021-03-05 23:16:38 @asayeed I do have an opinion there, too, but this is a bit more specific: it's people denying that humans actually perform joint attention and intersubjectivity and might as well just be string generating machines...
2021-03-05 22:02:34 There's a genre of response to my work on LMs ( and ) that I refuse to respond to anymore, that goes something like "How do we know that other people aren't just sending text in response to our text?" If you refuse to presuppose my humanity, I don't think I want to converse.
2021-03-05 20:02:13 @timnitGebru I'm sure he had no idea who I was before all of this (I was "nobody" to him), because he is *not in my field*. I wish it had stayed that way.
2021-03-05 20:00:14 @rctatman That is 100% infuriating and you have every right to be angry!
2021-03-05 18:32:05 @rctatman
2021-03-05 18:24:30 It is super cringe, though, to read the presentation of the harassers' positions. I hope that it is obvious to most (preferably all) readers how off-kilter they are.
2021-03-05 18:23:22 It's painful to read a recounting of (and thus relive) harassment directly experienced, but I think this reporting is important, especially in that it makes visible to others the pattern that was clear to us as targets but maybe harder to see from the outside. https://t.co/OtPrdEcJXT
2021-03-05 17:52:57 RT @le_science4all: The paper that led to the dismantling of Google's ethics team is officially published!!
This paper will mark without…
2021-03-05 15:56:50 @EvRedecker @ah__cl You're making my day :) @timnitGebru @mcmillan_majora @mmitchell_ai see
2021-03-05 03:17:17 Thank you, @nitashatiku for this important reporting and @RealAbril for sticking it out as long as you did, fighting upstream against so much BS.
2021-03-05 03:16:41 Oh, and if someone sets up programs to help Black engineers through the various hoops that Google sets up (to keep them out): gotta cancel that, that's "special treatment" and "unfair".
2021-03-05 03:15:27 This is required reading ... and case study in how to screw up DEI efforts. Google not hiring enough Black engineers? Clearly the problem must be with the training the Black engineers are getting &
2021-03-04 20:56:13 RT @_alialkhatib: !! come to the info session! ask me stuff!
2021-03-04 20:50:11 @yunyao_li @wbelluomini @AnnaWTopol @RuoyiZhou @asoffer @rama_akkiraju @mffacm @tessalau @senseofsnow2011 @elgreco_winter @astent @fusionconfusion @jmgarcia82 @radamihalcea @BreneBrown Thank you, @yunyao_li !!
2021-03-04 20:13:24 @LingMuelller @adyantalamadhya I know that in some other traditions, people are told that using 'I' is arrogant or some such. I think it is important to take ownership of and responsibility for one's own work.
2021-03-04 20:12:40 @LingMuelller @adyantalamadhya I think there are at least the following uses of 'we' in English, in kind-of singular contexts: 1. Royal we 2. Mathematician's we (ostensibly including the reader) 3. Engineer's we (invisible collaborators?) 4. Nurse's we 4 is like what you say: "How are we feeling today?"
2021-03-04 19:29:53 @adyantalamadhya I use I and get grumpy when people use we.
2021-03-04 16:37:47 @alienelf Can maybe help to remind yourself of everything else (that you care about that) that you've said 'yes' to. The 'no' this time is actually a way to make sure that you can make good on those 'yes'es.
2021-03-04 15:08:12 Burning question: what's the official @FAccTConference hashtag?
I'm seeing both #faact2021 and #faact21
2021-03-04 14:54:19 RT @aclmeeting: The mentoring program co-chairs have prepared 5 videos to help reviewers dealing with the reviewing process, the new review…
2021-03-04 05:57:16 @samsaranc Furthermore, if there is increased interested in what can be done with medium to small sized data sets, that probably lowers the barriers to entry.
2021-03-04 05:56:39 @samsaranc It's definitely good to consider this angle, but I think the answer lies in encouraging sharing of data, and in particular in constructing datasets that can be shared.
2021-03-03 23:05:55 @lucy3_li @cfiesler @michaelzimmer Ohh! Also, I want to make sure you know about this part of the discussions: https://t.co/C7LNem1KVg
2021-03-03 22:39:22 @AlexBNewhouse That's a great counterpoint (and I really appreciate your work!). I guess some questions I would ask are: what are you doing with the data, and is it qualitatively different than what would happen if you were looking at the data in situ, rather than scraping it?
2021-03-03 22:24:03 And raising the bar of expectations about what goes into a research paper should also help with the reviewing overwhelm. (Small MPU is only part of the problem: over-concentration of resources in “AI” leading to LOTS of people following those resources is also part of it.) /fin
2021-03-03 22:23:01 The expectation that a research contribution can be built quickly on scraped data & >
2021-03-03 22:22:39 (I hesitate to say "pace of science", because I'm skeptical that the current pace is actually really nurturing of science.) >
2021-03-03 22:22:01 This ofc takes more time & >
2021-03-03 22:21:26 Finally, they asked me how this would work in my ideal world. My answer: Opt-in data collection, where the authors of the content choose to donate their posts. >
2021-03-03 22:21:16 We talked about (& >
2021-03-03 22:21:00 Had a conversation today about the ethics & >
2021-03-03 21:57:48 @jessgrieser How..????
2021-03-03 21:29:45 Currently being reminded that the host/moderator role in panels involves a fine balance between saying *enough* that the speakers are appropriately honored while not hogging the floor....
2021-03-03 20:51:36 @BlancheMinerva @YJernite @mcmillan_majora I think the suggestion is to create a structured exercise, along the lines of what we did in the data statements workshop...
2021-03-03 17:24:20 @e_davishale Shhh
2021-03-03 17:20:32 Don't platform eug*ncists challenge 2k21 https://t.co/weJ2DEqxfu
2021-03-03 16:00:27 PSA to journalists working in this space: "AI" (and especially "AGI") attracts some people with absolutely vile views on intelligence. Esp. if someone is posting anonymously, check around a bit before giving them a platform. https://t.co/LdtXCmQj0B
2021-03-03 15:58:53 The absolute worst thing about working in the space I currently find myself in is being quoted in articles alongside eug*nicists. (Not about eug*nics, but still.)
2021-03-03 14:30:30 More excellent reporting by @kharijohnson on the continuing fallout of Google firing @timnitGebru and @mmitchell_ai https://t.co/aRAaanmLmt
2021-03-03 14:24:56 RT @emilymbender: Now waiting for the first journalist to actually include the . It's part of the title, you cowards!
2021-03-03 14:20:53 RT @MattGoldrick: Deadline 6 April: Postdoc, sociophonetics of voice quality, Aarhus Univ. https://t.co/OsQF3z3Sku
2021-03-03 14:09:56 RT @RDBinns: Finally read the Stochastic Parrots paper by @emilymbender @timnitGebru @mcmillan_majora &
2021-03-03 14:09:53 @RDBinns @timnitGebru @mcmillan_majora @mmitchell_ai Thank you, Reuben!
2021-03-03 05:11:14 Is there a setting for browsing the ACM Digital Library where the pages don't have things popping up and wiggling around? I'd prefer to never see the "recommended" pop up, and not have the space-hogging drop-down menu thingy at the top of the page TYVM.
2021-03-03 02:55:40 RT @gleemie: UCSD seeks a cluster of Assistant Professors in STEM, including Engineering / Computer Science who work on racial and ethnic e…
2021-03-02 20:24:14 RT @timnitGebru: https://t.co/uKj4ZHzNXa from @financialtimes "The US Congress has been considering an Algorithmic Accountability Act, whi…
2021-03-02 14:37:22 RT @alicegoldfuss: March_2020 March_2020(1) March_2020_final March_2020_FINAL March_2020_FIXED March_2020_FUCK
2021-03-02 13:47:36 RT @aclmeeting: ACL: We're looking for your input on NLP groups, associations, networks and #NLProc initiatives (like online seminars/meetu…
2021-03-02 05:28:12 @TaliaRinger Sunbeam appreciation, by Euclid https://t.co/7khJuiat3k
2021-03-02 02:11:10 RT @VerbingNouns: IF YOU ARE NOT A LINGUIST can you please do this experiment for a friend? takes 20 min: https://t.co/eFWdO6mh7x
2021-03-01 23:52:18 @timnitGebru @JeffDean The whole point of peer review is to determine what does and doesn't merit publication. We'll be presenting the paper at #FAccT2021 next week, TYVM. We didn't need additional "reviews" from the ML fanboys and trolls you sent our way.
2021-03-01 23:50:19 @timnitGebru I should have kept a tally of how much time I've spent over the last two months dealing with this crap because it pleased @JeffDean to say in a public post that our paper "didn't meet the bar" for publication.
2021-03-01 23:46:08 And it's not just about "people who don't want to be contacted" FWIW. He's been spamming all kinds of folks with derogatory remarks about @timnitGebru, me, and others who stand up for us.
2021-03-01 23:42:02 Better late than never, I suppose, but as one of the targets of his harassment, I could have wished that you didn't embolden him and his like in the first place, nor sit by for two months while this went on. https://t.co/XvOJUwfkhw
2021-03-01 23:07:43 @syardi @alexhanna @mdekstrand As @MCHammer (2021) so eloquently put it: When you measure include the measurer.
2021-03-01 23:02:14 @alexhanna @mdekstrand @syardi Yeah -- much easier with the bibtex styles that give author-year citations instead of the silly boxed numbers!

2021-03-01 22:56:22 @mdekstrand @alexhanna We do some of that, but of course we don't do it for all citations. At least with (author year) as the parenthetical citations, you get the bare minimum positionality of *temporal location*, plus signals such as how many different authors are being cited, etc.

2021-03-01 22:41:37 People on Hacker News discussing the Stochastic Parrots paper vs. what we actually say in the paper. I'd say this shows the downside of the ACM citation style, which makes it far less obvious when a point is being grounded in previous literature, not "assumed as fact". https://t.co/D4nTEu1SlB

2021-03-01 19:54:16 "bridges without vehicles" is so apt! (And see the whole thread) https://t.co/BRQPS8Qnoa

2021-03-01 19:51:09 RT @SeeTedTalk: i've read #StochasticParrotts a few times. i have no flaming hot takes, only small thoughts. it's a really good paper. igno…

2021-03-01 19:51:05 @SeeTedTalk Thank you, @SeeTedTalk! I really like the "bridges without vehicles" metaphor, too.

2021-03-01 17:02:46 @j_w_baker @timnitGebru Thank you, James!

2021-03-01 13:58:41 Thanks @LingMuelller I know we were a bit slow there in the end, but a fun side effect of that is the seasonal topicality of this post, as sometime in the next couple of weeks @uwcherryblossom will start their show https://t.co/zhfFQ8L1ue

2021-03-01 06:17:37 @DingemanseMark @MSFTAcademic And I guess the whole @aclanthology counts as "computer science" and (worse) "artificial intelligence"? *sigh*

2021-03-01 06:00:13 Just created a profile on @MSFTAcademic and claimed my papers ... and I am deeply puzzled by this graph on my profile page. What data is this based on? It is definitely *not* an accurate representation of the prevalence of topics in my published work. https://t.co/oBv9B2KN6q

2021-03-01 02:40:54 RT @emilymbender: And here's a direct link to the concert recording (scroll down the page a bit): https://t.co/IGg1OrhIMV @tobysmenon 's…

2021-03-01 02:40:52 RT @emilymbender: Proud mama moment: My son @tobysmenon's first concert as a composition major at UCLA's @UCLAalpert !! Check out his Insta…

2021-03-01 01:08:48 @anggarrgoon @HadasKotek These are all great points :) I got so fed up about this that I wrote two books....

2021-02-28 20:50:28 @Spectregraph @tobysmenon @UCLAalpert Thanks :)

2021-02-28 20:49:12 And here's a direct link to the concert recording (scroll down the page a bit): https://t.co/IGg1OrhIMV @tobysmenon 's piece Earthen, performed by Isabelle Fromme, is the first one. https://t.co/PZGHCO1krQ

2021-02-28 16:50:21 @sibinmohan @tobysmenon @UCLAalpert Thank you!

2021-02-28 16:12:01 @anoushnajarian @tobysmenon @UCLAalpert Thank you :)

2021-02-28 16:11:50 @AndyPerfors Aw, thanks, Andy!

2021-02-28 15:52:04 Proud mama moment: My son @tobysmenon's first concert as a composition major at UCLA's @UCLAalpert !! Check out his Instagram for the link to the recorded concert & https://t.co/1ppYyYR1nB

2021-02-28 01:22:41 @ian_soboroff I think I'll start with the conf publications chairs, but I'll let you know if I need that contact! Thanks :)

2021-02-27 23:19:08 RT @ctdicanio: Four openings in the new department of Indigenous Studies at UB. Two are clinical (renewable 3 year contract positions, with…

2021-02-27 21:12:43 @trochee Hopefully it was mostly emeriti and not emeritae?

2021-02-27 19:08:42 RT @RadicalAIPod: ALSO we want to make sure to center the work of scholars now experiencing public backlash for doing the necessary work of…

2021-02-27 17:31:47 So it looks like the #FAccT2021 papers have entries in the ACM Digital Library now, but just the metadata (including bibliographies) and no pdfs yet. Meanwhile, ours is missing its 🦜 in the title. Should I wait and see if that's still true when the pdf posts?
2021-02-27 17:11:35 Thank you, @AndyPerfors -- your post not only goes to the heart of the present issue and explains it so so clearly (and gently, though one could wish that level of gentleness wasn't required) but also develops really important points about academic integrity &

2021-02-27 16:53:47 RT @jad_kabbara: Few more days until the pre-submission mentoring deadline for the @acl_srw 2021! If you're an undergrad or junior grad st…

2021-02-27 00:39:23 RT @KoryStamper: "Mr." from the Latin "magister": masculine. "Potato" from the Taino "batata": feminine. "Head" from the Old English "hēa…

2021-02-27 00:27:51 @complingy @EmmaSManning @ThatAndromeda And when you're knee-deep in a very narrow hole, I guess it is easy to feel small. But you aren't! You are creating knowledge that wasn't there before! It's just hard to feel any appreciation for it when it's been hanging over you for a couple of years. (Been there.)

2021-02-26 20:21:25 @deliprao These discussions will have impact if they lead to regulation: by governments and also by scholarly organizations. "Corporations gonna corp" helps nothing.

2021-02-26 20:13:01 @deliprao Evergreen counterpoint to "genie is out of the bottle": https://t.co/z0HBPD37Dy

2021-02-26 19:19:09 @kirbyconrod Scope resolution FTW

2021-02-26 17:40:48 Not saying that I necessarily deserved that particular grant, just that it was a very on-the-nose example of how the corporate $$ going to fund AI &

2021-02-26 17:39:38 I applied for the first round of the Amazon/NSF funding, as part of a multidisciplinary/multicampus team. But I couldn't be the PI of the grant, because they required the PIs to be in CS depts. (And then we were rejected anyway, for not being "CS" enough...)

2021-02-26 17:38:38 One thing that is sticking with me just now is your answer to the question about how to get CS depts to build bridges to the other fields of study &

2021-02-26 17:37:19 This was awesome!! Thank you so much @mer__edith for your clarity and for taking the time to lend your expertise here. Your envisioning is so important and you are a role model in how to do that while lifting up others. https://t.co/xySxcMBGqi

2021-02-26 14:54:58 New today: an article in @FastCompany by @kschwabable that presents the broader context of AI ethics, AI harms, and who is working on this and how https://t.co/0mPfTtNrVr

2021-02-26 14:28:15 Thank you @kschwabable for reporting that puts Google's actions in firing @timnitGebru and @mmitchell_ai in their broader context: the deployment of AI in the world, the co-opting of AI ethics, and the people &

2021-02-26 04:51:26 @timnitGebru I wonder if their internal "investigation" looked into the difference between these two papers and learned anything.... (doubt it).

2021-02-26 04:47:54 @timnitGebru Reading that story I couldn't help but notice, though: theirs got "we conducted a thorough edit"

2021-02-26 04:33:31 RT @_alialkhatib: it's distressing that after work like @gleemie &

2021-02-26 04:10:00 @geomblog @mdekstrand @curtosys Just as with divorce, which is often seen as sad when it's really the resolution to something else sad, loss of trust when the object of trust wasn't trustworthy is actually progress.

2021-02-26 03:45:05 @mdekstrand @curtosys I think we'd definitely need the professional orgs involved one way or another (ACM, ACL, etc). It'll take some pushing, though, as those orgs in turn probably don't want to threaten corporate sponsorship. (Also: the policies will have to work internationally.)

2021-02-26 03:42:58 @curtosys @mdekstrand Hmm ... well, documentation of company research publication approval processes. I'd expect it to apply across projects/departments/papers, not specific to each paper.

2021-02-26 03:39:38 @curtosys @mdekstrand And then I guess also: Under what conditions do they just block publication?
2021-02-26 03:39:14 @curtosys @mdekstrand Maybe one thing we could hope for would be clear statements from all companies employing researchers who publish at conferences (/in journals) of what their review process entails. Who is doing the reviewing? What are they looking for? What kinds of changes can they request?

2021-02-26 03:38:10 @curtosys Yeah, could be tricky. @mdekstrand had some interesting thoughts here: https://t.co/H6OFiofTOU

2021-02-26 03:31:31 @ssshanest

2021-02-26 01:37:03 @mdekstrand @rajiinio @amandalynneP @cephaloponderer @alexhanna Apropos https://t.co/jlPpKomYGP (But more seriously, we are working on further iterations of that paper. More soon, I hope!)

2021-02-25 18:13:45 @mdekstrand I'm curious what reasonable liability screening would be, actually. "Don't publish this because it shows we know the problem and thus could get sued for not handling it" ... probably isn't it, right?

2021-02-25 18:10:05 @athundt @aclmeeting @NAACLHLT @emnlpmeeting Well, yes, but all of this feels like a derailing of the point of my initial thread. I'm talking about the *lack of transparency* around research done at Google, specifically.

2021-02-25 17:35:27 @athundt @aclmeeting @NAACLHLT @emnlpmeeting But those IRB criteria need to be evaluated external to the companies then, or we'll have even less transparency, I'm afraid.

2021-02-25 17:07:33 The more I think about it, the more I think we do need this. Not just for #nlproc --- I'm sure similar considerations apply elsewhere. https://t.co/oPMhcza0J7

2021-02-25 16:58:57 Thinking about #NLProc only, should @aclmeeting @NAACLHLT @emnlpmeeting et al require attestations that papers we publish in those venues haven't been edited by corporate lawyers?

2021-02-25 16:58:11 Back to "conducted a thorough ___" What does your internal language model complete that sentence with? Who would be editing papers with the same attitude as someone conducting an audit? How does that fit in with what we understand the process of scholarship to be?

2021-02-25 16:53:03 This reporting isn't that transparency, either. It appears to be based on leaked information, i.e. NOT Google actually providing transparency.

2021-02-25 16:52:13 This is striking reporting. Also jumped out at me: the phrase "conducted a thorough edit". If Google wants to retain any credibility, what's needed is transparency, not just internally but also externally about their paper review and "editing" processes. https://t.co/GflFFpAmAA

2021-02-25 15:12:19 RT @pardoguerra: Maybe making AI tractable/explainable isn't only about making the algorithm more transparent but also about fostering orga…

2021-02-25 14:42:31 RT @emilymbender: If you follow @timnitGebru and @mmitchell_ai, it may feel like everyone must know the story about how @Google fired them…

2021-02-25 14:23:24 I've been reading about a faculty member at U Melb and their awful website set up to curate &

2021-02-25 14:10:47 RT @timnitGebru: Excellent tictoc video. https://t.co/WsXtk8XjKy

2021-02-25 01:10:17 @anoushnajarian @le_science4all Thanks, @anoushnajarian --- what a great resource!

2021-02-24 23:28:22 Thread with detailed, informed, important information about how @Google fired @timnitGebru and @mmitchell_ai and what it means: https://t.co/rRjWWyEjmz

2021-02-24 23:27:29 For anyone whose reaction on reading this was "Well, I don't have enough information..." I've written a thread for you! (linked in next tweet) https://t.co/WcSVLxUuqj

2021-02-24 23:26:30 And in their open letter (the same one @le_science4all talks about in the videos above), which lays out what Google could have done (rather than doubling down on terrible decisions): https://t.co/wDiZsPBa6L

2021-02-24 23:25:27 @timnitGebru Further important context comes from @GoogleWalkout in this Medium post: https://t.co/1ppS88RgoO >

2021-02-24 23:24:36 @timnitGebru Likewise this interview by @kharijohnson where @timnitGebru lays out the inaccuracies (and *harms*) in Google's statements about the situation: https://t.co/u1STh7u4iS

2021-02-24 23:22:38 Including this piece, where she interviewed @timnitGebru about her last weeks at Google: https://t.co/BJBf74pWFZ >

2021-02-24 23:21:48 @timnitGebru @mmitchell_ai Also important is @_KarenHao 's careful coverage of the story: https://t.co/BtBJyiN8YF >

2021-02-24 23:20:37 @timnitGebru @mmitchell_ai @Google @le_science4all Same video in English: https://t.co/r5Sm4EI2uO >

2021-02-24 23:20:19 @timnitGebru @mmitchell_ai @Google Other valuable sources include @le_science4all 's videos (in French and English) French: https://t.co/6yzLJCwjlm >

2021-02-24 23:18:36 If you follow @timnitGebru and @mmitchell_ai, it may feel like everyone must know the story about how @Google fired them (& https://t.co/9rTMylGvUw >

2021-02-24 22:34:50 RT @TeachingNLP: CFP updates! We've shared our Instructions for Reviewers &

2021-02-24 20:43:57 @Kelly_Clowers @rcalo Yes! Sundials in the Southern hemisphere go the other way! But sundials here in Seattle (when they work) go (what we already call) clockwise.

2021-02-24 20:23:26 @Kelly_Clowers @rcalo I came here to ask the same question...

2021-02-24 19:13:50 Thank you @cephaloponderer for this insight into their and your amazing work. I definitely experienced the value of this when working with you and other members of the team but only saw a fraction of it. https://t.co/uLoQefmAFS

2021-02-24 17:30:43 RT @NAACLHLT: Considering submitting to #NAACL2021 workshops? Deadlines are approaching and so many great workshops to consider! To help au…

2021-02-24 17:30:32 RT @le_science4all: The situation has gone from horrible to TERRIFYING, as more information is revealed, and as @Google fired @mmitchell_ai…

2021-02-24 13:18:19 RT @superlinguo: Aww yeah #LingComm21 call went out on @linguistlist so it's real now!

2021-02-23 22:51:01 @zacharylipton I could recommend some, but maybe not from the past 3 years.

2021-02-23 22:31:14 Come join us at #ComputEL! https://t.co/nGWlrPJfOi

2021-02-23 21:27:40 @alienelf Thank you for your careful work!

2021-02-23 21:23:08 @alienelf True, but maybe not the ones you are most interested in reading...

2021-02-23 19:53:48 @alienelf Sorry

2021-02-23 19:00:46 RT @emilymbender: For my French-speaking friends... https://t.co/2E8GpuIgG8

2021-02-23 05:36:24 @timnitGebru Agreed that it's worse now, since it's so much clearer how false it all is. Both given how they've treated Meg (and your team) in the meantime and (for me anyway) the greater clarity as to what happened just before 12/3.

2021-02-23 05:30:59 @timnitGebru I can't read that without getting furious at just about every sentence ... and it must be 1000x worse for you.

2021-02-23 03:25:42 RT @MintakaGlow: Polysemy in action. https://t.co/RZyisp64Z6

2021-02-23 01:56:13 For my French-speaking friends... https://t.co/2E8GpuIgG8

2021-02-23 00:04:29 @joshraclaw @mayhplumb The proximal cause of my complaint wasn't an abstract (or a paper) but a deadline for a letter of recommendation (for a national scholarship).

2021-02-22 23:46:03 @joshraclaw The famous "Anywhere on Earth"

2021-02-22 23:29:04 @joshraclaw Hawai`i would like a word...

2021-02-22 23:22:14 Somehow I find converting from UTC *way harder* if it's given as e.g. 8pm rather than 20:00, even though I don't usually use the 24-hr format for times locally.
2021-02-22 22:27:48 @JCornebise Very good point about the servers!! (Though I can imagine then people in pokey timezones like mine starting to be in demand as coauthors, just to get the later deadline. Sigh.)

2021-02-22 22:23:45 @CaroRowland Wow -- grr.

2021-02-22 22:23:37 @JCornebise I think that's quite different to "11:59pm Feb 22 AOE", since then it's up to you to decide if you want to figure out how much of your Feb 23 that includes and if you're going to use that.

2021-02-22 22:22:49 @JCornebise This isn't that kind of deadline: it's for a letter of recommendation for a student. And it's "5pm Central Time, Feb 22". It's a good thing I thought to check the actual time.

2021-02-22 22:19:27 @zehavoc For in-country things, I don't even really mind "midnight eastern time". It's deadlines that land in the middle of my afternoon that are particularly infuriating.

2021-02-22 22:15:14 (Remembered this one in time, but just barely.)

2021-02-22 22:15:02 Deadlines that are 5pm Central Time or Eastern Time are really completely unreasonable. I'm submitting it electronically. 5pm is the end of the workday for whoever is receiving it. Why can't I have my full workday? Grrr.

2021-02-22 19:59:54 RT @jonathanmay: The NAACL Regional Americas Fund is now accepting proposals for 2021-2022! Grants of up to $1500 are available for NLP/Com…

2021-02-22 19:57:16 RT @UWlinguistics: Congrats to Prof. Sharon Hargus on well-deserved LSA Kenneth L. Hale Award! https://t.co/IZ9I3rpnhD

2021-02-22 17:54:25 RT @timnitGebru: That shouldn't be the title. Should be Google AI head apologizes for how people feel about it in the morning and fired t…

2021-02-22 17:51:14 @myrthereuver @MarijnSax @evanmiltenburg @annargrs @aclmeeting I think "societal impact" has some promise as a term, but "NLP for (social) good" doesn't cover the full space and (depending on who's promoting it) can in fact be quite suspect: https://t.co/g6v8rOm8i6

2021-02-22 17:50:12 @myrthereuver @MarijnSax @evanmiltenburg @annargrs @aclmeeting Just speaking to what I have read, which generally starts from the notion that it's dealing with competing needs of people who are otherwise on a level playing field. That is not where we are, and approaches that don't include an analysis of power dynamics are insufficient.

2021-02-22 17:49:03 @myrthereuver @MarijnSax @evanmiltenburg @annargrs @aclmeeting I definitely don't want to say that nothing in philosophical ethics is relevant/useful, as I'm far from an expert in the area. >

2021-02-22 15:09:09 @MarijnSax @myrthereuver @evanmiltenburg @annargrs @aclmeeting In fact, I think it was a mistake to talk in terms of "ethics and NLP" or "AI ethics" for this reason. But the term seems to have stuck and now I guess we're stuck with it.

2021-02-21 22:11:36 @JonathanBranam @osazuwa @csdoctorsister Oh absolutely! The point is that the field of medicine doesn't also have workshops on "medicine for good".

2021-02-21 21:18:02 RT @osazuwa: @emilymbender @csdoctorsister More simply, it feels like a Freudian slip among a research community who implicitly believe…

2021-02-21 21:17:55 This is not to say that everyone who has published in such workshops or taken such funding is themselves adopting that POV, but as a whole, that is how it comes off.

2021-02-21 21:16:46 And all of that suggests an adversarial stance towards AI ethics work that really doesn't need to be there. See, e.g.: https://t.co/nyYFyZQtmp >

2021-02-21 21:13:20 In that context, "AI for social good" initiatives seem to be a way of saying: but you can't put the brakes on any of this, because, see, it also can be used for good. If we don't help those poor disenfranchised "others", what will become of them? >

2021-02-21 21:11:33 @csdoctorsister The purpose of doing research to identify those harms is so that they can be prevented/mitigated/made right, by some combination of better system design and appropriate regulation. >

2021-02-21 21:10:19 .@csdoctorsister brings out a really important point here. "AI for good" events (workshops, funding sources, etc) feel like a deflecting response to critical work that aims to understand the possible harms of pattern recognition at scale. >

2021-02-21 19:20:39 RT @timnitGebru: So we can "truly advance the field"? We created conferences so we can "speak diplomatically." We wrote papers to do that.…

2021-02-21 18:32:18 RT @timnitGebru: I mean that's what happens when someone who is not in this space is now the face of it for a company. They ignore years of…

2021-02-21 18:32:17 RT @csdoctorsister: @timnitGebru Facts. Ethical AI efforts become AI for Good initiatives that'll cancel the harmful impacts investigation…

2021-02-21 17:41:26 @asayeed Looks like yes: https://t.co/FGWHBAHfAV

2021-02-21 16:54:00 @asayeed Is the QASRL set open?

2021-02-21 15:02:23 RT @timnitGebru: This is the wildest thing. The one company that has come under fire for LLMs more than anyone is Open AI. And people there…

2021-02-21 03:54:27 @BlancheMinerva @athundt @timnitGebru @MSFTAcademic Both my website and the paper itself are findable on Bing. But maybe it crawls arXiv more frequently...

2021-02-21 03:24:36 @athundt @timnitGebru @MSFTAcademic My guess is that it's because I put the preprint up on my own web page, rather than on arXiv. (I did that because I know some of these search engines prioritize arXiv over the peer reviewed venues, and I'd rather not have people citing an arXiv version forever.)

2021-02-20 21:18:36 @le_science4all Thank you, @le_science4all ... all very well explained.

2021-02-20 21:18:11 RT @le_science4all: Incidentally, the famous paper that set off this whole story, and that Google preferred to bury to protect the…

2021-02-20 20:05:28 @superlinguo @rctatman Thanks

2021-02-20 20:05:22 RT @superlinguo: Oh hey it's two of my favourite people working in language technology talking about ethics. It'll be worth a listen becaus…

2021-02-20 19:33:40 RT @amandalynneP: Excited to be a panelist for Digital Humanities Day @UW today! Check it out: https://t.co/Oxcw8AqruH

2021-02-20 18:06:28 RT @timnitGebru: But as Claire wrote power is dumb. We were the ones getting them any ounce of credibility and they have done everything po…

2021-02-20 17:06:58 @geomblog I'm fairly sure racism was a factor, too.

2021-02-20 16:56:04 RT @kate_saenko_: Excited to host @timnitGebru as a speaker at Boston University's #AI & Timnit Gebru…

2021-02-20 16:55:33 RT @rharang: In retrospect, it's *amazing* how badly Google bungled its handling of @timnitGebru and her team. There's an alternate worl…

2021-02-20 14:49:45 @sandyasm @GaryMarcus Yep spotted it! https://t.co/TMcdPT0kLt

2021-02-20 14:22:56 RT @LingMuelller: First #HPSG conference that is planned to be online right from the start. So everybody can participate, no travel bans, n…

2021-02-20 14:04:14 Waiting for someone to hybridize this with the "if x wore pants" meme https://t.co/LU9YAyMiE2

2021-02-20 13:53:02 RT @CUNY2021: Registration for CUNY 2021 is now open. Given the virtual format, everyone can register for free! Registration is required to…

2021-02-20 04:26:36 Oh hey -- there is a video version! If you're watching the video, the Euler remarks will at least make a little sense https://t.co/s11DYZH0R9 https://t.co/OdKWNXFZcK

2021-02-20 04:09:23 @kharijohnson Me either!!

2021-02-20 03:24:33 So, be "diplomatic" when daring to criticize tech---even when employed by a company to do exactly that---but no need to be "diplomatic" or even kind &

2021-02-20 03:23:37 Thank you @kharijohnson for this reporting. Stood out: "In a video message, Croak called for more 'diplomatic' conversations when addressing ways AI can harm people. Multiple members of the Ethical AI team said they found out about the restructure in the press." >

2021-02-20 02:05:36 .@Rasa_HQ --- do you have transcripts for your podcasts anywhere? I couldn't find them, but I would hope that an org such as yours would know to prioritize accessibility.

2021-02-20 02:00:26 @blprnt If you can find it: Animusic

2021-02-20 01:38:22 RT @alexhanna: Y'all can I just take a second of this shit and say how good @timnitGebru is? Up until all of the trashstorm today, she was…

2021-02-20 01:38:18 RT @mer__edith: Yes. I keep trying to find words for how real and brilliant and committed to people-not-status Timnit is, and what a wonder…

2021-02-20 01:37:55 @paul_rietschka Well, consider both the venue and the people you are @'ing with these comments (tho I've untagged everyone else now). Given how painful this situation is, did we really need your "well, actually..." contribution?

2021-02-20 01:31:13 @paul_rietschka @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson The consequences might take various forms, e.g. losing the ability to publish @ confs, or more regulation, but it starts with shedding light on what they are doing and not just writing it off as "corporations will be corporations".
2021-02-20 01:30:32 @paul_rietschka @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson In other words, YES we need more funding for non-corp research. But there ALSO needs to be accountability for Google &

2021-02-20 01:29:06 @paul_rietschka @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson But more importantly: If the response is "Oh well, profit motive ¯\_(ツ)_/¯" then what?

2021-02-20 01:28:22 @paul_rietschka @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson I don't think "corporations gonna corp" is a helpful response here. Yes, in a better world we would have far more national funding for independent research. But still: having folks *at the big corps* working on these issues is valuable. >

2021-02-20 01:10:25 @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson I did not expect to get a front-row seat to the biggest series of self-owns in the #AIethics space ... of the decade? ever? on Google's part. So, no longer impressed with Google. Still very impressed with the researchers.

2021-02-20 01:09:05 @timnitGebru @mmitchell_ai @Google @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson (It seems important to note, at this point, in case this isn't clear: neither I nor the UW grad students who were also part of these collaborations were paid by Google. We were just engaging in cross-institution collaboration as we would with e.g. another University.) >

2021-02-20 01:08:22 @timnitGebru @mmitchell_ai @Google Not to mention the calibre of the researchers they'd managed to attract: @timnitGebru and @mmitchell_ai themselves, plus @blahtino @cephaloponderer @vinodkpg @alexhanna @benhutchinson and others. >

2021-02-20 01:06:16 Over the past year, I've had the good fortune to collaborate on research with @timnitGebru, @mmitchell_ai and five members of their team, on two separate projects. At the start, I was impressed with the resources @Google was putting towards this area >

2021-02-20 01:05:10 RT @timnitGebru: I think, along with our papers that are taught in classes, what Google did to our team is also going to be taught in class…

2021-02-20 00:42:37 RT @_alialkhatib: i wrote how unclear it is how Google wants serious researchers to engage with research sponsored by Google after @timnitG…

2021-02-20 00:21:48 @alexhanna Gross. I'm so sorry.

2021-02-20 00:14:47 For those listening and confused on these two points: Euler = my cat, who was getting in my way (at least twice). Treehouse meeting = the compling lab group at UW https://t.co/urbe1yk9oM

2021-02-19 23:23:18 At. Every. Step. Firing @timnitGebru, locking @mmitchell_ai out for five weeks, stranding their team, gaslighting to no end, and now firing @mmitchell_ai To what end? https://t.co/yHp8TmWXmn

2021-02-19 22:54:22 RT @math_rachel: The first step to improving retention is to stop firing the globally recognized experts who are successfully doing the wor…

2021-02-19 22:42:29 @geomblog @FrankPasquale @mmitchell_ai @timnitGebru @sibinmohan @RealAbril That would be an impressive demonstration of inappropriate influence, to say the least!

2021-02-19 22:35:27 RT @ia_pure: The tragedy is that Stochastic Parrots illuminated many failings of current language models that could be remedied. GOOG choos…

2021-02-19 22:03:51 @mmitchell_ai Well, damn.

2021-02-19 21:58:15 Fun to relisten (hi @rctatman !). My two favorite moments: 1. App developers have a super-power! (Listen to find out what :) 2. The implications of the fact that humans interpret language *in context* https://t.co/CrFN4AClgS

2021-02-19 20:13:35 RT @TSchnoebelen: Hey linguists of Twitter—are you or do you know someone working on gender &

2021-02-19 20:11:38 RT @emilymbender: I think this pattern ("can we use GPT-3 for task x?") is connected to this point, from the conclusion of the Stochastic P…

2021-02-19 20:11:26 I think this pattern ("can we use GPT-3 for task x?") is connected to this point, from the conclusion of the Stochastic Parrots paper. In other words, we're seeing one type of harm of mimicking human behavior. Full paper here: https://t.co/CSMZPlJd8h https://t.co/RjPdTRWutn

2021-02-19 20:06:17 This was exactly my thought on reading this. How is hiring more staff around "retention" going to prevent the company from abruptly firing people? "Retention" wasn't the problem. https://t.co/FgCgCZ5GmT

2021-02-19 17:26:16 @geomblog Ouch and yes.

2021-02-19 14:07:51 RT @Rasa_HQ: Join @rctatman on the Rasa Chats Podcast as she talks with @emilymbender from @UW about ethics in #NLP, who's responsible for…

2021-02-19 03:39:52 @MintakaGlow And take care!

2021-02-19 03:39:37 @MintakaGlow Probably a good time to call the advice nurse line (associated with your health insurance or a hospital where you are an established patient).

2021-02-19 01:45:32 RT @superlinguo: Announced speakers for #LingComm21: @mixedlinguist @bgzimmer @VocalFriesPod @lanegreene @becauselangpod @GrantBarrett @Gra…

2021-02-19 01:00:27 @Timnit Google certainly has been creating stressful situations for @timnitGebru @mmitchell_ai and their whole team---and doing exactly nothing to alleviate that stress.

2021-02-19 00:59:33 @Timnit Merriam-Webster on the relevant sense of "diplomatic": "employing tact and conciliation especially in situations of stress" https://t.co/FYr9Ti4P7z

2021-02-19 00:58:22 The way Google is trying to spin this is galling. If they were actually interested in being "diplomatic", they would have had the conversation @timnit was requesting. https://t.co/h2TsKoCf6o

2021-02-18 20:48:42 @timnitGebru

2021-02-18 20:19:14 RT @fchollet: A big reason why research labs that hype up general AI progress are irresponsible: their talking points end up shaping the wo…

2021-02-18 19:38:27 @mmitchell_ai That's beyond reprehensible, Meg. I'm so sorry.

2021-02-18 19:36:36 @alexhanna Just awful. No accountability means no credibility. Thank you for continuing to shine a light on this.

2021-02-18 18:14:54 RT @studies_centre: Are you a sign language interpreter working in multilingual settings with a min 3 years experience? If yes, we'd be rea…

2021-02-18 18:06:51 @timnitGebru @JeffDean Ugh. I'm so sorry. Both predictable (given past behavior) and awful. And you are totally right: If Google won't listen to the voices telling them that (&

2021-02-17 19:08:16 @_akpiper @quinnanya @timnitGebru That sounds a lot like what Gehman et al explored here: https://t.co/XewdgRv1MF

2021-02-17 19:06:41 Thank you, @StatalieT and the #DataEthicsClub crew for this thoughtful discussion! I was particularly intrigued by the idea that learning how to trick chatbots can burst the hype bubble. https://t.co/ryn0Cn4701

2021-02-17 06:22:44 @RishiBommasani Thank you, @RishiBommasani ! This really has been an amazing evening for me with so many kind words on Twitter!!

2021-02-17 04:59:47 @TermyCornall Thanks, @TermyCornall!

2021-02-17 04:57:20 @timnitGebru You really are too kind, Timnit! At any rate, it was a complete joy to collaborate with you and your whole team---and I know I couldn't have pulled that together on my own.

2021-02-17 01:06:02 @maria_antoniak Thanks, Maria, and lol re emacs. (I'm teaching 567 this quarter and also wowed some current CLMSers with old-school grep &

2021-02-16 22:44:04 @jessgrieser Rerecorded and made a point of actually naming racism (and misogyny and ableism and transphobia) and it's much stronger now. The written paper isn't shy about these things, but somehow in speaking aloud, I needed a nudge.

2021-02-16 22:43:41 @jessgrieser Thanks for this, Jessi! I've been working on the FAccT presentation for the Stochastic Parrots paper and something felt ... off. Went back and listened and it's because I was talking about "bias" and "prejudice" and "different groups of people".

2021-02-16 22:37:34 @mmitchell_ai @timnitGebru I am proud to count you as an alumna!!

2021-02-16 22:37:11 @timnitGebru Well, as a measure of my professional success, I'd say that being the sort of academic that you, Meg, and your team were happy to co-author with is a good sign :) (And yes, Meg was in one of the first CLMS---then CLMA---cohorts.)

2021-02-16 22:27:21 @mmitchell_ai @timnitGebru @sibinmohan @RealAbril I mean, if they were afraid that our paper was somehow going to be bad PR for Google, well....

2021-02-16 22:26:54 @mmitchell_ai @timnitGebru @sibinmohan @RealAbril I'm still just bewildered at how many times Google has made the wrong move and then doubled down on the wrong move. As you say: collateral damage, but to what actual end? https://t.co/s8UNYZLvzW

2021-02-16 22:25:53 @mmitchell_ai @timnitGebru @sibinmohan @RealAbril Thanks, Meg. I'm proud of our work and I think it stands for itself. Anyone close enough to me for their opinion to really matter will likely read it and decide on their own. But yeah, I could do without the harassers---as could you!!

2021-02-16 22:23:42 @sibinmohan @timnitGebru @mmitchell_ai @RealAbril We certainly have other things we'd rather be doing with our time! But having others also speak out helps, for sure.
2021-02-16 22:20:58 @sibinmohan @timnitGebru @mmitchell_ai @RealAbril I figured it might have been that but then didn't go check for another piece of the thread. Anyway, thanks. 2021-02-16 22:14:41 @asayeed @timnitGebru @sibinmohan That could be what is motivating him, but in the details of his messages, he is white knighting for Google. 2021-02-16 22:13:40 @timnitGebru @sibinmohan @mmitchell_ai @RealAbril .@sibinmohan didn't include the last bit of the email in the tweet above, but here it is. I particularly like how dude thinks he knows what I am seeking and that he has any idea whether I'm well-known or not in my field. https://t.co/KKh4p75eKj 2021-02-16 22:08:20 @timnitGebru @sibinmohan @mmitchell_ai @RealAbril As always, I am in awe of your energy & 2021-02-16 22:08:06 @timnitGebru @sibinmohan And more to the point: @timnitGebru and @mmitchell_ai (and @RealAbril and others) your shedding light on this and continuing to do so brings great value. How can we get to better corporate (& 2021-02-16 22:06:17 @timnitGebru He sent it to me this morning too. I just archived it without reading it, because I figured there would be nothing of value there. (I wasn't wrong.) What is it with this guy? As you say, @sibinmohan why does he think Google needs his defense? 2021-02-16 19:55:30 Or maybe it's the time saved by writing 'know' instead of 'knowing' and similar typos... 2021-02-16 19:49:49 Sometimes I think my most valuable time management skill is know when and how to look away from the impossible todo list so as not to get immobilized by it. 2021-02-16 16:59:35 @jacobmbuckman @cinjoncin @timnitGebru @mmitchell_ai That seems really implausible to me and I'll need to read some of your work to try to understand where you're coming from. Don't have time to do such additional reading in the next couple of weeks, alas. 
2021-02-16 16:51:24 @jacobmbuckman @cinjoncin @timnitGebru @mmitchell_ai And "unfathomably large" training sets are unmanageable not just because you can't hope to filter out the trash (even with methods better than a list of 403 "very bad words") but also because if you can't document the data, you can't study/understand/mitigate the biases. 2021-02-16 16:50:18 @jacobmbuckman @cinjoncin @timnitGebru @mmitchell_ai But there's space between "bias free" (doesn't exist) and "well, we'd better just grab everything, hate speech and all". > 2021-02-16 16:49:12 @jacobmbuckman @cinjoncin @timnitGebru @mmitchell_ai Thanks for the flag. So: I think we might be talking past each other here. There is no such thing as a fully debiased dataset, and I don't think any of the Stochastic Parrots authors would say there is. > 2021-02-16 15:47:36 @SashaMTL His sister's name is Euler. I can't claim credit though: they're really my son's cats, and he named them (when they were kittens and he was 11). 2021-02-16 15:09:24 RT @mer__edith: After firing @timnitGebru + @RealAbril + nurturing a racist and toxic culture, Google has the paternalistic AUDACITY to imp… 2021-02-16 14:34:02 Something about cats playing with and tangling strings? 2021-02-16 14:32:30 There's a joke to be made here about guarded strings but it's 6:30am here in Seattle so I can't quite put it together. 2021-02-16 14:31:21 Euclid put his paw on my keyboard just before the #SCiL2021 talk playback glitched but I swear that has nothing to do with it. 2021-02-16 14:10:28 RT @emnlpmeeting: The call for demo papers at EMNLP 2021 is out! https://t.co/o4gIhx4ett 2021-02-16 05:04:16 And instead of following that clearly articulated path, @Google is promising to "train" 100,000 Black women in tech. How about looking inward and doing the work to make their own environment one that supports Black women instead? 
https://t.co/fcujGkv8Uq 2021-02-16 05:01:32 Meanwhile, all the way back in early December, @GoogleWalkout laid out the path that @Google should have taken, to make things right. https://t.co/wDiZsPBa6L 2021-02-16 05:00:12 This is awful -- awful for Meg, awful for Timnit, awful for their team, and awful for all of us. Meg and Timnit and their team should be putting their time and energy into their research. We're all missing out. https://t.co/KTsB6mSh6H 2021-02-15 19:02:01 RT @FAccTConference: ACM #FAccT2021 will be held March 3-10! Make sure you register by February 18th to catch the "Early Bird" rate. http… 2021-02-15 16:46:02 @karinv @ricealumni @LosAlamosNatLab Thank you for sharing this. Also, WTH @nytimes for describing another scholar in this article as someone's wife first and a mathematician second? https://t.co/LcTPSjhoWO 2021-02-15 15:21:50 RT @Abebab: this is so key and should be repeated again and again esp within the fields of AI/ML which are based on the very premises that… 2021-02-15 15:21:31 @AlvinGrissomII Super energizing! 2021-02-15 15:00:37 @timnitGebru Happy to report that it was successful :) 2021-02-15 14:14:12 @timnitGebru The current case in point involves some straight-up appropriation of her (our) work. Trying to address that one directly 2021-02-15 14:12:47 @timnitGebru But also amazing in another way, since I'm getting a much more direct look than I usually do into the way the world (including people I believe to be well-meaning researchers) treats Black women. 2021-02-15 14:11:57 Co-authoring with @timnitGebru has been an amazing experience, first and foremost because she is such an excellent scholar and it has been an absolute joy to work together. 2021-02-15 14:10:50 RT @LingMuelller: Cool chapter on coordination by @AbeilleAnne and @ruipchaves now prepublished. 
It demonstrates all the power HPSG has for… 2021-02-15 04:08:23 RT @ACharityHudley: This looks like such an awesome postdoc...applicants in the areas of Sociolinguistics, Applied Linguistics, or a relate… 2021-02-14 23:29:40 @BethCarey12 @hannejakobsen Well, I don't think it's possible to do ASR without a language model! But it definitely is possible to intentionally curate a broader range of e.g. proper names to include in the training data, to test for systematic failures, and to consider failure modes in deployment. 2021-02-14 23:18:22 @timnitGebru https://t.co/HpCw9mjlcz 2021-02-14 23:16:57 @hannejakobsen This is yet another example of why we should always step back and ask: who benefits, who is harmed? Especially when pushing for 'scale'. /fin 2021-02-14 23:15:39 @hannejakobsen In this case, @hannejakobsen was easily able to recover the error (she had the recording & 2021-02-14 23:13:56 I don't have the context (i.e. the full sentence I was saying), but @hannejakobsen says I'd said Dr. Timnit Gebru, which makes it even stranger that the LM went down the path that it did. > 2021-02-14 23:12:40 @hannejakobsen That the system should stumble on @timnitGebru 's name is a clear effect of the training data not having good representation of East African names. Why it should have come up with "African MC" is another mystery all together. > 2021-02-14 23:11:24 True story! The ASR system was the one used by @hannejakobsen when she was interviewing me for this piece: https://t.co/0F7kTXVebx https://t.co/2TmFj5LhiZ 2021-02-14 23:08:16 RT @EpiEllie: Roses are red Violets are blue Your mask protects me And mine protects you https://t.co/6HHuqGRxRq 2021-02-14 22:34:32 RT @emilymbender: Hey #linguists let's make an #AcademicValentines thread. 
Here's a start (mine from 2018): Roses are red Violets are blue… 2021-02-14 22:27:38 RT @jackclarkSF: So when Google inevitably has to disclose more details about JFT and tries to make a big song and dance about its work on… 2021-02-14 18:38:47 @Miles_Brundage @BlackHC The camera-ready pre-print is up now! https://t.co/AjqGbduMQm 2021-02-14 18:33:54 RT @AllysonEttinger: SCiL 2021 starts tomorrow! Schedule and registration here: https://t.co/WUJd6Yow92. We kick off 9am EST with opening r… 2021-02-14 18:32:41 @jessgrieser I mean, to get all the stuff done that you do, you must be at least that organized! (As well as actually being a collective.) 2021-02-14 18:32:09 @jessgrieser It works better in other years! I guess you're just wishing us all a happy early Valentine's Day for 2022 :) 2021-02-14 18:30:15 @jessgrieser Nah, it's Sunday! 2021-02-14 18:00:44 @abitidiomatic And ... it's officially epic: https://t.co/k6tQWwuHVJ 2021-02-14 14:38:25 RT @Joe_Pater: SCiL 2021 meeting begins tomorrow and proceeds through the week. The full schedule is here: https://t.co/75oS5nASdo. Regist… 2021-02-14 05:36:17 @osoleve Way less than a year :) And thank you for your kind words! 2021-02-14 03:27:40 @abitidiomatic Indeed. We're required to post photos as soon as there's any accumulation, but this is quite a bit! 2021-02-13 23:14:56 @johnroblawson We get plenty of days with temps below freezing and (famously) plenty of days with precipitation ... just usually not both at the same time. 2021-02-13 23:14:25 @johnroblawson I'd say one winter in three we get snow that accumulates, but usually only once and the general approach is: "wait for it to melt" which rarely takes more than a day. But sometimes it melts & 2021-02-13 23:11:31 @johnroblawson That depends a bit on what you mean by "the coast". 
The closest bit of salt water is about 4 miles away, but: https://t.co/HVgQ2MFpqP 2021-02-13 22:27:44 Greetings from Seattle where we are required to post photos of snow when it happens https://t.co/DFHsUuBZ1A 2021-02-13 22:25:11 @alexhanna I'm so sorry Alex. I hope you will have opportunities to share memories of him with other loved ones. 2021-02-13 19:22:24 @ducha_aiki @_krishna_murthy @annargrs @EvpokPadding @Awfidius @srchvrs I have no skin in the game when it comes to your field. My only point here is that "ban arXiv" sounds like much more than just "require submissions not to have been posted to arXiv until after the decision date". 2021-02-13 19:18:07 @amy_tabb @ducha_aiki @_krishna_murthy @annargrs @EvpokPadding @Awfidius Just one reply. (I'd untag you, but I can't, in replying directly to your tweet.) I got "fanatics" from "holy war", which seemed to me to be an unfair characterization at least of the discussion as it takes place within #NLProc (incl. this thread). 2021-02-13 19:16:36 @ducha_aiki @_krishna_murthy @annargrs @EvpokPadding @Awfidius (Untagging Amy at her request). Well, you can ask @EvpokPadding if they meant "get rid of arXiv all together" or (as seems more relevant, to the rest of the tweet) "ban preprints of papers submitted to specific conferences". 2021-02-13 19:12:09 @_krishna_murthy @annargrs @EvpokPadding @ducha_aiki @amy_tabb @Awfidius I definitely appreciate @Awfidius 's take on how individuals with privilege can do something directly. I think there is also need for policies at the level of orgs like ACL. 2021-02-13 19:05:31 @_krishna_murthy @annargrs @EvpokPadding @ducha_aiki @amy_tabb @Awfidius 2 makes much more sense to me, too. As @EvpokPadding is saying elsewhere: for those who resonate with 1, I wonder if there's common ground to be found with fixing the underlying problems (rather than tearing down anonymous peer review). 
2021-02-13 19:02:19 @ducha_aiki @_krishna_murthy @annargrs @EvpokPadding @amy_tabb @Awfidius For one thing, I don't think anyone in the #NLProc community at any rate suggests that arXiv should be banned 2021-02-13 19:00:59 @ducha_aiki @_krishna_murthy @annargrs @EvpokPadding @amy_tabb @Awfidius I read part 2. I have to say that your blog post starting with characterizing the discussion with phrases like 'holy war' and 'ban arXiv' doesn't leave me optimistic about this discussion being productive. > 2021-02-13 18:58:47 @evanmiltenburg @EvpokPadding @annargrs I think it is definitely possible to send pointers to your own work without being presumptuous. It's also totally possible to do it in a way that is presumptuous... I'm guessing you managed the former :) 2021-02-13 18:55:41 @annargrs @_krishna_murthy @EvpokPadding @ducha_aiki @amy_tabb @Awfidius A quick read of the blog post suggests that that's something that *might* happen. I'd like to know, though, if it *does*. In other words, the harms of non-anonymous review are known 2021-02-13 18:52:18 @evanmiltenburg @EvpokPadding @annargrs The only thing I can see on OpenReview right now is the ability to post a public comment, which I can understand might not be that appealing. But it seems like this could easily be solved, and that it would be very useful. 2021-02-13 18:43:57 @evanmiltenburg @EvpokPadding @annargrs I don't think we received any comments via OpenReview in either case---and it's possible that it could be done better, but it hardly seems like an impossible task! 2021-02-13 18:42:04 @evanmiltenburg @EvpokPadding @annargrs Ah, gotcha. But for getting feedback, one could still individually share the paper with others (non-anonymously). That doesn't change. 2021-02-13 18:38:01 @evanmiltenburg @EvpokPadding @annargrs And claiming an anonymous preprint as one's own while the paper is under review would violate the anonymity policy. 
2021-02-13 18:37:38 @evanmiltenburg @EvpokPadding @annargrs 1) The preprint server should have a contact form that sends email to the authors. 2) Presumably the same way anyone discovers papers by non-famous labs/authors (the RSS feeds etc). 2021-02-13 18:33:05 @EvpokPadding @annargrs Indeed. See also: https://t.co/J9vLA5RzpW 2021-02-13 18:06:59 @hannejakobsen @Strumke @spillteori @Morgenbladet @timnitGebru Thanks so much! I'm curious, too, though, if there's a way to subscribe from abroad. It seems like the other fields on the Abonnement form are locked until a valid phone is put in (and when I make up something with the right number of digits it shows me someone's name & 2021-02-13 17:35:42 @angryseattle @rcalo Will you accept my childhood memory of the no. 5 bus being stuck on Greenwood and the bus driver asking us all to go stand in the middle to improve traction to the relevant wheels? (Late 1980s and it worked!) 2021-02-13 16:12:14 @Strumke @spillteori @Morgenbladet @timnitGebru Between this article and the one from Hanne Østil Jakobsen, I'd love to subscribe (at least for a little while), but this seems impossible unless you have a Norwegian phone number (which I don't). 2021-02-13 15:32:31 RT @mark_riedl: 1. Language models don’t know “why” they say the things they say. 2. Language models don’t know why they shouldn’t say the… 2021-02-13 13:36:23 RT @hondanhon: I see we're talking about volunteer vaccination websites again, thanks to this (imho) irresponsible NYT article about "build… 2021-02-12 20:23:38 @HadasKotek https://t.co/WIR6niEvAV 2021-02-12 20:18:10 @OmaymaS_ Totally agreed! https://t.co/IbhdI486CU 2021-02-12 17:13:33 @ani_nenkova Stanford Linguistics would organize a "wombat" (FedEx pouch) to send all the LSA abstracts... 2021-02-12 15:01:41 @zehavoc @mjpost I didn't notice the accent... maybe that's because it was in the southwest that I learned my French. 2021-02-12 14:38:38 RT @DingemanseMark: Job alert! 
We're looking for a postdoc to join our @NWO_SSH project Elementary Particles of Conversation — keywords: s… 2021-02-12 03:27:33 @rcalo It seems like the underlying problems are a) scarcity and b) the fact that this is set up so that individuals have to find their way to vaccination appointments. 2021-02-12 03:26:47 @rcalo I've been hearing a lot of stories along the lines of "The websites were so hard to use, I made one that's easier", and it sounds like it's all well-intentioned, but I wonder if consolidation sites are going to worsen the extent to which well-off people are getting appts first.> 2021-02-11 23:11:31 @ReubenBrasher I'm not looking for your praise, just pointing out that telling Timnit that the nasty message she received might have been automatically generated doesn't help anyone. 2021-02-11 22:56:25 @ReubenBrasher So you'd rather imagine a human creating a bot to send harassing emails then? You might be able to remain naive, but it doesn't help anything to make that choice. 2021-02-11 22:35:45 @timnitGebru Ugh, I'm so sorry. The worst thing here (I would guess) is that there are so many possibilities. You are amazing and should have the space to pursue your research and other leadership without having to deal with this at all, let alone from so many directions. 2021-02-11 22:15:49 Seattle is named for Chief Si'ahl, about whom you can read more here: https://t.co/AF7QeFWK0Z The snow I'm currently watching is falling on the unceded lands of the dxʷdəwʔabš (Duwamish) nation. https://t.co/RJz6lWcQdh 2021-02-11 22:00:12 @kirbyconrod Sorry to hear it :( 2021-02-11 21:58:45 Seattleites are all like: 2021-02-11 18:09:02 Also, I think there's an important conversation about what counts as progress in AI. @timnitGebru is right under all definitions, but esp so w/progress measured as a) tech that benefits everyone, and esp those most marginalized and b) our understanding of tech in the world. 
https://t.co/AAo7TiBRFO 2021-02-11 16:14:01 RT @timnitGebru: Are we here again? Andrew Ng just did this a few years ago and ppl like @math_rachel detailed how exclusionary this is. An… 2021-02-11 04:40:36 @databoydg So much more wisdom here, but I'll just end with a teaser ... @databoydg has some excellent things to say about why "diverse teams perform better" isn't the helpful slogan you might think. Watch the talk to find out! That link again: https://t.co/3z64mhyAX2 2021-02-11 04:39:27 @databoydg Interview (and admissions) processes should focus on identifying strengths rather than looking for holes. (We've been doing this for a few years in the CLMS admissions process and I can attest that it helps!) 2021-02-11 04:38:21 @databoydg Impact > 2021-02-11 04:37:37 @databoydg 3. Knowing that Black team members' successes are likely to get devalued, document them as they happen to protect against this. 2021-02-11 04:37:03 @databoydg 2. Are DEI efforts evaluated the same way that other aspects of individual/team performance are evaluated? How does the org evaluate failure (& 2021-02-11 04:36:18 @databoydg A lot of this is about being proactive and examining processes: 1. In any org, things will go wrong. Are you set up to handle that in a way that supports the most vulnerable? Or are you likely to default to protecting those in power? 2021-02-11 04:34:43 Thank you @databoydg for this amazingly rich talk! Absolute must-see for anyone who wants to be doing something about anti-Blackness. Some of my favorite points: https://t.co/3z64mhyAX2 2021-02-11 02:15:28 So I'm sure there are other ways I could have asked this question such that I would have found the results less validating, but nonetheless, thanks everyone :) https://t.co/HpcL6T4oaa 2021-02-10 21:54:26 Today I gave my first "no" to a university service request that falls during my upcoming (Spring quarter) sabbatical. 
Feels great :) 2021-02-10 17:17:15 RT @databoydg: So I gave a talk yesterday on Anti-Blackness in AI. I added a section on Industry, check it out! https://t.co/TTGdIsLzQG 2021-02-10 17:05:48 @_alialkhatib @TwitterSafety I should add: It hadn't even occurred to me before seeing that tweet that he might be planning to attend the conference. He is not, to my knowledge, a researcher in this area. 2021-02-10 17:05:02 @_alialkhatib @TwitterSafety His tweet was just a screen cap of his own DM to them (weird flex in and of itself), without any reply. I have no info on what happened beyond that. 2021-02-10 17:01:36 @_alialkhatib @TwitterSafety But both of those things would require my having said something, and I didn't, so??? 2021-02-10 17:00:50 @_alialkhatib @TwitterSafety He also had a weird tweet which was a screen cap of a DM he sent to the FAccT conference Twitter account claiming that I (and others) were trying to get him banned from the conference, which we weren't? And something about me harassing him and then denying that harassment. > 2021-02-10 16:58:32 @_alialkhatib @TwitterSafety Well, I did totally subtweet his "paper", but that was really early in the sequence of events and he didn't block me until he came back. https://t.co/2rESMx9c1b 2021-02-10 16:55:05 @_alialkhatib @TwitterSafety I'm not sure I actually talked back though? I can't do the search now (since I'm blocked), but I don't think I said much at all to him directly. 2021-02-10 16:51:16 @nsaphra The way we do health insurance in general here is a mess, but in my experience, dental insurance has always been a good deal (and regular cleanings/check ups a good idea -- seems we might differ on that point). 2021-02-10 16:50:33 @nsaphra Are you expecting 100 paychecks / year? If you're paid 2x/month, that's more likely to be $192, isn't it? ... which should cover all preventative dentistry + most emergency costs (though check that). 
2021-02-10 16:32:24 @_alialkhatib @TwitterSafety Also, I see that he's blocked me now (different to before the suspension), which is ... interesting given that he apparently created his account for the sole purpose of promoting his "critique" of the paper I co-authored and how blatant he's been about harassing BW in particular. 2021-02-10 16:30:53 @_alialkhatib @TwitterSafety I'm wondering this morning: Do we know for sure that Twitter suspended his account? Or was it a self-suspension, so he can claim to have been "cancelled" and then "uncancelled"? 2021-02-10 14:44:47 RT @FAccTConference: Registration is now **OPEN** for ACM FAccT 2021! The conference will be held March 3-10. Early bird pricing is availa… 2021-02-10 14:41:59 @y_m_asano @hannahrosekirk @OxfordAI Nice :) 2021-02-10 14:40:35 @hannahrosekirk @OxfordAI @timnitGebru @jovialjoy I'm sorry you're having to deal with this. As others have said: report & 2021-02-10 14:11:07 RT @mixedlinguist: I loved talking with @titonka about the meaning and use of “cancel culture” for this NPR piece! 2021-02-10 01:37:08 RT @rcalo: Je ne regrette rien. https://t.co/3p1OvyQdry 2021-02-10 01:26:43 @SashaMTL So the chat transcript only starts when the recording did, but still ... evidence? https://t.co/1EDOZ5QJq6 2021-02-10 01:14:50 @SashaMTL Dang ... I didn't record that part of it. I'll have to post an anonymized bit of the Zoom chat for my receipts. 2021-02-10 01:12:17 @EmmaSManning Different, and un- actually has two meanings (undo/reverse vs. not, as illustrated in the two meanings of untieable). non- is like the second meaning of un-, I think. 2021-02-10 01:09:07 Btw, if you go to the news tab on one of the searches where it is displayed, you can do the search you want to do. So their move is cosmetic, but no less disturbing. News? No, nothing to see here... 2021-02-10 01:07:10 Wow -- well spotted and yikes. (Just tested and confirmed.) 
That really says something about how fragile they're being about this whole situation, which itself is just an enormous self-own. https://t.co/TN98qoJEYZ 2021-02-10 00:54:54 @SashaMTL So ... it turns out that for UW at least, it was a question of going into the settings on my Zoom account (as opposed to in the Zoom app) and enabling them. My students were quite amused. 2021-02-09 21:56:00 @lbiester23 @XandaSchofield Ah, that would make some sense. If I weren't so busy, I might see about digging up an old stuffed animal to use as a fake filter for the first few minutes.... 2021-02-09 21:47:47 @XandaSchofield Hmm -- still only "studio effects", when I know other people have had filters for many versions.... 2021-02-09 21:38:46 Super disappointed to discover that I somehow still don't have the filters in Zoom. Because I totally would have joined my lecture this afternoon as a cat... 2021-02-09 19:29:10 @ejfranci2 Congrats!! 2021-02-09 19:03:34 @jessgrieser It is, in fact, the only explanation that accounts for all of the data! 2021-02-09 18:31:19 @complingy @_dmh Much appreciated. 2021-02-09 18:03:39 In a paper with a page limit for main content (but unlimited space for references & 2021-02-09 17:59:24 @JKleenankandy Mmm dosa and idly :) But seriously, is there a BERT for Malayalam, Tamil or their neighbor languages? 2021-02-09 04:43:08 @KazukoYasa @JoFrhwld Kelp = that smelly stuff that washes up on the beaches not far from here (Seattle) Konbu = yummy, sweet, chewy seaweed in certain soups in Japan Me at the Asian import grocery store in Seattle finding the package that said KONBU (kelp) = 2021-02-08 21:37:40 @eclairemoon Those are great examples! There also seems to be something going on with the phonology/spelling matching between the two terms. 2021-02-08 21:31:55 RT @ACharityHudley: I will answer questions live tomorrow after my 1pm EST #AAASmtg topical lecture! I hope to see you there! 
https://t.co/… 2021-02-08 21:24:47 @JoFrhwld Kelp and konbu 2021-02-08 21:24:15 RT @easears: From the wrongful arrest of a Black man based on a faulty facial recognition tool to the firing of @timnitGebru from Google, 2… 2021-02-08 05:16:00 RT @emilymbender: Without looking up the answer, how many distinct individuals do you think have published papers related to AI (ML, DL, RL… 2021-02-08 01:01:07 @dr_nickiw Believe it or not I'm walking on air.... 2021-02-07 22:11:59 @wellformedness I tried to talk the UD folks down from this, too, to no avail... 2021-02-07 17:17:59 (I'll share my source for this info---a paper I'm reading---when the poll closes.) 2021-02-07 17:17:40 Without looking up the answer, how many distinct individuals do you think have published papers related to AI (ML, DL, RL) between 2000 and 2020, including on preprint servers like arXiv? 2021-02-07 15:46:52 RT @MaryEllenFoster: @emilymbender @mmitchell_ai Regarding emoji accessibility, this has been studied by @Rachel_Menzies and colleagues in… 2021-02-07 15:46:49 @MaryEllenFoster @mmitchell_ai @Rachel_Menzies Thank you for this! 2021-02-07 02:49:14 @Tennis_Gazelle @timnitGebru Yes! https://t.co/AjqGbduMQm 2021-02-07 00:59:45 @complingy @mmitchell_ai @LeonDerczynski Ah, could well be. But the bib entries for bibtex still work in that context... 2021-02-07 00:39:40 @mmitchell_ai Also, it was a journey for sure!! https://t.co/2XcVSXw5hO 2021-02-07 00:35:00 @complingy @mmitchell_ai Is the problem really with bibtex, though? @LeonDerczynski got it working with bibtex+lualatex and the ACL style files. It just took one more layer of { }. Details here: https://t.co/73xFuhyZos 2021-02-07 00:20:43 @complingy @mmitchell_ai Yeah, the solutions I found involved LuaLaTeX, not pdflatex. The outcome I'd like to see is pressure for better Unicode support everywhere! 2021-02-07 00:11:14 @mmitchell_ai Definitely enjoying using the 🦜 as a test for Unicode compliance in the various systems we end up interfacing with! 
And, of course, pleased to see the portents of uptake of emoji in papers. I hope that people do keep screen readers in mind, though... 2021-02-06 17:10:21 RT @NAACLHLT: The NLP Summer School hosted by the Mexican Association of Natural Language Processing (AMPLN) will be co-located with #NAACL… 2021-02-06 02:36:04 @BlancheMinerva @LeonDerczynski Yes! Those templates use latex + bibtex in Overleaf. 2021-02-06 02:07:36 For anyone interested in how to get emoji to render in Latex (including in bib entries), I've now worked up examples for both ACM & https://t.co/GCIxuGpRbp W/thx to @LeonDerczynski for figuring out how to make the ACL one work. https://t.co/AjqGbduMQm 2021-02-06 00:13:25 Anyone up on the literature on ethics of citation practice who can help Kirby out with this query? https://t.co/xr92zQUsX8 2021-02-05 23:09:16 RT @timnitGebru: VPs at Google research reacting to paper--> 2021-02-05 22:55:04 RT @mmitchell_ai: I am concerned about @timnitGebru 's firing from Google and its relationship to sexism and discrimination. I wanted to sh… 2021-02-05 22:54:49 Thank you, @mmitchell_ai for this insight into the value of @timnitGebru 's work (and yours). Required reading for anyone who is working in AI. https://t.co/tW4un8jVFO 2021-02-05 21:04:56 RT @GretchenAMcC: I fail to see the problem here. https://t.co/yFI6PJE6vf 2021-02-05 20:58:53 @ZeerakW @innerdoc_nlp Well, science only started in 2017 or so right? 2021-02-05 20:54:24 @innerdoc_nlp That comes from Firth 1957, and it was a famous #linguistics quote before it was a famous #NLProc quote. 2021-02-05 19:11:33 RT @emilymbender: ACL is calling for more volunteers to join the Professional Conduct Committee. Details here: https://t.co/X6paAyv1iR #N… 2021-02-05 14:10:39 RT @rajiinio: In this article, @_KarenHao summarizes our paper so concisely. Facial recognition is just the latest (& 2021-02-05 00:53:42 ACL is calling for more volunteers to join the Professional Conduct Committee. 
Details here: https://t.co/X6paAyv1iR #NLProc cc: @aclmeeting 2021-02-04 22:01:36 @lauriedermer Dragonfly 2021-02-04 20:51:27 RT @math_rachel: There is also a common bias at work here: the false belief that you can’t be doing something to increase diversity/include… 2021-02-04 20:37:34 @SeeTedTalk @XandaSchofield Yeah, there's a difference between "political beliefs" and "actively working against democracy" (for example). A process for determining what kinds of actions merit revoking awards & 2021-02-04 20:36:37 @XandaSchofield @SeeTedTalk Looks like yes? https://t.co/bpmdu3wtVP 2021-02-04 20:27:54 @XandaSchofield @SeeTedTalk I'm not sure I'd say that's exactly the block, but it's not too far from it. There was certainly discussion, but the block was roughly "the ACL can't get involved in (national) politics ... else where would we draw the line?" And I think this could be addressed w/ a process doc. 2021-02-04 20:20:22 @XandaSchofield @SeeTedTalk I think it would make sense to ask whether the Exec would be willing to designate a committee to develop a proposed process & 2021-02-04 20:18:53 @XandaSchofield @SeeTedTalk Could be business meeting, but it might be valuable to start first with a query to the exec. The current membership is listed here: https://t.co/5RDj8C5TbR 2021-02-04 20:17:23 @SeeTedTalk @XandaSchofield That requires significant amounts of both volunteer work and political will, more than were available, apparently. 2021-02-04 20:04:00 @SeeTedTalk @XandaSchofield Yeah. I think in order for a society like the ACL to rescind an award, there has to be a process in place and then the process has to be followed. Which means someone has to a) create the process and b) convince the Exec to enact it and c) then carry it out. 2021-02-04 19:43:35 RT @timnitGebru: The men’s apparent rage about Gebru’s conflict with Google and the content of her paper is bizarre. They seem to see thems… 2021-02-04 17:58:05 I appreciate this reporting. 
Online harassment is real and hard to combat. I especially appreciate how this piece calls out the behavior of the self-appointed commentators, who made a platform out of this story that has nothing to do with them. https://t.co/xZ9BqCHfdz 2021-02-04 17:16:28 RT @kareem_carr: data science https://t.co/74iMR80IBe 2021-02-04 14:50:16 1. From @haydenfield on @MorningBrew https://t.co/NVY71nepI6 2021-02-04 14:49:55 Wall of fame! I'll start collecting links here to articles that include the emoji in the title. 2021-02-04 14:20:02 @LeonDerczynski @XandaSchofield @TaliaRinger @KarnFort1 The other is to set some limits on particular kinds of activities (e.g. reviewing for N conferences per year, giving N online talks per month) and then when requests come in think: is it worth using up one of my N of this type on this one? 2021-02-04 14:18:54 @LeonDerczynski @XandaSchofield @TaliaRinger +1000 on being careful about saying 'yes'. Two further strategies there. One is to have a "no buddy" ( @KarnFort1 ) who you tell each time you say 'no' to something so they can congratulate you on that :) 2021-02-04 14:16:58 @TaliaRinger Not specific to CS, but I've gathered some ideas here about balancing teaching & https://t.co/K0IT5khmn2 2021-02-04 14:14:37 @AureliaAugusta Those will both be on my publications page, eventually! 2021-02-04 14:14:12 @AureliaAugusta Once the #FAccT2021 proceedings are up, I plan to take their official BibTeX and provide a couple of versions to help folks get the 🦜 to appear in LaTeX. If you can use LuaLaTeX, then it can be incorporated directly as a true emoji. Otherwise, there's \includegraphics. 2021-02-04 05:43:47 @haydenfield I'll take it :) 2021-02-03 23:49:44 @csdoctorsister @_alialkhatib Me too. I thought, "Read the room?!" 
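The emoji-in-LaTeX workflow discussed in the tweets above (bibtex + LuaLaTeX, an extra layer of braces in the bib title, \includegraphics as a fallback) can be sketched roughly as follows. This is a minimal illustration, not the actual example files linked in the tweets; the emoji font name and the Renderer option are assumptions that depend on the local TeX installation:

```latex
% Minimal sketch: rendering an emoji under LuaLaTeX.
% Compile with: lualatex (a color-emoji font such as "Noto Color Emoji"
% must be installed; the font name here is an assumption).
\documentclass{article}
\usepackage{fontspec}
\newfontface\emojifont{Noto Color Emoji}[Renderer=Harfbuzz]

% In a .bib entry, wrapping the title in one more layer of braces
% (as described in the tweets) protects the emoji from being mangled
% by the style files, along the lines of:
%   title = {{On the Dangers of Stochastic Parrots ... {\emojifont 🦜}}},
% Without LuaLaTeX, an \includegraphics of an emoji image is the fallback.

\begin{document}
Stochastic Parrots {\emojifont 🦜}
\end{document}
```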
2021-02-03 19:18:20 @caseykennington @gneubig And also: https://t.co/gGJuzsHO4E

2021-02-03 19:18:10 @caseykennington @gneubig I'd be worried about that: https://t.co/wcrr3NImuF

2021-02-03 18:45:55 @gneubig But giving it that click-baity title (and the prose that tries to connect to the title) detracts from that work and feeds AI hype.

2021-02-03 18:45:19 @gneubig I'm glad you're open to suggestions, but my critique is not with your metrics, but rather with your overall framing of the task. I think you have done some useful work towards answering "In what ways can text summarization and citation graph exploration assist in peer review" >

2021-02-03 18:43:50 @yoavgo @gneubig That's a stinging indictment of the ML literature, if true.

2021-02-03 18:36:31 @gneubig It's not on me to come up with metrics that show that your casting of the problem is fundamentally misleading. But even sticking with your findings, what leads you to think there is any reason that further development along these lines will solve this problem: https://t.co/6pt3mdzL6y

2021-02-03 18:34:32 @gneubig My argument is: 1) It can't 2) Form manipulation metrics are the wrong place to look 3) Suggesting that it can is AI hype >

2021-02-03 18:33:30 @gneubig Your question is unfairly shifting the burden of proof: You are taking a task which requires understanding, reasoning, creativity and care, and assuming that success at the task can be measured in terms of metrics that look at form manipulation. >

2021-02-03 18:05:30 @gneubig MT is a form manipulation problem. (And when it's presented as anything other than that, risks ensue...) What is your argument that scientific peer review is properly (safely, etc) understandable as a form manipulation problem?

2021-02-03 17:38:40 @gneubig Those sound like very useful tools to *assist* in peer review (esp the one about flagging mis-citations). But I don't see that as a step towards automated peer review, because it doesn't address the key reason why peer review can't be automated (machines don't understand text).

2021-02-03 15:51:22 Now waiting for the first journalist to actually include the . It's part of the title, you cowards!

2021-02-03 14:42:22 @gneubig Given that peer review requires both understanding the article being read (the small fraction of papers rejected for extreme lack of clarity aside) and reasoning about the content in light of other scientific work, what evidence do you see that current tech is on that path?

2021-02-03 14:39:50 @gneubig So: By framing the paper as answering the "Can we automate?" question and then answering it as "not yet", you are suggesting that we are on a path to being able to, even if the path is long. >

2021-02-03 14:38:04 @gneubig [Before replying, I'm removing @\EmilyBender from this thread. I'm sure she's thoroughly sick of getting tagged in convos meant to be with me.]

2021-02-03 14:37:19 Starting to feel really bad for @\EmilyBender who is *not* me, but who people keep tagging when talking about me on Twitter. My 'M' is part of my name folks, and now I'm not the only one who gets annoyed when people don't include it....

2021-02-03 14:31:02 @agostina_cal @PhDVoice Check out work by @ZeerakW and also @MaartenSap

2021-02-03 04:08:15 Yes, peer review takes time, effort and care. And it is valuable in exactly that measure. I don't think that being time-consuming + important is motivation for automating a task. /fin

2021-02-03 04:08:04 Finally, I'd like to take issue with this framing, from the introduction: https://t.co/i5xyC6mS1x

2021-02-03 04:07:33 Without careful study of how people would make use of an automatic summarization system in the course of reviewing, I'm not prepared to accept the assertion that it would be a beneficial addition to peer review. /16

2021-02-03 04:07:27 If the purpose of this study is to actually build software to solve some problem in the world, then the software needs to be situated in its use case and failure modes explored. /15

2021-02-03 04:07:15 Furthermore, it seems to me there are large risks in pre-populating reviews, in terms of how the system will nudge reviewers to value certain things---especially reviewers who are unpracticed or unsure. /14

2021-02-03 04:06:54 But casting a system like this as "domain expert" is vastly overselling what it can do. /13

2021-02-03 04:06:40 They also (I think tongue-in-cheek?) suggest that their system is a "domain expert" that can help a reader grasp the main idea of the paper. /12 https://t.co/HL4Gtedy5F

2021-02-03 04:06:00 Yuan et al do not claim that their system is understanding anything, just that it can possibly provide first draft reviews that help (especially junior) reviewers by showing them what is expected. /11 https://t.co/dJdoJQh4Bq

2021-02-03 04:05:12 Yuan et al do call out to the importance of "external knowledge" in future work. However, while summarization can be cast as a text transformation task, scientific peer review cannot: it requires understanding and expertise, which citation/knowledge graphs aren't. /10 https://t.co/uwPIYmRUee

2021-02-03 04:04:38 Re task/tech match: The authors cast reviewing as a variation on the task of summarization, but peer review isn't just about reacting to what's in the paper: human peer reviewers evaluate the paper under review with respect to their knowledge of the field. /9

2021-02-03 04:03:39 Also re hype: Answering the title question with "not yet" rather than "NO" subtly frames the results of this paper as a step in that direction, when it isn't. /8

2021-02-03 04:03:31 This is not at all what the title of the paper suggests. Where journalists aren't usually in charge of the headlines their articles appear under, academics get to write our own titles, and I think we need to be careful. (Especially given how the press picks up our work.) /7

2021-02-03 04:03:18 Re hype: The text actually positions the system as one that assists reviewers in drafting their review, especially the part that summarizes the paper: /6 https://t.co/pMoJ2LFN1q

2021-02-03 04:02:29 However, I think this paper also provides a case study in mismatch between technology and use case as well as a case study in "AI hype". /5

2021-02-03 04:02:20 Also, the paper is thoughtful about what the components of a good (constructive) review are and designs an evaluation that looks at most of those components in turn. /4

2021-02-03 04:02:07 Furthermore, the paper includes significant investigation into the ways in which the system produces biased results, which would need to be accounted for in any deployment context. /3

2021-02-03 04:01:54 I'd like to start with the positive. I appreciate that this paper is direct about what doesn't work well and furthermore clearly concludes that the answer to the question posed by the title is negative: /2 https://t.co/IXvknhrO4i

2021-02-03 04:01:07 I've now read Yuan, Liu and @gneubig 's paper "Can We Automate Scientific Reviewing?" https://t.co/rboYgHgcr3 ... and of course I have a few things to say: /1

2021-02-02 23:20:02 RT @aclmeeting: Important announcement from the #ACL2021NLP Program Chairs! Please, read it carefully and spread the news #NLProc https://t…

2021-02-02 22:10:50 @lauriedermer I found myself telling my kids the other day that in my childhood, the equivalent of memes was silk-screen-on-demand t-shirts with snarky things written on them...

2021-02-02 19:38:07 @cigitalgem Thanks! Here's the same point in a bit more detail, from back in December: https://t.co/qQ8NuXRluK

2021-02-02 19:20:00 RT @emilymbender: Poll 1 of 2: What should be the primary motivation for investigating automating a task (for a research paper)?

2021-02-02 19:04:26 RT @MiaD: This. “One of the reactions to the paper has been, “This is very one-sided. You can't talk about the risks without talking about…

2021-02-02 18:20:07 @myrthereuver @RadicalAIPod Oh, I'm glad you liked it! @RadicalAIPod is the best :)

2021-02-02 15:25:56 @evanmiltenburg I'm mostly subtweeting the ML literature here, but to answer your question seriously, by scientific interest I meant "Can this be automated?" or "What do we learn about the task by automating it?" I agree that impact on people can be studied scientifically (often w/o automating).

2021-02-02 15:20:45 Poll 2 of 2: What currently is the primary/most common motivation for investigating automating tasks (in the research literature)?

2021-02-02 15:19:56 Poll 1 of 2: What should be the primary motivation for investigating automating a task (for a research paper)?

2021-02-02 14:42:21 RT @aclmeeting: We are very sorry about the problems with softconf and the stress it is causing everyone. The deadline was extended by 1 fu…

2021-02-02 04:49:17 @LChoshen See also: https://t.co/Cndx9YyBiT

2021-02-02 04:47:29 @LChoshen The questions should be designed for each workshop's theme. Conversely: Do you think there is one set of review questions that work for all conferences and workshops?

2021-02-02 03:03:48 RT @OlgaZamaraeva: Please retweet: Are there introductory projects like the Berkely AI "pacman" project, for some text-based data science p…

2021-02-02 02:35:40 @sreecharan93 @rctatman @Rasa_HQ I'm pretty sure it's this one :) https://t.co/VZU6Ne3xEV

2021-02-02 02:24:17 @sreecharan93 @rctatman @Rasa_HQ There will be! We just did the recording today ... getting from recording to link takes some work :)

2021-02-02 01:57:44 There really ought to be two other volumes in this series: one on phonetics & I'm not the right person to write either of those, but if there are #linguists out there who are interested, I'd be delighted to chat with you!
2021-02-02 01:56:45 @rctatman @Rasa_HQ For the second one, I did some preview tweet threads, which are all collected here: https://t.co/sHfYADHKq0

2021-02-02 01:55:53 @rctatman @Rasa_HQ Bender, Emily M. and Alex Lascarides. 2019. Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics. Morgan & https://t.co/7fSWxKPNd6

2021-02-02 01:55:19 @rctatman @Rasa_HQ Bender, Emily M. 2013. Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax. Morgan & https://t.co/7266PLcuE3

2021-02-02 01:54:38 Had fun talking with @rctatman for the @Rasa_HQ podcast. Possibly favorite moment: @rctatman asks if I have anything to recommend to #NLProc developers as useful resources, sees my blank look, and says in a stage whisper: **your books** The books can be found here:

2021-02-02 01:41:07 RT @databoydg: So uhh... I got some things I wanna talk about!!! If you're interested in discussing how "WE" can better address Anti-Black…

2021-02-02 01:35:07 @glottophile @jahochcam But also, not all ling majors take sociolinguistics, either. (Though one would *hope* that the basic lessons of variation being the natural state of language and prestige being simply a question of power are included in the intro classes...)

2021-02-02 01:34:25 RT @XandaSchofield: New fellowship program from NSF for folks who finished a CS (or adjacent) undergraduate degree in 2016-2019 and are now…

2021-02-02 01:26:51 RT @databoydg: Combatting Anti-Blackness in the AI Community: In this work, we aim to elucidate the scale and scope of anti-Black bias in…

2021-02-02 01:18:31 @Dan__McCarthy Thanks :)

2021-02-02 01:18:26 RT @Dan__McCarthy: this interview with @emilymbender is brilliant, important, and incredibly easy to follow — highly recommend for anyone i…

2021-02-02 01:13:49 @glottophile @jahochcam In the CLMS program, we achieve a roughly even split (+ some folks from other fields). My guess is that the majority of NLP grad students in CS programs don't have ling undergrad degrees, though.

2021-02-01 22:54:04 RT @amyjko: Teaching CS online? Don’t exclude students with disabilities. Find out how to design an accessible course at our @AccessCompUW…

2021-02-01 22:46:50 RT @ruha9: Do you know of anyone doing research on discriminatory outcomes of automated traffic enforcement or any other aspect of automati…

2021-02-01 20:57:39 @timnitGebru Oh that's vile. Looks like an account that I had blocked

2021-02-01 20:49:14 RT @mimismash: Ready to do some work? Good! Help shut the harassment down. Here is a good place to start.

2021-02-01 20:40:37 RT @timnitGebru: What happens to me is actually exhibit A of what we write in our paper which the same privileged men like @JeffDean who d…

2021-02-01 19:28:17 RT @Cometml: This #BlackHistoryMonth we'll be highlighting some great Black ML & Deb Raji is a @Mozilla Fellow working…

2021-02-01 18:15:50 RT @haydenfield: A hot-button research paper—the one that made headlines in connection w/Google's Ethical AI team—was just accepted into an…

2021-02-01 15:43:32 @alexandersclark @zeibura Definitely with /kw/ at least some of the time.

2021-02-01 15:26:33 @zeibura I definitely say it with /kw/. (The vowels are the same for me in quart/court, though, and maybe that's what the example was originally about?)

2021-02-01 15:19:34 Now I'm super curious which varieties merge quart and court...

2021-02-01 15:18:15 Do each of the following pairs of words sound the same or different to you?
writer / rider
cot / caught
Mary / marry
poor / pour
pull / pole
vial / vile
mirror / mere
quart / court
wear / where
https://t.co/FAiiyuAFai

2021-02-01 14:28:33 RT @symbolicstorage: You can still register for free for the fourth webinar on the #futureoflinguistics which will take place on Feb 11th a…

2021-02-01 04:11:04 RT @BayesForDays: This reminded me to set up an NLP Education Discord (open signups -- if you teach mathy languagey stuff): https://t.co/Yy…

2021-02-01 03:57:05 @mimismash @Twitter @TwitterSupport He created this account in Jan 2021 --- apparently with the purposes of promoting his screed about our paper and harassing us (esp. Timnit), as is immediately clear from his timeline. Is this how @Twitter @TwitterSupport intends their platform to be used?

2021-02-01 03:48:06 Thank you, @mimismash for documenting all of this. @Twitter @TwitterSupport --- this is a *pattern* of harassment. If we report the tweets one by one (as I and others have), can you see the pattern? If not, is there another channel we can use to make it visible to you? https://t.co/o3n8TohjxR

2021-02-01 03:44:11 @complingy @brendan642 @chrmanning This comes up in my syntax class in CLMS --- and each year I do tell the students that it's the single most important slide of the whole course. https://t.co/tM4jFp98Zi

2021-02-01 03:42:08 @complingy @brendan642 @chrmanning True --- but I think regardless of the size of the curriculum (one semester, two-course sequence, MS program) we can think about "what are key ideas about language that should be included?" and some key concepts from sociolx definitely belong!

2021-02-01 03:40:49 RT @TeachingNLP: what should the #nlproc curriculum consist of?? we hope to discuss this at the #TeachingNLP workshop - case studies, posit…

2021-02-01 03:01:56 @brendan642 @chrmanning Totally agreed that sociolx is important because our tech touches people's lives more. But I don't think that just because current nlp engages less with structure means that MS students don't need to know something about it.

2021-02-01 01:25:15 @ZeerakW @annargrs We (w/@LeonDerczynski) did put a lot of thought into the #COLING2018 review forms. The final version is here: https://t.co/x9yXlCcJfA Discussion post here: https://t.co/HBu9oY8YfZ

2021-02-01 01:12:33 @annargrs I was hoping this would catch your attention :) I think such a resource would be great. It could also include some discussion about how the questions asked in review forms shape the reviews &

2021-02-01 01:09:26 @brendan642 Anyway, I'm making a note to talk about this with my colleagues.

2021-02-01 01:09:13 @brendan642 Could we maybe make it choose two of three out of phon/socio/syntax? Maybe. (But also socio has phon as a prereq here, and you can't really learn about how linguistic structures vary with social factors w/o understanding the structures.)

2021-02-01 01:08:09 @brendan642 Definitely a tough Q. In CLMS, we require (1) phonetics, (2) syntax, and then one linguistics elective. I think there's a lot of value in the syntax class (it's my class, so...) esp. in today's NLP landscape where so much work doesn't engage with linguistic structure. >

2021-02-01 00:50:42 Reviewing for workshops and once again seeing the exact same default set of review questions in START. I wonder how to get ws organizers to put thought into this ... maybe it should be part of the workshop proposal. #NLProc

2021-01-31 23:09:12 @SJEqualizer @rajiinio @timnitGebru People who have blocked him have reported receiving emails with screen caps of their tweets---additional super creepy behavior. I have him muted, but that doesn't mean that there's no point in reporting the harassment. (At least, if Twitter ever gets its act together...)

2021-01-31 23:07:50 @rajiinio @timnitGebru Just went through and reported a whole bunch of his tweets as targeted harassment. (Incl the one where he claims that we were trying to get him banned from the FAccT conference, when we'd done no such thing.)

2021-01-31 22:56:01 @TaliaRinger Super creepy indeed. I'm sorry you have to deal with this crap---and thank you for the work you are doing!

2021-01-31 22:04:36 Having said that, we don't have sociolx as a required course in CLMS, though I'll think about how we could. (We do spend one day on linguistic variation in my syntax class, which is required, but that hardly seems sufficient.)

2021-01-31 21:55:36 Reading a paper where the authors describe "informal language" as "incorrect language". Can we make sociolinguistics a required component of the #NLProc curriculum already?

2021-01-31 17:35:01 Finally listened to the @lingthusiasm ep about writing -- delightful as always! For those who want to learn more, esp around how Chinese writing was borrowed and adapted, I highly recommend Zev Handel's book: https://t.co/BtkLlWo6yz https://t.co/gLUTEAfANM

2021-01-31 15:51:39 RT @fusaroli: would you want to apply for a Marie Curie postdoc with me (Aarhus, @interact_minds) on experimental and computational approac…

2021-01-31 15:08:45 @dlowd

2021-01-30 19:39:33 RT @ACharityHudley: Join the authors of “Toward Racial Justice in Linguistics: Interdisciplinary Insights into Theorizing Race in the Disci…

2021-01-29 20:31:17 RT @csdoctorsister: Come join us for the rich dialogue + action plans!

2021-01-29 14:27:31 @yoavgo @LeonDerczynski @casademalafama p.s. None of the above is original to me

2021-01-29 14:26:45 @yoavgo @LeonDerczynski @casademalafama fn: lived experience of the US racial construct varies hugely depending on how the construct places you. People accorded whiteness can choose to be ignorant of all of this

2021-01-29 14:26:37 @yoavgo @LeonDerczynski @casademalafama I can certainly imagine that understanding some of this discourse is difficult if you don't have (much) lived experience of the US racial construct for reference, but it's worthwhile nonetheless. /10

2021-01-29 14:26:29 @yoavgo @LeonDerczynski @casademalafama But we aren't special in having systems of oppression. Nor are anti-Blackness, colonization, etc confined to our borders. /9

2021-01-29 14:26:20 @yoavgo @LeonDerczynski @casademalafama The particular histories of genocide of Indigenous people, enslavement of Africans, denigration and exploitation of immigrant populations in the US are particular to the US. /8

2021-01-29 14:26:12 @yoavgo @LeonDerczynski @casademalafama You can't grow up swimming in this soup without internalizing it. So part of anti-racism is the active and on-going process of identifying and working against such messages (internally and externally). /7

2021-01-29 14:26:03 @yoavgo @LeonDerczynski @casademalafama 2. I grew up in a society drenched in messages about what it means to be of different races: white people are "normal" (race-less) and get to be individuals. Everyone else is subject to different stereotypes and furthermore must always represent their 'race' (racialized). /6

2021-01-29 14:25:41 @yoavgo @LeonDerczynski @casademalafama I didn't ask for this, but if I don't work against it, I am accepting it. /5

2021-01-29 14:25:16 @yoavgo @LeonDerczynski @casademalafama Reasons: 1. In many different ways, the society I live in benefits white people at the expense of others: financially, in terms of access to education and jobs, in terms of who the laws &

2021-01-29 14:25:08 @yoavgo @LeonDerczynski @casademalafama (Context: My family immigrated from Eastern Europe in the early 1900s. We're Jewish. My great-grandparents were not considered white. But the way the racial construct works here, my grandparents were effectively offered whiteness if they assimilated and they did.) /3

2021-01-29 14:25:01 @yoavgo @LeonDerczynski @casademalafama As a white person in the US, who understands that racism has done and continues to do harm, it is not enough for me to say, "Well, I'm personally not racist, so this isn't my problem," for two reasons: /2

2021-01-29 14:24:52 @yoavgo @LeonDerczynski @casademalafama I have to admit I haven't actually read Kendi (nor do we cite him), but for me anti-racist means actively working to oppose racism, and contrasts with both overtly discriminatory views/acts and positions that attempt 'neutrality', which are not in fact 'above the fray'. /1

2021-01-29 03:18:54 @rodgerkibble Thanks!

2021-01-29 03:18:48 @arthur_spirling Thanks!

2021-01-29 03:18:42 @BrownSarahM Thanks!

2021-01-29 03:18:37 @IgorBrigadir @lbcao Thanks!

2021-01-28 20:56:33 RT @ledell: The *amazing humans who work on the Google Ethical AI team* (and potentially elsewhere in the future) are leaders in #AIEthics.…

2021-01-28 20:40:04 @MichaelMallari Hey, I see you retweeting my tweets with this set of hashtags a lot. Just wanted to pop in to say that I know how to use hashtags and I put the ones on that I think are appropriate. I don't need help with that.

2021-01-28 20:20:11 @brendan642 @arthur_spirling That's part of what I'm trying to figure out. It seems like it might, via a) approaches to "unstructured" data and/or b) university administrators wanting to cash in...

2021-01-28 19:31:16 #AcademicTwitter: What is your favorite overview reading on Data Science? (Looking to understand what that term refers to, and how it relates to NLP, computational linguistics, &

2021-01-28 17:47:02 RT @jahochcam: Dear everyone who signed and everyone who's considering doing work with the signed language communities, the chapter has bee…

2021-01-28 17:46:59 Great news!! Thank you @jahochcam for spearheading this effort! https://t.co/rpqq5wzDyD

2021-01-28 17:00:34 @John_J_Howard @_KarenHao @techreview @jovialjoy @rajiinio @woj_zaremba @chelseabfinn Thanks for the shout out, but I am not qualified for any "under 35" lists by ... more than a decade. (PhD 2001, faculty at UW since 2003, but I guess I come off as youthful/less experienced than that?)

2021-01-28 06:21:59 RT @katestarbird: At the UW Center for an Informed Public (@uwcip), we’re looking to bring on another cohort of postdoctoral scholars who a…

2021-01-28 04:59:52 @linasigns Something like: When I talk with people through an interpreter, I am looking at both the interpreter and the person talking, so I can understand ... So, if you are comfortable turning your camera on for office hours, I hope you will do so. It will help me better understand you.

2021-01-28 04:58:41 @linasigns That is tough. I wonder if you can make a general statement to the students about the situation, that still acknowledges that they might not want to put video on. >

2021-01-28 04:54:46 @boknilev Yeah -- it kinda feels like they're asking the academic community to do their work (testing their models) for them for free.

2021-01-28 03:22:46 @EmilyRemirez This work by Alicia Beckford Wassink (2020) might be an ex: Where Sociolinguistics and Speech Science Meet: The physiological and acoustic consequences of underbite in a multilectal speaker of African-American English. The Routledge Companion to the Work of John Rickford.

2021-01-28 01:13:51 (Haven't --- to my knowledge --- made this mistake yet in 2021, but I almost did and was viscerally reminded of having made it in previous years.)

2021-01-28 01:13:17 PSA to anyone putting events on their calendar for Feb and/or March ... not a leap year this year, so the layout looks the same. "Okay, so Tuesday the 2nd..." is dangerous unless you double check the month you're looking at in the calendar.
2021-01-27 23:11:16 @mariusmosbach @csdoctorsister @timnitGebru One of the patterns of racism is that people on the receiving end of it spend lots of time & 2021-01-27 23:10:09 @mariusmosbach @csdoctorsister @timnitGebru It further compounds the harm because asking is denying someone's lived experience (gaslighting) and because it is demanding further labor from the person who spoke up about how they were mistreated. > 2021-01-27 23:08:25 @mariusmosbach @csdoctorsister @timnitGebru And so when someone speaks up about a particular instance BECAUSE IT IS PART OF A PATTERN saying "But couldn't there be another explanation?" "But what about?" just negates their experience and further compounds the harm. > 2021-01-27 23:07:47 @mariusmosbach @csdoctorsister @timnitGebru The general principle, however is this: racism (and sexism and ableism etc) is way harder to perceive when one isn't on the receiving end of it AND racism is endemic and so those on the receiving end of it are well positioned to see what is part of the pattern. > 2021-01-27 23:06:24 @mariusmosbach @csdoctorsister @timnitGebru This doesn't feel like an "honest Q", with the framing "only explanation", because no one is asserting that racism is the only explanation here. Just that it is a factor. > 2021-01-27 22:45:11 @Yulongchen1010 This might be helpful: https://t.co/7zL8lJJ3c7 2021-01-27 21:05:24 @yoavgo @csdoctorsister @timnitGebru Yoav, if you're interested in combatting racism, here's a tip: If someone says they've experienced racism, it never helps to say "no you didn't". You can always choose to keep that thought to yourself. (And, better still, examine why you feel compelled to argue about it.) 2021-01-27 18:48:07 @yoavgo @dlowd We sure didn't *ask* Google to put their PR might into promoting the paper. We did circulate the submission draft to specific people for feedback, none of which included things along the lines of "OMG!!1! 
you can't translate 'both genders'" nor "you should define 'hegemonic'." 2021-01-27 18:46:57 @yoavgo @dlowd Not preaching 2021-01-27 17:47:35 @yoavgo @dlowd So yeah: like writing about NLP for NLP people. 2021-01-27 17:47:11 @yoavgo @dlowd The paper is published at #FAccT2021, and that was the venue we wrote it for in the first instance. 2021-01-27 17:45:29 @yoavgo @dlowd @databoydg You seem to be taking the view that anything that is identified as possibly harmful is therefore to be 100% prohibited? 2021-01-27 17:44:10 @yoavgo @dlowd If folks don't understand a paper, they can ask questions rather than going on Twitter rants characterizing the paper as "extreme" and "one sided". 2021-01-27 17:37:44 @yoavgo @dlowd If only you were able to read the paper without getting instantly defensive (or maybe with sufficient understanding of what terms like "systems of oppression" mean), you might see that we said what we meant. 2021-01-27 17:35:23 @yoavgo @dlowd And in the case of MT/ASR, I want a system that doesn't produce "both genders" when it's not a faithful representation of the input. 2021-01-27 17:34:02 @yoavgo @dlowd When LMs are run as generators (stochastic parrots) and come up with such this, yes, that's harmful, and it shouldn't be done. But that's different to using LMs in MT or ASR systems where some person has said something and the system's job is to produce a transcript/translation 2021-01-27 17:28:30 @yoavgo @dlowd Nowhere are we saying that ASR or MT should fail on those phrases. 2021-01-27 17:19:05 @yoavgo And sometimes, yes, the answer is "don't do/build the thing", but not always. And in either case, understanding the potential for harm is critical. 2021-01-27 17:18:36 @yoavgo The point of talking about "potential for harm" is not to say: you may not do anything that has any potential for harm, but rather to understand what those potentials are so that they may be mitigated. 
> 2021-01-27 17:17:47 @yoavgo There is no possible way to create completely 'clean' data sets or fully 'debiased' models. We are advocating for working at scales where the data can be documented, understood, curated. > 2021-01-27 17:15:03 @yoavgo Well, it's neither our fault nor our responsibility to cater to those who feel that any loss of unearned privilege is oppression. If you find the paper "extreme", that's on you, not on us. I think there is tremendous value in boldly claiming anti-racist (etc) stances as normal. 2021-01-27 17:13:45 @yoavgo We aren't actually prescribing any particular proportionality --- we are investigating the status quo and pointing out, inter alia, that it is not proportional. 2021-01-27 17:12:42 @yoavgo We aren't saying "don't build LMs". We are saying: consider these things as you decide what to build & 2021-01-27 17:11:45 @yoavgo We aren't censoring anything, though. We are pointing out that the hegemonic view is overrepresented, that there are potential harms that should be considered, and that if the training data isn't even documented, there is no recourse for addressing those harms. 2021-01-27 17:05:29 @timnitGebru Yeah, it's really striking (and awful) to watch how differently people are speaking to each of us. 2021-01-27 17:03:25 @yoavgo I guess this is what you consider "extreme"? Is it "extreme" to take it for granted that all people are created equal, deserving of equal rights, and that furthermore this is not the current state of play (and thus needs addressing)? 2021-01-27 17:02:50 @yoavgo We outline in detail (grounded in various fields of research) how encoding these in LMs and then letting them take action in the world (generate text but also as components in other systems) does harm by reinforcing & 2021-01-27 17:02:38 @yoavgo Well, we wrote a whole paper about what we're trying to get at. 
We call the worldview "hegemonic" not because it's "average American" but because it encodes status quo systems of oppression (racism, misogyny, cisnormativity, ableism, etc etc). > 2021-01-27 15:46:23 @ShlomoArgamon @yoavgo Actually, we don't use the phrase "hegemonic language". @yoavgo made that one up. We talk about "hegemonic viewpoints" and "hegemonic worldviews" though. 2021-01-27 05:33:56 @jaschasd @timnitGebru @JeffDean It makes no sense at all for Google to fire @timnitGebru ostensibly over a paper talking about limitations of LLMs and then invite her (and her erstwhile team, reportedly under much stress as a result of her firing) to contribute to this benchmark. 2021-01-27 04:00:59 @ptullochott @timnitGebru @JeffDean IOW: The problem with our politics isn't lack of unity, it's racism. 2021-01-27 04:00:25 @ptullochott @timnitGebru @JeffDean 1)That's actually the converse of what you said above (PD gets airtime b/c country is divided v. country is divided b/c folks like PD get airtime) 2) It's still not relevant. 3) "Divided" is very both-sides-y. 2021-01-27 03:52:28 @ptullochott @timnitGebru @JeffDean " racists like that asshole are the reason this country is so divided." ... we weren't talking about why the country is divided??? 2021-01-27 03:52:03 @ptullochott @timnitGebru @JeffDean I would have thought so to, which is why I'm asking why you're hijacking this thread to turn it into commentary about "political division" rather than the actual case of harassment at issue. 2021-01-27 03:50:39 @ptullochott @timnitGebru @JeffDean Not saying Canada doesn't have issues with racism & 2021-01-27 03:48:58 @ptullochott @timnitGebru @JeffDean "The political" and "political division" seems like a non-sequitur to this discussion, is what I'm saying. The topic of discussion is the specific harassing by Pedro of women scholars on Twitter. Your tweet shifts focus away from that. 2021-01-27 02:35:16 @ptullochott @timnitGebru @JeffDean Also "this country"? 
Your Twitter bio has you in Canada. Which country are you talking about?
2021-01-27 02:34:47 @ptullochott @timnitGebru @JeffDean Agreed it's outrageous that they sought his (totally irrelevant) opinion and gave him so much space. But your pivot to "this country being so divided" strikes me as odd. There are far greater harms from racism than political division.
2021-01-27 01:08:41 RT @lucy3_li: Please consider submitting to our workshop!! We allow two kinds of submissions: teaching materials and regular papers (opin…
2021-01-26 22:45:33 @timnitGebru Super creepy indeed. I'm really sorry you are being put through this.
2021-01-26 21:30:52 Called it wrong. He went DARVO instead.
2021-01-26 20:55:49 @nsaphra Instant classic!
2021-01-26 20:34:55 Cue "But I'm just standing up for science" or similar BS in 3... 2... 1...
2021-01-26 20:32:05 The mystifying question is: why? What has Pedro so incredibly threatened that he is motivated to keep attacking Timnit like this?
2021-01-26 20:31:13 I've reported this tweet as targeted harassment and suggest that others do the same. * Trawls through a board where people post anonymously and without any accountability * Uses sock puppet to repost to Twitter * Uses own platform to boost bullying post on Twitter. https://t.co/NpMERZRdz1
2021-01-26 19:10:22 @totopampin @timnitGebru At any rate, I'm confident about the relative long-term prospects for a paper accepted at #FAccT2021 as compared to a self-published screed.
2021-01-26 19:07:18 @totopampin @timnitGebru I don't know how frequently Google Scholar indexes things like my publications page, but I do know that they do, because the search for "Stochastic Parrots" also turns up slides from my recent talk at MPI Nijmegen where I cite the paper: https://t.co/SF2uYt9U2s
2021-01-26 19:06:29 @totopampin @timnitGebru I suspect that's largely due to the fact that Lissack wrote his "critique" based on the submission version of our paper (which we did not put out on the web) and has had it posted for longer that our pre-print (which we posted once the camera-ready was complete).
2021-01-26 19:00:35 @UnderdogGeek It surely has been a surreal experience (esp. when the paper wasn't actually out but was the subject of media attention), but of all of the authors, I have it the easiest (being tenured). I am very inspired by and grateful to my co-authors!
2021-01-26 18:54:11 #SCiL2021 schedule is up! https://t.co/ozRZtjo511 Note that registration is FREE for students (but you do have to register). #Linguistics #NLProc
2021-01-26 18:31:55 Thank you, @UnderdogGeek https://t.co/3lGXDtYm9T
2021-01-26 16:36:45 RT @RadicalAIPod: *How* do we measure "success" in AI? *Who* is measuring? *What* are they measuring? *At what cost*?? Listen to @Dylandoyl…
2021-01-26 14:24:23 @NLPnewb @chrmanning @Diyi_Yang @rctatman @JayAlammar @cocoweixu There are surely modeling steps in visualization (the visualization is a visualization of a model of the data), not all models are ever visualized or well suited to visualization.
2021-01-26 14:23:06 @NLPnewb @chrmanning @Diyi_Yang @rctatman @JayAlammar @cocoweixu I take modeling to be building a model of some phenomenon, which is then either used to study the phenomenon or in production to mimic the phenomenon. Visualization is about presenting information in a visual modality for human consumption.
2021-01-26 13:26:34 RT @aclmeeting: Given the glitch of softconf on Jan 25, we extend the abstract submission deadline by 12 hours: the due time would be ***no…
2021-01-26 04:06:40 @manaalfar Also, it's super weird to engage someone who is apparently paid to be on social media defending the reputation of a large company, and also didn't seem to be very interested in taking the input I was providing (beyond agreeing that one particular example is bad).
2021-01-26 04:05:38 @manaalfar I skimmed them. They're marketing disguised as information. When he finally pointed me to the guidelines, that did start to answer my questions, and not in a particularly reassuring way.
2021-01-26 03:55:19 Word to the wise: if you get into a discussion on Twitter with a Google PR person, you're going to get referred to a million glossy blog posts.
2021-01-26 03:51:42 @dannysullivan @Google "Ordinary people around the world": so, crowdworkers. In other words, Google isn't actually investing in the expertise that would be required to fix this.
2021-01-26 03:49:29 @dannysullivan @Google On the "Jump back in time" the earliest period offered is the colonial period. And that is the problem. That is where the erasure is happening.
2021-01-26 03:47:47 @dannysullivan @Google Oh yes, the "solve" is more general, but I don't think it's a technical solution that you're missing. I think it's a question of how you allocate resources (what expertise you value &
2021-01-26 03:46:37 @dannysullivan @Google And so I ask again: Is there anyone involved with this quality assurance task who is prepared to recognize that fact and its implications? Or is this a system run by engineers + crowd workers?
2021-01-26 03:46:08 @dannysullivan @Google Have you looked at the page? Click here: https://t.co/L7JcvgGR5u And then select "Jump back in time". Time starts in 1492, according to that page. It is ABSOLUTELY erasing the pre-contact Indigenous history of the Americas.
2021-01-26 03:45:05 @dannysullivan @Google Which leads me to ask: Who are the search raters? Are they information professionals who would have appropriate expertise to be able to judge these things? Or is this a task for crowdworkers, who would only have such expertise by chance?
2021-01-26 03:44:08 @dannysullivan @Google The site that it is a part of isn't specific to colonial America --- it presents itself as a site about American history, which it then only shows starting in 1492.
2021-01-26 03:36:43 @dannysullivan @Google By "stakeholders" above I mean people who are the ones who experience the harm in cases like this. It doesn't sound like you're actively seeking out such stakeholders.
2021-01-26 03:35:28 @dannysullivan @Google Are your guidelines sophisticated enough to train your raters to look out for erasure of Indigenous people?
2021-01-26 03:34:42 @dannysullivan @Google The site behind the bad snippet in the "people come to America" example is a .gov, so presumably reliable, but its notion of American history apparently starts in 1492. https://t.co/b6fTQZ2qaS
2021-01-26 03:33:51 @dannysullivan @Google "we like to say that Search is designed to return relevant results from the most reliable sources available.": This does not reassure me that you are proactively seeking the perspectives of marginalized populations. Relevant according to whom? Reliable according to whom?
2021-01-26 03:26:37 @dannysullivan @Google I'm not asking you to proactively run billions of searches each day. I'm asking how you work with stakeholders to understand the possible kinds of harms: https://t.co/OEkLzMw64v
2021-01-26 03:18:06 @dannysullivan @Google I'm glad to hear that. But you still haven't answered the core question: what are you doing *proactively* to avoid these things, rather than reactively when someone surfaces them for you?
2021-01-26 03:16:13 @dannysullivan @Google In this case, the harm was much closer to home for me (so I can report more directly): https://t.co/FzXvRlg6yi
2021-01-26 03:15:14 @dannysullivan @Google I am curious what your process is for determining just how much harm of this type to allow / how to weigh it against the value of the snippets when they work. Are you consulting stakeholders? Who and how? Also, I have another example for you of the snippet causing harm:
2021-01-26 03:06:04 @dannysullivan @Google But this isn't about search results, this is about snippets, which could be restricted to previously seen searches and verified search/snippet pairs.
2021-01-26 02:59:16 @dannysullivan @Google And perhaps most importantly, is there any process in place* for systematically searching for these, or do you just wait to get embarrassed again and then try to fix that one, in a game of whack-a-mole? *That admits the possibility of turning OFF snippets, for example.
2021-01-26 02:58:22 @dannysullivan @Google This is of course very much in the same space as the harms Safiya Noble documented in _Algorithms of Oppression_. Does @Google have any OKRs about not returning racist results? How are they weighed against other metrics?
2021-01-26 02:56:58 @dannysullivan @Google Also, I think there is room for UX improvement to make the results appear less authoritative / surface the UNCERTAINTY and make the "feedback" button just a little more evidence.
2021-01-26 02:55:45 @dannysullivan @Google I'm glad you're looking into it, but I don't think scale/15% new queries is an excuse here. The snippets don't show up for every search after all. Those could be more curated. https://t.co/6lKqzJl7nE
2021-01-26 02:51:01 Looks like this was from Jan 2019, which seems pretty recent (model cards, data sheets, data statements were all 2018), but hopefully this slide would stand out more now, two years later? https://t.co/dBXBSlsnGk
2021-01-26 02:49:18 This so neatly encapsulates so much of what is wrong with the big data approach to NLP: The creation (and curation) of data isn't valued. It's about quantity, not quality What's clever (& We get data, it doesn't come from people
2021-01-26 02:45:05 From a slide deck I came across recently (undated, but from Devlin, presenting to Stanford AI): https://t.co/6YjmhZybQo
2021-01-26 00:41:05 @Teejip @Google Dude, I don't need your mansplaining about the way *someone else* formulated a question. It's also entirely beside the point of Hank Green's post and mine.
2021-01-26 00:29:03 @yoavgo @Google @JanelleCShane And yet people isn't bolded in the first one, which is actually the one that is the problem.
2021-01-26 00:27:09 @yoavgo @Google @JanelleCShane Also, if you look at the actual search results for "when did people come to America", there are lots of more appropriate pages to pull answers from. Something is ranking which one to pull the text box from...
2021-01-26 00:25:46 @yoavgo @Google @JanelleCShane The second paragraph has both "human" (not "humans") and "peopling", which I would expect gets lemmatized to "people" as much as "humans" gets lemmatized to "human".
2021-01-26 00:01:56 @dannypgh @timnitGebru @mmitchell_ai Yep! https://t.co/ZI6LLG3UO2
2021-01-25 23:46:37 @Google And yeah, I suspect @JanelleCShane is right that this is an artifact of how the authors of the text behind LMs use different words to refer to colonizers and Indigenous people. https://t.co/zmjtQ4GkzX
2021-01-25 23:42:08 .@Google this is harmful (and embarrassing). What message are you sending to an Indigenous kid searching for this information when they see this answer re "people"? How about to non-Indigenous kids? What processes do you have in place to fix these when reported/prevent them?
https://t.co/6FKDuEZcCU
2021-01-25 21:00:47 RT @OlgaZamaraeva: Academic friends, especially linguists, especially those who are or have recently been on the academic job market, espec…
2021-01-25 17:54:56 RT @NAACLHLT: The reviewer response form issue has been fixed
2021-01-25 15:32:29 @yoavgo This is a deliberately obtuse misreading of our paper. We aren't talking about using LMs to study the terabytes of data produced by Hemingway nor about LMs used to study reddit. We ground our discussion in the particular trend of large "general purpose" LMs overviewed in Sec 2.
2021-01-25 14:46:34 @egrefen Oh hell no. Research is a conversation, with papers as the speech acts. If they are constantly changing, then the conversation gets unmanageable. On top of that: researchers need to be able to *wrap up* specific papers and move on to the next.
2021-01-25 14:21:51 @j2bryson @DuckDuckGo The link in my original tweet is a https://t.co/6pJHZVBOV2 link, which points to a link on my publications page.
2021-01-25 04:10:40 @DebjitPaul2 @aclanthology For a specific conference, I would guess that the conference schedule on the web page would be one way to go.
2021-01-24 22:14:44 @geomblog @timnitGebru @mcmillan_majora Thank you for this.
2021-01-24 21:21:09 @evanmiltenburg There is some linguistic study! https://t.co/lZH1Jz5KNc But that's not what they're created for, to be sure.
2021-01-24 21:20:04 @yoavgo And yes, his online behavior is very relevant --- it was precisely around promoting this dreck that he was harassing us.
2021-01-24 21:19:25 @yoavgo Pseudo-academic, more like it &
2021-01-24 21:17:49 @yoavgo I am not taking your suggestions for editing our paper at this point. It is complete. We said what we said. Go ahead and publish your own response to it.
2021-01-24 21:15:33 @yoavgo I'd say we do. If you want to make an argument to this point that you find more precise, I think you'll find more success if you can do it without calling our work "political" &
2021-01-24 21:14:27 @evanmiltenburg He seems to have backed off to "the data as it is", which isn't even "the language that was used" but "the language that was collected". I totally agree that there are uses of LM technology in studying what people say, but that's diff to "general purpose" LMs.
2021-01-24 21:13:16 @yoavgo As I said in my thread above (not to mention at length in the paper): size and quality are not independent.
2021-01-24 21:12:22 @yoavgo Also: you're the one who chose to cite Lissack in your piece, despite the fact that he has been harassing Timnit and I on Twitter and that his premise seems to be that it is unscientific to leave it as unspoken that white supremacy is bad. So yeah, I'm not gonna let that slide.
2021-01-24 21:11:12 @yoavgo So public attention on the fact that lots of tech that is being deployed in the world &
2021-01-24 21:09:10 @yoavgo But "the data as it is" is completely incoherent. Data sets only exist when they have been collected, representing a whole series of decisions in terms of what to collect &
2021-01-24 21:04:09 @quantadan @timnitGebru @mcmillan_majora @mmitchell_ai Thank you :)
2021-01-24 21:02:47 The claim that this kind of scholarship is "political" and "non-scientific" is precisely the kind of gate-keeping move set up to maintain "science" as the domain of people of privilege only. /fin
2021-01-24 21:02:41 We draw on scholarship from a range of fields that looks at understanding how systems of power and oppression work in society.
2021-01-24 21:01:16 And lastly, miss me with the claim that our work is "political" and therefore has a responsibility to "present the alternative views". See also: https://t.co/2rESMx9c1b
2021-01-24 21:00:58 Furthermore, the "debate" you would like us to acknowledge is based on a false premise.
As we lay out in detail in Sec 4, the training data emphatically do NOT represent "the world as it is".
2021-01-24 21:00:38 As for the claim that our paper is one-sided, this is exhausting. All of ML gets to write papers that talk up the benefits of the tech without mentioning any risks (at least until 2020 w/broader impact statements), but when a paper focuses on the risks, it's "one-sided"?
2021-01-24 21:00:08 This isn't a presupposition we share. (And please don't misread: I'm not saying it should all stop this instant, but rather that research in this area should include cost/benefit analyses.)
2021-01-24 20:59:55 Likewise, nowhere do we say that small LMs are necessarily good/risk-free. There, and in your points about smaller models possibly being less energy efficient, you seem to have bought into a world view where language modeling must necessarily exist and continue.
2021-01-24 20:59:25 Furthermore, I've now had a minute to read your critique, and I disagree with your claim that our criticisms are independent of model size. Difficulty in curating and documenting datasets absolutely scales with dataset size, as we clearly lay out in the paper:
2021-01-24 20:58:33 Thus, the motivation for writing this paper. We aren't saying "LLMs are bad" but rather: these are the dangers we see, that should be accounted for in risk/benefit analyses and, if research proceeds in this direction, mitigated.
2021-01-24 20:58:19 *(Exceptions to this: (1) the big body of work--including yours--into whether the models absorb bias and (2) the GPT-2 staged roll-out paper (and references cited in its sec 1.) https://t.co/01JwYFVnfh
2021-01-24 20:57:59 2. Content: I stand by the title and the question we are asking. The question is motivated because the field has been dominated by "bigger bigger bigger!" (yes in terms of both training data and model size), with most* of the discourse only fawning over the results.
2021-01-24 20:57:41 1. Process: The camera ready is done, and approved by all of the authors. If I make any changes past this point it will be literally only fixing typos/citations. No changes to content let alone the title.
2021-01-24 20:57:29 No, I will not make any such change, for two reasons: https://t.co/jCk6UvT2hA
2021-01-24 17:31:42 @ani_nenkova On the flip side, as a reader specifically when bidding on papers to review, I definitely form opinions based on the title: If I can tell from the title what machine learning algorithm was used but not what #NLProc tasks are studied, I'm not interested.
2021-01-24 17:29:14 @ani_nenkova From recent experience: If you're writing a paper that turns out to be widely discussed (for... reasons), it helps to have included a low frequency bigram in the title.
2021-01-24 17:17:00 @roger_p_levy @timnitGebru @mcmillan_majora @mmitchell_ai Thanks, Roger.
2021-01-24 13:49:07 RT @aclmeeting: Paper rejected by #ICLR2021? You CANNOT resubmit it to #ACL2021NLP BUT if you withdrew your submission from #ICLR2021 on or…
2021-01-23 15:05:04 @carlosgr_nlp @nlpnoah @anmarasovic @timnitGebru @mcmillan_majora @mmitchell_ai Thank you! If I'm allowed an update, this will be part of it too.
2021-01-23 14:53:39 RT @emilymbender: Camera ready complete & On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? https://t.co…
2021-01-23 05:25:06 RT @drjosephjmurray: Cpt Hall's #PledgeofAllegiance in #AmericanSignLanguage on #InaugurationDay has rightfully seized our attention. In th…
2021-01-23 04:58:20 @nlpnoah @anmarasovic @timnitGebru @mcmillan_majora @mmitchell_ai There are one or two other typos I'm tracking, so I might try to sneak in an update before the pubs chairs send the paper along. So, good to know. FWIW --- we checked via Google Scholar, which only shows arXiv + openreview for that paper.
2021-01-23 04:52:44 @nlpnoah @anmarasovic @timnitGebru @mcmillan_majora @mmitchell_ai It might be. Is it worth yours to point me to the papers in question?
2021-01-23 04:51:00 @nlpnoah @anmarasovic @timnitGebru @mcmillan_majora @mmitchell_ai We tried to go find archival versions of all of the things we'd had as arXiv papers. Are there some we missed?
2021-01-23 04:34:22 @LkjonesSOC I have similar issues with debuted.
2021-01-23 04:30:23 @mettle @glupyan @jdp23 @_KarenHao @digitalsista @teamameelio I lived in SV from 1995-2003 and ... yeah the cultural ethos was very much "dot com boom" & I'm fairly sure the folks who are building tech to oppose power structure are doing so in community with the people they aim to serve. That's not who I'm talking about.
2021-01-23 01:59:54 Camera ready complete & On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? https://t.co/CSMZPlJd8h To appear in #FAccT2021 w/@timnitGebru @mcmillan_majora and Shmargaret Shmitchell cc: @mmitchell_ai
2021-01-22 21:51:46 RT @eatkinson1387: Science hivemind! Seeking child participants for a virtual study on sentence interpretation. Looking for 4.5-6.5 year-ol…
2021-01-22 20:50:36 @glupyan @jdp23 @_KarenHao @digitalsista Not saying there aren't people in SV who are working on the latter, but that's not what the "move fast and break things" ethos is about.
2021-01-22 20:50:11 @glupyan @jdp23 @_KarenHao @digitalsista Have you spent any time in Silicon Valley? "Break things" is "disrupt markets" which is "find things to exploit to amass wealth". It's not *ever* about disrupting white supremacy, inequality, etc.
2021-01-22 19:53:37 @rctatman Not unpopular with me. (Paper dropping soon.)
2021-01-22 19:10:20 @HadasKotek But I'm glad you're getting (and enjoying) some rain!
2021-01-22 19:07:34 @HadasKotek West Coast != California For some of us, rain isn't anything exciting
2021-01-22 18:38:33 @iogessi Yeah, that was definitely in the Before Times. I also feel a little bad for @rcalo in that segment, being stuck in the middle of that exchange. But he had plenty of great things to say elsewhere in the panel!!
2021-01-22 15:18:29 @glupyan @_KarenHao aren't*
2021-01-22 14:50:50 RT @ani_nenkova: A fascinating story about how technological interventions that one would argue can only lead to good outcomes actually cau…
2021-01-22 14:15:42 @natematias Did any of it involve helping support other people in doing their work (e.g. students)? If so: vicarious sense of accomplishment :)
2021-01-22 13:08:58 RT @rajiinio: ok, actually happening this time! Hopefully, no more coups will interrupt this.
2021-01-22 05:28:00 Fields that use footnotes rather than "works cited" at the end for bibliography have WACKY abbreviations for journal titles. Just sayin'. #AcademicChatter
2021-01-22 03:54:16 RT @dlowd: Power corrupts. This is why it's bad to concentrate more and more power in the hands of a few — they will use it to exploit tho…
2021-01-22 01:05:02 RT @rajiinio: I see this argument all the time - it's genuinely incorrect. The difference is data - large ML models harbor more than just…
2021-01-21 23:22:58 RT @raciolinguistic: I am again seeking 20 more US non-White participants for a short survey on professional speech. (I promise I'll stop u…
2021-01-21 23:18:51 @glupyan @_KarenHao is talking about expensive models
2021-01-21 22:56:29 @yoavgo Because in moving fast, people can get hurt along the way. "We're curing cancer, don't slow us down!" is not a pass to get out of considering adverse impacts of the research (process or product).
2021-01-21 21:00:14 @grimalkina You are so right that the accolades and incentives are really misaligned. I hope your brother (and whole family) are holding up okay.
2021-01-21 20:55:21 @jessgrieser @GretchenAMcC Who was calling her Penelope at that LSA panel? Sounded so weird.
2021-01-21 20:53:04 @evanmiltenburg Thanks for that :)
2021-01-21 20:49:44 @evanmiltenburg Here: https://t.co/1yT2KcWxoX
2021-01-21 20:49:14 Ah, thanks @evanmiltenburg for the tip! The relevant bit is at 1:14:47.
2021-01-21 20:33:38 I can't figure out how to search the captions, so I can't quickly find it, but the essence was roughly: Etzioni: We have to go fast, we're solving important problems! Bender: If you don't take the time to talk with stakeholders, how do you know if they're actually solved?
2021-01-21 20:32:40 I had an exchange along those lines with Oren Etzioni in this panel back in 2018: https://t.co/lXtB4SnuVY
2021-01-21 20:31:27 I also encounter similar arguments regarding "speed of progress" ("We're curing cancer! We can't let anything slow us down!") which fall apart in similar ways: https://t.co/BJycYvLfOU
2021-01-21 20:27:09 RT @_KarenHao: I see this argument all the time from tech people: Building gargantuan AI models may be computationally, environmentally, an…
2021-01-21 20:26:04 RT @mmitchell_ai: Hello, world! Similar to @timnitGebru, I abruptly lost access to my professional calendar. Any upcoming talks, deadlines,…
2021-01-21 20:09:54 @alienelf @MasakhaneNLP I really appreciated the careful discussion of roles wrt to resource creation --- and the way you also made clear that the roles are fluid!
2021-01-21 19:42:54 @jessgrieser I think the jacket bio wouldn't confuse that and would definitely signal to people that you want to be called Jessi....
2021-01-21 19:34:59 @jessgrieser How do you want it to be cited?
2021-01-21 18:12:46 Further evidence that there are absolute gems buried in Findings of EMNLP: Just got around to reading the @MasakhaneNLP paper and https://t.co/zCa8aqm7Dr #NLProc
2021-01-21 15:20:09 @NikhilKrishnasw Yikes. I think something that alerts the AC (who should have noticed already!!) that one of the reviews was incomplete.
2021-01-21 02:10:57 "Being an individual targeted by one of the world’s largest corporations is terrifying, and reinforces the need for unions in the workplace." Thank you, @AlphabetWorkers https://t.co/TSIVT2uC2C
2021-01-21 00:41:15 @kathrynbck I'm guessing its in the training data: so maybe it's the lg ideologies of the transcribers?
2021-01-20 21:35:23 RT @aclanthology: The ACL Anthology is looking for a (paid) assistant to help with routine operations. There will also be time during slow…
2021-01-20 20:04:26 RT @merrierm: We are now accepting Microsoft Disability Scholarship Fund applications, due March 1. Learn more at https://t.co/rZMuYI68Wp …
2021-01-20 20:00:32 RT @merrierm: @emilymbender @jahochcam Another place to learn more about the complex issues around sign language technologies is in this ar…
2021-01-20 20:00:21 RT @merrierm: @emilymbender And of course the wonderful programs offered by @AccessCompUW and @CRA_WP also offer many scholarship, mentorsh…
2021-01-20 19:45:56 RT @emilymbender: In that connection, I'd like to highlight the Ryan Neale Cross Memorial Fellowship, supporting students studying computat…
2021-01-20 19:45:42 In that connection, I'd like to highlight the Ryan Neale Cross Memorial Fellowship, supporting students studying computational linguistics with the goal of improving accessibility through assistive technology (at UW's CLMS): https://t.co/UYy2z6kioq
2021-01-20 19:44:36 @merrierm Finally, there is a lot of potential for assistive technology, if it is designed with (and by!) the people it is meant to assist.
2021-01-20 19:43:34 @merrierm also points out that a key component of effective assistive technology is features which convey error (and uncertainty) to users who are relying on the technology when they can't directly verify its output themselves.
2021-01-20 19:41:30 On the perennial appearance of systems claiming to "translate sign language", please see this recent open letter lead by @jahochcam https://t.co/r8wLqYu0dK
2021-01-20 19:39:19 @RadicalAIPod @merrierm The way we talk about technology matters, and there are harms from e.g.
claiming that systems "translate sign language" or that pattern recognition is "intelligence".
2021-01-20 19:38:00 Loved this episode of @RadicalAIPod with @merrierm So many things @merrierm said really resonated, but here are some particular highlights: https://t.co/0usczmx3xh
2021-01-20 19:03:10 RT @csdoctorsister: Do you see an antiracist society in US in 2030? Choose to build it. Racism won't be rooted out w/ 1 person flying a…
2021-01-20 18:05:58 RT @emilymbender: Q for #linguistics #lazytwitter --- Clark 1996 and others talk about language use as a joint activity. Is there any good…
2021-01-20 16:22:30 @evanmiltenburg Thanks.
2021-01-20 15:44:00 @evanmiltenburg I mean, aside from that video, which is great, but a little hard to cite. (I suspect the problem is my lack of good search terms.)
2021-01-20 15:43:31 @evanmiltenburg Hi Emiel --- could you share a pointer into this discussion that is in print? So far, I've been able to turn up pieces saying how great it is that big tech is using renewables, but nothing about how they're displacing local use of the same renewables
2021-01-20 14:26:14 Gonna need a little TRANSPARENCY and ACCOUNTABILITY about these camera-ready deadline shenanigans, #FAccT2021
2021-01-19 22:15:35 @wildfonts @_alialkhatib It's here: https://t.co/xThJXK2kv4 (I don't know if Dr. Jones is on Twitter.)
2021-01-19 21:05:05 Q for #linguistics #lazytwitter --- Clark 1996 and others talk about language use as a joint activity. Is there any good work to cite on how we engage in joint activities asynchronously and with people we don't even know when reading, watching videos, etc?
2021-01-19 21:02:20 @_alialkhatib I'm reminded of this blog post from Dr. Leslie Kay Jones: https://t.co/xThJXK2kv4
2021-01-19 20:15:59 @Laserhedvig @becauselangpod @tdanielmidgley Its ranking reflects the effort of the previous editors still, I believe. If you're looking at issues from before the switch, then it makes sense to keep it there. But if you're looking for papers as they come out, I think it doesn't belong.
2021-01-19 20:07:22 @Laserhedvig @becauselangpod @tdanielmidgley Kinda surprised to see Zombie Lingua on that list.
2021-01-19 19:25:33 @LeonDerczynski @jenniferdaniel @GretchenAMcC @dirk_hovy And how will this new approach (described in the blog post) with fall backs to emoji sequences interact with emoji meaning making?
2021-01-19 19:24:57 @LeonDerczynski @jenniferdaniel @GretchenAMcC @dirk_hovy The sociolinguist in me is super curious to know how this uneven roll out affects the ways in which meanings attach to emoji. Do the ways that the people who get them first have a lasting impact on how they are used? Can we observe their meanings getting renegotiated over time?
2021-01-19 18:57:42 @csdoctorsister @DigCivSoc @StanfordPACS @stanfordccsre @BlkWomenInData Congrats!!!
2021-01-19 14:59:44 @evanmiltenburg @annargrs @srchvrs "Lingchick" sounds like what someone might come up with if they've forgotten the word "wug"
2021-01-19 14:55:00 RT @AnthroPunk: Every #AI #ML researcher please
2021-01-19 14:47:10 @evanmiltenburg @annargrs @srchvrs Yeah, I was talking to folks who cringe at stories of others' bad behavior and then worry: do I do that? But also, I think "lingchick" would only be parallel to "techbro" in a counterfactual world where power relations were different.
2021-01-19 06:17:52 @IgorBrigadir @aclanthology Thanks ... they need to fix that URL though! (Should be neurips by now.)
2021-01-19 06:04:05 No bibtex yet for #neurips2020 because the only publication is still the preproceedings, despite the conference being over a month ago??
2021-01-19 06:03:21 Every time I have to go find bibtex entries for something published at NeurIPS, ICML and the like I realize anew how wonderfully spoiled we are in #NLProc by @aclanthology
2021-01-18 16:47:15 RT @jahochcam: This open letter to the Springer Editors has been emailed to the editors in response to a chapter that included offensive la…
2021-01-18 14:21:16 @nsaphra Most frustrating to me are situations where I'm deliberately holding back to make room for others, only to have all that space sucked up by a floor hog. (Esp. in situations where I don't know the others well enough to smoothly hand them the floor w/o putting them on the spot.)
2021-01-18 14:20:00 @nsaphra I, too, tend to jump in at turn changes quicker than most (quick for a west coaster, at least!) and have to keep an eye on this. But there's a world of difference between someone who's hogging the floor and someone who's got a lot to say but also draws out others in conversation.
2021-01-18 06:47:41 @BayesForDays @EmmaSManning Now worrying that someone is going to come across this and miss the sarcasm....
2021-01-18 06:26:14 @BayesForDays @EmmaSManning The version in Nature will include a ML study claiming to be able to predict whether the English variety being spoken is the economy-enhancing kind or the economy-depressing kind on the basis of the facial expressions of the speakers
2021-01-18 03:56:23 @yuvalmarton So antibody neutralization seems like labels to me. For surrounding sequences ... well that's where intuitions about language tell us nothing about what's going on in DNA. Does order matter? Over how large a window?
2021-01-18 03:13:45 This!! https://t.co/9F9eSnA968
2021-01-18 01:47:09 @BrianHie @KevinKaichuang Thanks! I hope the paper itself is clearer ... at least for its intended audience. (Which I don't think I'm really in.)
2021-01-18 01:46:41 @KevinKaichuang Just griping about a) very vague "AI is magic" science writing and b) analogies to language that don't make sense to me as a linguist.

2021-01-18 01:46:19 @KevinKaichuang So it's not at all clear to me why one would expect that difference to come out in the embedding space, which (I'm guessing) is about distribution of gene sequences. But again, I know nothing of bio/comp bio!

2021-01-18 00:52:22 @BrianHie @KevinKaichuang Sorry for the super naive questions, but what does "CSCS" mean? Also, does your system involve any supervised training? What's the input at test time and what are you comparing to as a gold standard?

2021-01-18 00:49:48 @KevinKaichuang Basically, I have no grounds on which to quibble about the underlying research! My complaint here is that the AI reporting is basically saying: Look, genomes look like languages, so our same "magic" from NLP also works here. Which isn't informative.

2021-01-18 00:48:58 @KevinKaichuang But for the second point, I'm still not following, again likely because a) I know basically nothing about computational biology and b) the article doesn't give the info, but if the training data is all "infectious", then what "difference" is "functional difference" measuring?

2021-01-18 00:46:19 @KevinKaichuang So the proteins-are-sequences and LSTMs &

2021-01-17 23:44:31 @michaelzimmer I'd be surprised if there isn't, but I haven't looked into it.

2021-01-17 23:37:14 @michaelzimmer Hello from the quarter system where Tuesday begins week three of the term...

2021-01-17 23:00:33 RT @mmitchell_ai: Another set of good examples of how minorities are treated in CS. (1/n) 1. They are told what they should and should not…

2021-01-17 21:59:49 RT @MikhailovDanil: This great thread from @emilymbender should be required reading not just for those doing tech but those funding / inve…

2021-01-17 18:59:11 Now*

2021-01-17 18:30:32 And more frustratingly, I can't tell from this MIT Tech Review piece what their system is actually "predicting". What are the inputs at test time and what gold standard outputs are they comparing them to? /fin

2021-01-17 18:29:49 No, LSTMs might be a great tool for investigating mutations in viruses. But the whole "biology is written in words" thing is utterly ridiculous. 4/

2021-01-17 18:29:11 However, as a computational linguist, it makes no sense to me and furthermore, the LSTMs they're using aren't doing NLP the way they seem to think they are. 3/

2021-01-17 18:28:31 And they seem to be taking inspiration from their analogy to language, which if it's helping them make progress, that's probably for the better. 2/

2021-01-17 18:27:29 First, I know nothing of computational biology, but I'm glad the field exists and it sounds like they do important work. 1/

2021-01-17 18:27:04 Here's the article that motivated the poll in my previous tweet () https://t.co/AbTL2dsxjW

2021-01-17 18:16:27 Popular press article about recent "AI" advance starts with "Galileo once observed that..." How does this affect your impression of the article?

2021-01-17 15:48:38 Just turned up this thread from ~a year ago, and it still feels quite relevant, esp point 5. https://t.co/ocXOwVvRnq

2021-01-17 13:03:25 RT @BastingsJasmijn: A lot more care needs to go into how datasets are created/constructed, and that includes those used for language model…

2021-01-17 13:02:58 RT @annargrs: Great point. And a direct consequence of how modeling work became the "science", and work on data to train said models - some…

2021-01-17 03:41:48 @databoydg @timnitGebru Bingo.

2021-01-17 03:40:41 @karger @timnitGebru It is. We have not shared it broadly, though we did circulate it to specific colleagues for comments. Someone posted a bootleg copy to Reddit though (someone from Google, I assume) and then this guy wrote his critique of our unpublished ms.

2021-01-17 01:16:44 Threading this in here, too: https://t.co/bM5qahvzmW

2021-01-16 22:04:34 RT @willie_agnew: Buried in the recent trillion parameter language model paper is how the dataset to train it was created. Any page that co…

2021-01-16 19:41:54 RT @mmitchell_ai: Hi world! I'm delegating! Has anyone else made a collection of all notable public statements, news, and tweets abt @timni…

2021-01-16 18:13:47 @khia_johnson I've sent you a DM!

2021-01-16 16:34:28 Like with LMs and tasks intended to test for language understanding https://t.co/UYEM9x9r1e

2021-01-16 13:50:51 @tanmit

2021-01-16 05:18:37 Twitter has been hiding his replies from me, which is really for the better, but I was looking through “show more replies” and found this one. Could he possibly really not know that “the quiet part out loud” is about racism? https://t.co/mSEkqVFFOg

2021-01-16 04:53:57 RT @alexhanna: Next Friday! We're talking organizing in and around tech, "diversity" in the tech workforce, and Big Tech's concentration an…

2021-01-16 03:01:05 RT @rctatman: Linguistics MLK Colloquium: Uneven success: racial bias in automatic speech recognition: Alicia Beckford Wassink, University…

2021-01-16 02:13:14 RT @aclmeeting: Ready to submit to ACL-IJCNLP 2021? You can go to: https://t.co/JFMq7Ejk5T The abstract due is Jan 25, 2021 and paper due F…

2021-01-16 00:27:28 @jane_kjut I've DMed you!

2021-01-16 00:03:41 Also, if you have questions about how a linguist would fill this role (and you are interested in applying), please get in touch!

Discover the AI Experts

Nando de Freitas Researcher at DeepMind
Nige Willson Speaker
Ria Pratyusha Kalluri Researcher, MIT
Ifeoma Ozoma Director, Earthseed
Will Knight Journalist, Wired