Inioluwa Deborah Raji

AI Expert Profile

Nationality: 
Nigeria
AI specialty: 
AI Ethics
Current occupation: 
Researcher, Google
AI rate (%): 
42.02%

TwitterID: 
@rajiinio
Tweet Visibility Status: 
Public

Description: 
Raji has a deep desire to make AI ethics an accessible practice. Her ethics research focuses on the racial biases produced by artificial intelligence systems whose training relies on data that underrepresents minorities. She has made major contributions in this field, leading major AI companies and American universities to revise their audit processes before commercial deployment. She believes that one overlooked aspect of major language models is the threat posed by the leakage of input data.

Recognized by:

61 AI Experts have recognized her

The Expert's latest posts:

Tweet list: 

2024-11-28 18:08:16 RT @mayameme: ⁦@Abebab⁩ has an AI accountability lab now and is Hiring | AI Accountability Lab https://t.co/Qi2PEkTceQ

2024-11-28 16:02:36 So happy for Abeba! A huge milestone for her AI accountability work! https://t.co/z4NgWEFfyp

2024-11-28 00:12:28 @HellinaNigatu @alsuhr @sarahchasins @monojitchou Congrats!! Lol dying at the iconic African presentation template

2024-11-26 23:51:55 RT @Knibbs: Exclusive: A new analysis found that more than 50% of LinkedIn blogs are written with AI. For anyone who spends time on LinkedI…

2024-11-26 20:44:59 Excited to see this - a solid hire for US AISI! https://t.co/tAZjMxq9z3

2024-11-24 01:26:19 RT @sebkrier: Do you think external third party model testing is important? Do you have experience working on frontier safety (e.g. CBRN),…

2024-11-24 01:14:31 Work with Marissa! Genuinely one of the best people to learn from on how to translate data science work into legitimate impact! https://t.co/SipdKX62nq

2024-11-24 01:13:21 RT @mkgerchick: ACLU is hiring an Algorithmic Justice Fellow to work on cutting edge projects focused on digital rights — come work with us…

2024-11-23 15:51:44 @nrmarda Hm yeah I get that but why can't "AGI" be a network of less capable models? Or a more accessible, more usable lower capability model, etc? Like imo even by their own definitions it's worth monitoring model &

2024-11-23 15:41:46 @YJernite Wow, what? Adoption could be as simple as number of consumer and enterprise users

2024-11-23 01:47:12 .. why can't AI product risk categories operate the same way? Clearly the risk of ChatGPT and the like is linked to the scale of its adoption, which domain it gets deployed into, etc. Genuinely curious about why this happened - wonder if this is one of those arbitrary anchors.

2024-11-23 01:37:54 Don't get why AI Safety Frameworks only focus on risk being correlated to increases in "capability" (ie how much an individual model can do) vs other things (eg. the scale of adoption/impact, domain of use, etc)? For eg., the DSA classifies risk on platforms by number of users

2024-11-22 19:24:50 RT @KLdivergence: Hi! I'm hiring a Research Engineer to join my team at Google DeepMind for the year. You'd be working with a great, interd…

2024-11-22 19:24:11 @thegautamkamath Fwiw this is really not what ethics review is for

2024-11-22 18:51:55 @jessicadai_ Oh ok one sec one sec

2024-11-20 11:57:05 This is .. alarming to say the least. The bureaucratic over-scrutiny of medical insurance claims (via ~50 algorithms ?!) in order to systematically deny mental health care. https://t.co/NJFtT12zgO

2024-11-20 05:24:46 RT @Manderljung: The EU Commission is looking for a Lead Scientific Adviser for AI. Would strongly encourage technical folks apply. Giv…

2024-11-19 01:00:51 @iajunwa @emory @EmoryLaw Congrats!

2024-11-18 17:30:30 Great to see our paper w @HellinaNigatu (https://t.co/xyPqPF5s6r) mentioned in this @WIRED article: https://t.co/DSYEl8UxX2 https://t.co/DzFtLfXRtF

2024-11-15 00:50:31 Anyways, I am also in the other place (bsky!) - same username @rajiinio :)

2024-11-15 00:47:56 Whoa - the examples in this thread are kind of concerning. Is X trying to encourage actual ad purchases by making it seem like more accounts are advertising on this platform than there actually are? That would be so strange &

2024-11-15 00:41:08 @adash0193 Yeah, np - thanks for flagging!

2024-11-15 00:28:48 @adash0193 Whoa, that's super weird... Yeah I most definitely didn't buy an ad

2024-11-15 00:15:39 Ah, so excited to see that this paper won an Outstanding Paper Award at EMNLP! I've learnt so much from @HellinaNigatu about how to think about the complex politics of "low resourced" languages

2024-11-13 02:16:10 @MFGensheimer @zakkohane @AMIAinformatics @CALonghurst @UCSDHealth @doc_b_right @CedarsSinai @UCBerkeley @NEJM_AI I disagreed with that too actually. I think AI products are much more similar to medical devices than drugs, &

2024-11-12 16:27:31 Excited for this! It's too easy to see "values" in ML design, development &

2024-11-11 03:15:09 RT @zakkohane: How To Put The Missing Human Values Back Into AI: Looking forward to our panel @AMIAinformatics #AMIA2024 Tuesday https://t.…

2024-11-08 01:04:38 RT @_ahmedmalaa: Please retweet: We're recruiting PhD students at UC Berkeley and UCSF! Please apply if you are interests in machine lea…

2024-11-06 22:00:37 @Ket_Cherie At the time it made sense but ultimately a new crop of rules will need to come from legislative interventions in order to be harder to reverse long term. We can't rely on executive interpretation as the main mechanism for defining new AI guardrails. + hope you're well, as well!

2024-11-06 21:56:58 @Ket_Cherie Yes, all regulation is dependent on the executive branch for enforcement but there was a lot of rule-setting happening at that level for AI policy in particular. Different agencies and the WH were re-interpreting or updating existing rules to cover the needs of dealing with AI -

2024-11-06 19:27:58 AI policy is, at present, way too dependent on a cooperating executive branch. Part of this trend was pragmatic (ie agencies hold the tech expertise, legislation is slow &

2024-11-01 01:36:51 RT @sarameghanbeery: FAT BEAR WEEK!!!! Happy Halloween from the BEAR-y Lab https://t.co/UbLcYTT2CT

2024-10-29 18:24:34 @QVeraLiao @UMichCSE Congrats, Vera!! Excited to see what you do in this new role!

2024-10-29 18:18:37 @zephoria Congrats, danah!! Your students will be so lucky to learn from you

2024-10-29 16:11:00 @mathver Lol the way I held my breath

2024-10-29 16:01:03 RT @mathver: Today the European Commission proposed how Art. 40 of the Digital Services Act (#DSA) could work in practice. In a worldwide f…

2024-10-29 15:46:20 RT @FAccTConference: We've released the CFP for #FAccT2025, which will be held in Athens, Greece! Abstracts are due on January 15th, pap…

2024-10-28 17:25:40 RT @ShayneRedford: Webinar on The Future of Third-Party AI Evaluation starting soon! At 8 am PT / 11 am ET join the zoom link here: ht…

2024-10-28 15:00:04 RT @kevin_klyman: Starting in half an hour - check out our workshop on the future of AI evaluation! Co-organized with @ShayneRedford, @saya…

2024-10-28 14:12:53 RT @2plus2make5: Please retweet: I am recruiting PhD students at Berkeley! Please apply to @Berkeley_EECS or @UCJointCPH if you are intere…

2024-10-28 14:10:32 @_JacobRosenthal @LiamGMcCoy I agree, actually, and, if you haven't yet, I encourage you to check out the linked article! So many of these issues can be avoided with greater caution in deployment &

2024-10-27 18:30:34 RT @AP: Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said https://t.co/mRjYfdxWgR

2024-10-24 15:01:31 @IanArawjo @jeffbigham Lol yeah I used to have convos where ml researchers were clearly collecting human subject data, sometimes even doing *interviews*, and still insisting they did not require an IRB Also explaining that "I've never done it before" is not a reason to keep not doing it

2024-10-24 14:54:03 @jeffbigham depends on venue or the type of paper, but iirc the checkbox items are of the sort "did you do an irb and mention this in the paper"? Esp for data related work, I think it shifted norms to be as explicit as possible (ie. No one wants to deal w it coming up in ethics review lol)

2024-10-24 14:17:52 @jeffbigham fwiw in the ml context (eg. Neurips, ICML, ACL), before ethics reviews / checklists were implemented, no one mentioned IRBs because no one was doing them lol so perhaps a good problem to have? Aha

2024-10-24 14:14:05 RT @kanarinka: I’m so thrilled and honored that Counting Feminicide won the @amerbookfest award for best book in the Women’s Issues/Women’s…

2024-10-24 14:11:29 @kanarinka @AmerBookFest Congrats!!

2024-10-22 16:23:15 RT @canfer_akbulut: I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf ! We survey the state…

2024-10-22 14:33:08 RT @CamilleAHarris: I’m here at @AIESConf presenting on my thesis work for the 6pm poster session, if you’re here come say hi! #AIES2024 ht…

2024-10-21 23:16:47 RT @leahanelson: No one: Me: I have done a Science! My first-ever conference paper is now live at the AAAI/ACM Conference on Artificial In…

2024-10-21 22:45:40 @RishiBommasani Oh, nice! Yeah I recall that there were a few things we couldn't add to Model Cards bc of the legal context of Google at the time. @mmitchell_ai probably has more to say on this but it's been great to see efforts evolve to focus on other priorities beyond responsible innovation.

2024-10-21 22:28:35 This is super great to see - Model Cards was published by a team at Google, Datasheets published by Microsoft, Factsheets came from IBM, etc. While undoubtedly useful as AI transparency mechanisms, it's useful to reflect on these origins as they evolve into policy doc templates! https://t.co/y76WcYr6qE

2024-10-17 03:35:36 RT @amifieldsmeyer: Let's work together: I'm researching a new U.S. tech policy agenda that closes the gap between a few big companies and…

2024-10-16 01:01:28 @Wenbinters @ConsumerFed Whoa, congrats, Ben excited to see what you do there

2024-10-15 15:34:22 @deepfates Lol @sebkrier this is the 30% of adults / users you are defending

2024-10-15 14:32:15 @thegautamkamath @NYU_Courant And don't worry, I hear the pizza is not too bad in NYC of all places

2024-10-15 14:28:05 @thegautamkamath @NYU_Courant Congrats, Gautam!

2024-10-15 06:18:59 @Miles_Brundage Omg, congrats Miles!!

2024-10-13 21:22:55 @boazbaraktcs @MelMitchell1 Yeah I agree the actual wording of claims in the paper is not great

2024-10-13 16:30:12 @boazbaraktcs Imo what this paper is saying is not that there don't exist any tricks to get your model to solve the math problems at hand

2024-10-13 16:27:33 @boazbaraktcs I don't doubt that model perf can improve w prompting but when anyone says "we do this well on this benchmark &

2024-10-13 16:24:14 @boazbaraktcs Ops I missed this earlier - but I don't quite agree. I think user testing over a representative distribution of prompts is one thing

2024-10-13 16:02:29 @sebkrier @sleepinyourhat I just don't think those achievements are what people expect

2024-10-12 17:57:29 How has this been 2-5 years away for like 5-7 years now? Even longer if you forget the large language model thing and anchor on previous definitions of "AGI" (cnns &

2024-10-12 15:42:51 "A.I. cut the number of students deemed at risk.." Not the nyt finally using the active voice... for A.I. https://t.co/jZ5a6fClWL

2024-10-11 16:34:46 RT @sayashk: How can we enable independent safety and security research on AI? Join our October 28 virtual workshop to learn how technica…

2024-10-11 16:29:52 Having a static text question-answer pair for LLM evaluation increasingly makes no sense - what matters is what models do when important features (i.e. key inputs) &

2024-10-11 16:18:33 @boazbaraktcs Like, sure, we could do some prompt hacking to get to the right answer eventually but it's a bit unsettling that the baseline performance is kind of fundamentally misreported/ unpredictable, and certainly fails in ways we'd never expect a human to fail

2024-10-09 18:19:16 @PeterHndrsn Thanks, very helpful context!

2024-10-09 16:26:54 @PeterHndrsn What do you think of these remedies? I feel like providing external access to AI products is a very small fraction of anti-trust concerns, and was really surprised not to see more on the exploitation of their disproportionate control/ self promotion in advertising &

2024-10-04 14:27:29 RT @mona_sloane: Yesterday was a big day for #AI procurement, one of the most important ways in which accountable tech can be enforced (in…

2024-10-01 18:41:19 @mmitchell_ai Oh no, I'm so sorry to hear this what a loss for the community, I remember how much energy he had at every gathering

2024-10-01 16:12:02 Yay, @ruha9!! Very well deserved https://t.co/9HcwCdbCZb

2024-09-30 14:51:15 @KellerScholl Aha, no worries -- and I hope your family is alright!!

2024-09-30 14:23:03 @ShakeelHashim @LocBibliophilia @GavinNewsom Yeah I think you can be skeptical of the letter but also if his goal was corporate signaling, he has no reason to not frame his letter to appeal to that crowd. The fact that it isn't framed that way says that wasn't necessarily his only or even main audience.

2024-09-30 14:21:04 @ShakeelHashim @LocBibliophilia @GavinNewsom There was a diverse coalition (inclu open source folks, academics) that did not support this bill. The outcome wasn't necessarily a capitulation to industry - many of those non-corporate opponents had legitimate reasons to object, which are named as part of Newsom's rationale.

2024-09-25 19:44:00 I'm so glad to see the FTC leaning into this as a strategy since their stern "warning" shot last year (https://t.co/OYLLTcvXol). It'll be interesting to see how these particular investigations play out over the next few years...

2024-09-25 19:40:15 False advertising is such a powerful argument for removing harmful AI products from the market. In 2015 (!), at peak computer vision hype, this strategy led to the removal of skin cancer detection apps plagued w robustness, accuracy &

2024-09-25 16:26:17 I keep seeing AI policy takes from folks that have clearly not read the bill text. Which, honestly, I can understand - bill drafts are boring! And long! But the core of policy debates are anchored to specific details...which you're likely to overlook if you don't just read it.

2024-09-25 16:14:47 This is incredible - each of these cases are AI scams that have been alarmingly normalized in the past couple years (including DoNotPay, a "robo-lawyer"

2024-09-24 19:10:44 RT @SenMarkey: I’m live from the Capitol to introduce the Artificial Intelligence Civil Rights Act. It’s time to ensure that the AI Age doe…

2024-09-24 19:08:10 RT @mlittmancs: I got to help shape this document, providing guidance about how AI researchers collaborate globally. It was unveiled at the…

2024-09-24 19:07:38 @mlittmancs Wow, this looks incredible - thanks for your work on this!

2024-09-24 19:05:25 RT @geomblog: It's great news that the AI and Civil Rights Act has been introduced. Kudos to @SenMarkey and all the cosponsors. This has pe…

2024-09-24 15:03:52 @aylin_cim Congrats, Aylin!! Well deserved

2024-09-23 05:18:47 RT @geomblog: Great piece by @SerenaOduro from @datasociety on the importance of an expansive notion of AI safety that includes pressing co…

2024-09-19 14:20:48 RT @karen_ec_levy: Returning from perpetual Twitter hiatus to spread the word: @CornellInfoSci is hiring! Tenure-track hires at all levels…

2024-09-18 14:53:16 I really love this - it captures what most frustrated me when I took this class. Some problems are easier to formally model - these are the scenarios in which optimization methods "work". But there's so many other types of problems where we're pretty much just fooling ourselves. https://t.co/AYyN9LvuGI

2024-09-18 04:22:36 RT @mmitchell_ai: Can you imagine working in a company that not only supports you, but celebrates you? Feeling all kinds of gratitude for…

2024-09-18 00:56:54 @mmitchell_ai @huggingface Yay, Meg! Excited to see this

2024-09-13 22:14:08 RT @thegautamkamath: Have a nice paper on secure and trustworthy ML? Consider sending it to SaTML! Note that the new deadline is one day a…

2024-09-13 22:08:23 RT @megyoung0: Mike led our work in Seattle with community-based organizations like @ACLU_WA @DenshoProject @CAIRWashington. To honor Mik…

2024-09-13 22:08:08 @megyoung0 @MikeKatell Oh no, so sorry to hear this, Meg

2024-09-13 04:42:35 RT @charlesxjyang: And its live! Our Request for Info on DOE's Frontiers in AI for Science, Security, and Technology (FASST) initiative, wh…

2024-09-12 18:10:57 RT @mmitchell_ai: Honored to participate in Senators Blumenthal &

2024-09-11 23:31:52 RT @nmervegurel: Several new dataset and benchmark papers have been accepted to the DMLR Journal recently! Follow @DMLRJournal for updates

2024-09-09 23:37:32 RT @verityharding: Very cool press fellowship opportunity from @techpolicypress who do fantastic AI journalism—check it out: https://t.co/

2024-09-09 19:52:03 RT @alokpathy: Hi all prospective grad students! Our Equal Access to Application Assistance (EAAA) program for @Berkeley_EECS is now accept…

2024-09-09 17:55:20 This is such a unique opportunity for anyone working at the intersection of CS, policy &

2024-09-09 15:38:38 RT @esme_harrington: So wonderful to attend this Data Fluencies workshop in NYC, exploring the data politics at the heart of AI! A wonderfu…

2024-09-06 15:35:37 RT @mozilla: While we couldn't save @CrowdTangle, we're happy to see that @Meta has now eased its Content Library API access requirements,…

2024-09-05 18:22:50 RT @minilek: https://t.co/lzX6PGyYiI Sep 16th application deadline. UC Berkeley "seeks applicants for four tenure-track (Assistant Profes…

2024-08-15 05:32:40 This reveals so much about how little we meaningfully discuss data choices in computer science education. Data are at the locus of pretty much every tech policy issue - labor, bias, environmental, copyright, privacy, security, toxicity, safety, etc. It is literally politics! https://t.co/EnbfBqKoVy

2024-08-14 20:02:01 + of course, I learnt so much working with @judyhshen &

2024-08-14 19:58:30 Anyways, it was a joy to get to finally dig into a topic like this that I've been curious about for a while now! Practically, I feel like data scaling is so much more complicated a phenom than "more data = better" &

2024-08-14 19:56:53 @Aaron_Horowitz Blame Reviewer number 2 you gotta give the gatekeepers what they want lol

2024-08-14 19:51:31 In those settings, there's a trade-off btw a perf dip due to increasing distribution shift &

2024-08-14 19:47:10 Or at least, it isn't *always* true.. there exist situations where adding more data can lead to *worse* model outcomes! We called this the "data addition dilemma" &

2024-08-01 20:25:32 RT @HellinaNigatu: Excited to be featured by CDSS!

2024-07-31 13:51:34 RT @weidingerlaura: Had an exciting day seeing the @WhiteHouse from the inside to talk about sociotechnical AI safety research! A star-stud…

2024-07-31 13:51:27 @weidingerlaura @WhiteHouse Incredible, Laura! Lol you're wearing a collar and blazer aha very proud

2024-07-26 14:35:25 RT @ChrisCoons: Yesterday @SenBillCassidy and I, along with 15 of our colleagues from both chambers of Congress, sent a bipartisan letter t…

2024-07-25 19:52:22 RT @BerkeleyISchool: HIRING: The University of California, Berkeley seeks applicants for four tenure-track (Assistant Professor) positions…

2024-07-19 05:02:46 RT @DrMetaxa: #FAccT25 will be happening in Athens, Greece! The GCs (myself included) are looking for PhD students interested in paid…

2024-07-11 11:42:32 Such a nice and comprehensive resource for policy-makers trying to make sense of LLM limitations in multi-lingual contexts. This impacts not just international user experiences, but also diaspora and immigrant experiences within the US (eg. https://t.co/o4aE2WIxuo). Important! https://t.co/zGmmqeKbjd

2024-07-11 00:55:15 RT @sarahookr: Does more compute equate with greater risk? What is our track record at predicting what risks emerge with scale? I don't…

2024-07-10 10:00:15 @KellerScholl The nurses were striking (amongst other things) over concerns for patient safety - I think that's a serious disconnect if one crowd thinks this will save us all and the workers involved are saying it's causing more harm.

2024-07-09 20:07:13 @Aaron_Horowitz Yeah, I could write a whole separate thread on the specific thing they're advocating for &

2024-07-08 17:33:11 RT @dfreelon: If you study TikTok, have a look at my newly updated Python package Pyktok--I just added a few features you might find useful…

2024-07-08 15:46:55 RT @GabeNicholas: New op-ed from me in @ForeignPolicy! The premise: to regulate AI effectively, we need information about how people ac…

2024-07-03 12:24:54 RT @charlesxjyang: For anyone interested in critical and emerging tech policy, my DOE office is hiring a fellow! Can't say enough good thi…

2024-07-01 17:53:35 RT @PeterHndrsn: Super important! And to be clear it's not just Loper Bright (the Chevron decision). Several other cases in the last week,…

2024-07-01 14:42:17 RT @tribelaw: The 6-3 Corner Post opinion by Justice Barrett multiplied the harm done by Chevron’s overruling by effectively holding that t…

2024-06-30 14:43:12 RT @pulitzercenter: Apply to be part of the third cohort of the AI Accountability Fellowships. Don’t miss this opportunity to report in-de…

2024-06-30 01:39:21 RT @reshmagar: Chevron has been overruled by #SCOTUS. This is a dark day for public health &

2024-06-24 17:35:19 RT @CohereForAI: Tomorrow check out @HellinaNigatu and her presentation with our community-led Geo Africa Group! Learn more: https://t.co/

2024-06-24 16:37:55 RT @kevindeliban: Overdue focus on how low-income folks lose Medicaid and SNAP—with all the attendant devastation to their health—because a…

2024-06-24 14:35:03 Interesting to see an in-the-wild study on the use and impact of model cards! Even though there's clearly still a lot to do, it's great to see how far AI documentation has come. Very grateful for the leadership of @timnitGebru @mmitchell_ai in leading these efforts at the time https://t.co/4pgJ3egh8R

2024-06-23 16:27:24 RT @NeurIPSConf: NeurIPS 2024 is looking for AI Ethics Reviewers for submissions regarding risks and harms of the work. If you are inter…

2024-06-22 00:43:39 What I learnt from this (&


2023-05-22 19:43:56 @yonashav Yeah, happened in front of me in person at least twice .. I'm afraid that kind of behavior is fairly normalized within a certain kind of tech crowd

2023-05-22 17:50:10 Can't believe we live in a world where some would rather see an AI system as human before acknowledging the humanity of the marginalized actual people around them.

2023-05-19 22:08:33 RT @russellwald: There were multiple Senate AI hearings today. But only one focused on federal use of the tech. Congrats to my @StanfordHAI…


2023-04-18 07:07:57 RT @suryamattu: I am thrilled to finally announce this new partnership between Digital Witness Lab and @pulitzercenter.  https://t.co/UN

2023-04-16 14:35:30 RT @AlexCEngler: Good and interesting new letter from @AINowInstitute, @DAIRInstitute, and others on general purpose AI (GPAI) in the EU AI…

2023-04-14 20:05:36 @dcalacci Could not have said this better myself!

2023-04-14 17:10:23 +this reminds me of a hilarious recent interaction. Someone came in with the familiar argument: "This system is too big, too complex to audit" The rebuttal was gold - "wait, so why is it being deployed in the first place?" If a system can't be reliably evaluated, why allow it?

2023-04-14 17:10:22 I highly doubt anyone is advocating for companies to do nothing &

2023-04-14 17:10:21 It seems like there's some mainstream confusion on what algorithm audit policy actually involves - many "audit mandates" are really independent review mandates, meaning that the org produces an internal audit report that's shared with a hired third party or regulator to confirm

2023-04-14 13:26:16 I kinda see this as a false choice. Of course the onus should be on companies to provide data details &

2023-04-13 19:21:09 RT @AJLUnited: The government is still using IDme to access tax accounts after promising to stop after many complaints. Read @jovialjoy 's…

2023-04-13 18:58:10 RT @ambaonadventure: "GPAI models carry inherent risks.... (which) can be carried over to a wide range of downstream actors and applicatio…

2023-04-13 18:57:19 RT @ghadfield: OpenAI will pay you to join its ‘bug bounty program’ and hundreds have signed up—already finding 14 flaws within 24 hours ht…

2023-04-12 22:32:06 RT @b_schwanke: Still buzzing from yesterday’s @PittCyber’s convos w/ @NTIA and the really rich panel with @ellgood, @Wenbinters, @rajiinio…

2023-04-11 20:27:12 RT @PittCyber: Following comments from @DavidsonNTIA, a panel of experts including Ellen P. Goodman, @Wenbinters, @rajiinio, and Nat Beuse,…

2023-04-11 15:30:19 Excited to be on this panel today! Should be a great discussion about practical paths forward for auditing in AI regulation https://t.co/U1xhbC53rB

2023-04-11 15:29:27 RT @Wenbinters: Request for Comment on 'AI Assurance' (audits, impact assessments+++) from @NTIAgov is out!! https://t.co/lvued3yuXz 60…

2023-04-11 15:07:46 @emmharv @CornellInfoSci @allisonkoe @whynotyet Congrats, Emma! So excited for you

2023-04-11 03:36:30 RT @dcalacci: Friends! I'll be defending my dissertation tomorrow at noon. The talk is open to the public on zoom or in-person at the M…

2023-04-10 03:24:51 Also sometimes the best thing that can happen to a paper is to not get accepted! I've personally experienced this process of maturity via critique. IMO the quality of reviews were higher this year &

2023-04-10 03:17:49 RT @evijitghosh: In the wake of FAccT decisions, I’ve seen a few tweets similar to “If <

2023-04-09 17:01:45 RT @conitzer: I've been using a US version of this example (bar exam) but now this is being pursued in court in my native Netherlands! Als…

2023-04-09 17:01:39 @conitzer Thanks, Vincent!

2023-04-07 17:56:25 RT @mozilla: Securing a property can be a daunting task for renters, and many tenants face discrimination, keeping them from landing their…

2023-04-06 15:20:00 Interesting paper on an important topic! I first learnt about digital copyright by reading "The End of Digital Ownership" by @Lawgeek &

2023-04-04 20:23:03 RT @rachelmetz: A really smart, nuanced piece by ⁦@SashaMTL⁩. As she notes, ⁦@timnitGebru⁩, ⁦@ruha9⁩, ⁦@rajiinio⁩ (and many more!) have pus…

2023-04-04 20:20:55 RT @dlberes: Love this story by @Saahil_Desai, which examines the unique human value of political polling and the limits of AI + big data.…

2023-04-02 15:09:44 @pcastr I know this isn't what you asked for but you can actually convert your Chromebook into a Linux machine pretty easily - I did this in uni &

2023-04-02 13:51:21 @scottniekum @pcastr @sethlazar Anyways, at minimum, you're right that people shouldn't be using "AI safety" as a pejorative - other people will also use "wokeness"/"AI ethics" as an insult in the same way, and I've never seen that kind of discussion yield anything productive. Truly sorry that happened to you!

2023-04-02 13:45:14 @scottniekum @pcastr Though tbh, practical collaboration between the two groups will be hard &

2023-04-02 13:41:37 @scottniekum @pcastr Though you're right that the reason they do this (ie. in order to unilaterally prioritize problems of system control) is something generally neglected by the AI ethics folks &

2023-04-02 13:39:23 @scottniekum @pcastr Yeah, I agree with this! I also don't like that "AI safety" now ~ AGI folks, &

2023-03-31 19:29:45 @Miles_Brundage LOL

2023-03-31 16:46:53 RT @RMac18: In Nov, a GA man was arrested for a crime in LA, a state he said he'd never been to. He spent 6 days in jail. We found his arr…

2023-03-30 19:57:30 @Aaron_Horowitz @mmitchell_ai "I'm sure they will be fine"

2023-03-30 18:15:00 @ImpossibleScott @mmitchell_ai Yeah, I say cynical because it's not the most generous characterization of those folks. It's not just billionaires that believe in the AGI doomsday scenario but also anyone that they've successfully convinced

2023-03-30 17:39:32 I've said this before but I really do hope for their sake, that there can emerge some safe forms of internal accountability within that group. Sometimes it can feel like watching the pied piper - it is fundamentally risky to not be able to question the decision-making of leaders.

2023-03-30 17:28:23 + as usual, even stranger than the offense is the community's complicated justifications &

2023-03-30 17:17:09 If you are casually advocating for air strikes in response to *anything*, then you clearly underestimate the tragedy of war. In the midst of all that's been happening in Ukraine/Russia, Israel/Palestine, it was truly horrible to read something like this: https://t.co/lxa38lvYPE https://t.co/oES494XsdT

2023-03-30 17:04:23 @cristina_elisav @mmitchell_ai Yep - and the same also applies to "AI safety", which can be broken down into even smaller sub-groups &

2023-03-30 15:45:33 @mmitchell_ai Again, this is my most cynical take lol, I get that it doesn't apply broadly. But I've had convos w/ participants in the AGI crowd where I've had to gently remind them that Black ppl, disabled ppl, etc exist, &

2023-03-30 15:41:00 @mmitchell_ai In my more cynical moments, I think there's another dimension of this as well - AI ethics folks typically talk about minority/marginalized populations that some technologists don't even want to acknowledge exists, while the AGI doomsayers are only really talking about themselves.

2023-03-30 15:36:05 @mmitchell_ai it's because of the functionality fallacy imo - AI safety fears are anchored to the myth shared by companies that the technology works and will only get better

2023-03-23 16:37:07 RT @PittCyber: Looking forward to a compelling conversation with @NTIAgov's @DavidsonNTIA and experts like Nat Beuse, Ellen Goodman, @rajii…

2023-03-22 22:28:55 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference (3) empowering civil society to scrutinize &

2023-03-22 22:26:32 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference For completeness, a summary of what I had said - there are potentially several ways for increasing participation in audit work: (1) involving broader perspectives in defining standards &

2023-03-22 22:12:35 @mer__edith lol truly wild

2023-03-22 22:11:40 RT @iamdaricia: The @mozillafestival has assembled a mix of truly engaging sessions this week but I want to highlight this panel on the alg…

2023-03-22 21:50:14 @mer__edith wow this is so interesting... how do you see the role of tech cos change over time from your view? Were they ever considered good guys? Or was the shift more from "under-estimated" to problematic?

2023-03-22 21:45:05 @mer__edith fwiw my reference is mid-2000s Global Network Initiative (GNI) type papers, where the main narrative was on government authoritarian abuse of the Internet. It's not that these issues don't exist, but even today, those movements have a lot of faith in the tech companies as allies

2023-03-22 21:41:30 @mer__edith That's interesting..some of my reading was that there was also a lot of concern for governments taking control of the internet in ways we don't really discuss today, ie. "China" vs. "US" narratives on internet ownership - some people seemed to see tech cos as allies in that fight

2023-03-22 21:38:29 @baricks @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference @bgeurkink

2023-03-22 21:37:58 @baricks @MarcFaddoul @dcalacci @seanmcgregor @Abebab @DJEmeritus Thank you! How am I not already following everyone? lol

2023-03-22 21:37:13 RT @Borhane_B_H: Fantastic end to the @mozillafestival OAT session: “The pain points for algorithmic audit tools to address are far from pu…

2023-03-22 21:36:44 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference also LOL @sherrying we will go to an art museum soon

2023-03-22 21:35:56 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference oops, got caught up in the discussion but a full recording of the discussion can be found here: https://t.co/TL4ZEoJN8L

2023-03-22 21:27:31 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference "There was a storytelling campaign about what people were experiencing on the platform", Becca adds. "I'm interested in - 'How do we develop audit ecosystems that are more of a feedback loop? How do we make things more participatory and bi-directional?"

2023-03-22 21:20:43 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference Brandi talks abt how advocacy vs. academia reaches different audiences due to different "communication strategies" and how that impacts how things are being used. She mentions that the Mozilla youtube study was one of the few cited in legal work, despite academic work being avail

2023-03-22 21:19:06 @MarcFaddoul @dcalacci @seanmcgregor @Abebab He discusses how presenting the community keynote @FAccTConference allowed his team to interact and connect with not just CS folks but also lawyers, community leaders, etc.

2023-03-22 21:18:07 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Victor asks, "how do we translate audit outcomes into actual accountability?" Dan responds "Make it embarrassing" lol

2023-03-22 21:17:14 @MarcFaddoul @dcalacci @seanmcgregor @Abebab "The ideal thing to do for online platform audits would be to have something integrated into the browser for incident reporting,.. this would be something that scales completely."

2023-03-22 21:14:39 @MarcFaddoul @dcalacci @seanmcgregor @Abebab "Open source *audit* tooling is something we looked at in particular since it's something that has been relatively underserved to date. It's an important area but often overlooked." He mentions how many of the projects are still works in progress &

2023-03-22 21:12:56 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Now Mehan is discussing MTF and how "open source auditing and open source tools being used for auditing" is one way to address these serious issues of trust. "The challenges are not necessarily unique to audit tools, but reveal problems with open source development as a whole,"

2023-03-22 21:10:58 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Becca Ricks adds, "As these researcher APIs and public APIs are being crafted in response to the DSA, we need to understand what effect this can have on the quality and type of audit work that comes out.." She discusses using the youtube API, but "not in the way they intended"

2023-03-22 20:39:23 HAPPENING NOW! https://t.co/wogfMnplSY

2023-03-22 18:05:43 This is actually so important. I was talking about this just the other day - the net neutrality movement was so worried that certain governments would attempt to take over the Internet, but all along.. it was the companies. A small set of cloud providers already own the Internet. https://t.co/OtpQXxZ6g5

2023-03-22 09:59:46 RT @schock: Freaked out by #GPT4? Wondering how to rein in powerful new #AI technologies? Proud to share I'm a coauthor, w/ @jovialjoy &

2023-03-21 20:48:43 OAT team members @OjewaleV &

2023-03-21 19:52:06 RT @OjewaleV: Looking forward to the panel session on Navigating the open-source algorithm audit tooling landscape at #MozFest tomorrow!…

2023-03-18 02:51:01 @DanielTrielli Congrats, Daniel!!

2023-03-17 15:50:21 Working with Abeba for OAT has been amazing - she has the kind of insights that make you pause everything and start over lol Could not recommend anyone more! https://t.co/ZRIIR1YesE

2023-03-17 13:58:35 RT @augodinot: @rajiinio @mozilla @trackingexposed @ErwanLeMerrer Thanks ! Then you might also like https://t.co/DbdhR0Tgh7

2023-03-16 13:03:34 @RemmeltE @NathanpmYoung @xriskology @ruha9 @timnitGebru @emilymbender @safiyanoble hm fwiw they do cite "Algorithms of Oppression" &

2023-03-16 13:00:00 @npparikh Sure - but just so you know, FRVT (running since at least 2013) is much older than Gender Shades (published 2018). And FRVT only started measuring demographic effects in 2019, citing the followup work to Gender Shades as direct inspiration for that. Age != influence &

2023-03-15 18:35:27 Simple risk assessments are still some of the most catastrophic AI deployments in the country today. Yes, most of these are nothing more than simple linear regression ++, but their influence on decision-making drastically alters the lived experience of millions of Americans. https://t.co/2yfFLrZxX1

2023-03-15 15:02:51 RT @Aaron_Horowitz: New AP news story out based, in part, on our work. It's a good reminder of why we spent so long auditing AFST- real fam…

2023-03-15 14:55:18 RT @merbroussard: I haven't talked about it much until now, but I had #breastcancer recently. I'm fine now thanks to excellent, hi-tech med…

2023-03-14 22:45:23 RT @_anthonychen: Under-rated is how hard it is to create datasets that stand the test of time. And DROP from my labmate @ddua17 has done j…

2023-03-14 22:30:58 @nsthorat ohhh - oh yes, this is a great point. I used to think those using these models downstream would naturally be doing this kind of local testing but in esp any low tech / low resource setting that doesn't seem to be the case. It's hard to design and build a meaningful benchmark :(

2023-03-14 22:13:40 @npparikh Not claiming GS was state of the art (it wasn't designed to be)! But FRVT evolved post-GS to include demographic analyses + a lot has happened since then on evolving testing procedures to reveal previously ignored weaknesses in the tech. An illustration of how imp benchmarks are.

2023-03-14 22:11:14 @alexhanna my nightmare!!

2023-03-14 22:10:54 @nsthorat how to..? cliffhanger lol

2023-03-14 21:28:34 I feel the same way about these large language models. Let's be serious - we won't be using these models to pass the bar, and that's not even what they're pitching them for. The actual applications are much more complicated and completely untested for with the current benchmarks.

2023-03-14 21:26:34 For a while, facial recognition was pretty much considered a solved problem because the benchmarks at the time made it seem like a solved problem. Then, challenges like Gender Shades and such came along, revealing the problem was actually a lot more complicated &

2023-03-14 21:13:43 LOOL https://t.co/wNarXpTi5m

2023-03-14 14:28:08 @merbroussard Congrats!!

2023-03-12 19:56:02 @augodinot @mozilla @trackingexposed @ErwanLeMerrer Nice resource! Thanks for sharing

2023-03-12 19:55:30 RT @augodinot: @rajiinio @mozilla Nice to see @trackingexposed in the list ! Might want to add some of these in https://t.co/K1GzD3vHyj @Er…

2023-03-11 23:43:19 ICYMI @mozilla has announced the amazing cohort of grantees for the Mozilla Technology Fund! Such a diverse set of audit tooling projects being supported through this program. https://t.co/8F4Azr0Mse

2023-03-11 03:45:01 @russellwald Yeah I agree there's a practicality to the approach but also wondering if we can get more ambitious about how to get to meaningful oversight! Lots of other technical industries (eg automobile, medical devices) are less reliant on industry cooperation. Though some (aerospace) are.

2023-03-10 15:29:48 @BlackHC Dang, sorry to hear - this is all kinds of disappointing.

2023-03-10 00:29:00 @BlackHC Wait, might be a silly question but why not work together on the idea?

2023-03-09 23:42:03 RT @dinabass: "“We’re talking about ChatGPT and we know nothing about it,” said @huggingface's Sasha Luccioni, who has tried to guesstimat…

2023-03-09 07:21:07 @andrewthesmart Ohh hm this looks interesting - thanks for sharing, will check it out!

2023-03-09 03:53:45 @ziebrah I wonder if we are reviewing the same paper rn lol

2023-03-09 03:53:08 @__lucab @UCBerkeley @GoldmanSchool @CITRISPolicyLab Wahoo! Let's get a coffee whenever you're around &

2023-03-08 18:09:51 @ecrws Yeah, this is an interesting point - I think there was the same critique levied at the use of ethical licenses for open source AI projects. A step in the right direction but a limited intervention, for sure.

2023-03-08 18:07:02 @jdp23 lol exactly. Lots of great work happening internally ofc, but also lots of corporations that can't be trusted at face value to provide reliable info on these things.

2023-03-08 18:04:47 @ziebrah why are we talking about regulation at all if incentives are so aligned??

2023-03-08 18:04:26 @ziebrah lol this is it!!

2023-03-08 18:03:52 @BlancheMinerva Also, you won't get any resistance from me on the "we need external evaluation" front, but afaik, this isn't what OP is proposing. Here, as w/ most industry consortium attempts, they talk about "consulting" independent researchers to set standards, not allowing them to get access

2023-03-08 17:45:31 @jdp23 Yeah it's not like Facebook hasn't already misrepresented data presented to external stakeholders in the past or anything... lol https://t.co/S43y4wkFoQ

2023-03-08 17:33:25 Tech policy proposals that depend heavily on the voluntary cooperation of the tech companies being regulated are so frustrating to me. I get that there are many cases where "incentives align" but without meaningful external oversight, I'm immediately suspicious.

2023-03-08 17:21:09 @ecrws Curious what you mean here? As in "of course - this is the bare minimum" type thinking or something else?

2023-03-08 17:18:42 @BlancheMinerva lol it's not a take - it's just that people kind of tried the "self-regulatory consortium" thing before but when it happened no one really paid attention to what they had to say. My take is that regulators and civil society should be involved in setting actual legal guardrails.

2023-03-08 15:47:13 Most people forget that this already kind of happened last summer - Cohere, OpenAI, and AI21 Labs released a joint statement on guidance for large language models but it mostly slid under the radar: https://t.co/FFzR6Lblvm https://t.co/8gvmywYt27

2023-03-08 01:07:15 RT @glichfield: Today @WIRED runs the final two instalments in "Suspicion Machine," our joint investigation with @LHreports into how algori…

2023-03-08 00:35:20 @OjewaleV @brianavecchione @mozilla

2023-03-07 01:46:01 @acidflask @NeurIPSConf Yay! Congrats Jiahao

2023-03-06 23:56:19 RT @gabriels_geiger: In June of 2021, I sent a public records request to the city of Rotterdam. I wanted the code for an algorithm the city…

2023-03-06 14:33:46 I was more excited about Victor's acceptances than my own. So excited to see that our research assistant for the @mozilla OAT project, @OjewaleV is headed from Nigeria to do a CS PhD in the US! One to watch!! https://t.co/qLsG6V9oUA

2023-03-06 02:57:48 Big fan of Rishi's work on this! If your data happens to be mislabeled/misunderstood by one foundation model that's used widely, then you're kind of screwed. https://t.co/5SE16OzvlN

2023-03-06 02:13:28 RT @irenetrampoline: One paper to recommend on societal bias in ML, health, and science? - @judywawira: "Reading race" Banerjee et al - @…

2023-02-28 03:04:42 @hlntnr

2023-02-28 02:59:07 RT @random_walker: The FTC says it will ask a few q's about AI-related claims: –Are you exaggerating what your AI product can do? –Are yo…

2023-02-28 02:56:56 Ah, great news! https://t.co/6RCaiFcElg

2023-02-20 15:53:51 @kdpsinghlab @RoxanaDaneshjou Curious - what do you consider reasonable use cases?

2023-02-19 15:07:56 Some version of "build an FTC office to focus specifically on tech issues" has been pitched in many of the current bills on algorithmic oversight - Algorithmic Accountability Act, PATA, Digital Services Oversight &

2023-02-19 14:14:11 RT @stephtngu: We are proud to announce the creation of the Office of Technology at @FTC, a team that will provide technical expertise acro…

2023-02-17 17:15:27 RT @geomblog: So much tech/policy news: the latest is @FTC setting up a new office of technology to help with FTC actions. This is amazing.…

2023-02-17 15:30:52 RT @harlanyu: This is a big deal: today's new @WhiteHouse EO on racial equity instructs federal agencies to affirmatively address emerging…

2023-02-15 20:16:03 @akanazawa Congrats, Angjoo!

2023-02-15 16:30:45 RT @conitzer: Hurray, the call for papers for the AI, Ethics, and Society conference @AIESConf is out! Deadline March 15, conference August…

2023-02-15 15:31:15 @chrmanning @npparikh Also...didn't you &

2023-02-15 15:28:23 @chrmanning @npparikh GPT-3 was a *controlled* release, mediated via the OpenAI API, and I do believe that oversight prevented many inappropriate uses. IMO the concerns about GPT-3 weren't overblown - I think they informed a caution and concrete measures that protected us from those potential harms.

2023-02-15 00:34:48 @zacharylipton @andrewgwils @STS_News has actually written about this. He called it "criti-hype": https://t.co/0rWwo5zzHp

2023-02-14 14:34:10 Been thinking recently about expectations of evidence for hype vs critique. AGI people are literally operating in a realm of pure speculation yet they are so easily believed. Others will spend months on the ground, only for the concerns they surface to be dismissed as anecdotes. https://t.co/ZqArPDRg1z

2023-02-14 13:38:38 @RWerpachowski @mikarv The subsequent legal documents are all directly or indirectly derivative from that early work in 2020 - more importantly though, I shared the article bc he discusses an unchanged culture in EU policymaking of relying on a certain set of partial &

2023-02-14 13:25:49 @RWerpachowski Lol, please don't take my word for it. @mikarv looked into the "expert group" that shaped that directive - far from a qualified &

2023-02-14 13:16:51 @RWerpachowski Hm, not true from my experience. I attend many policy roundtables &

2023-02-14 13:13:36 Now, this is not to say methodology and rigor do not need to improve - I will be the first in line to challenge the quality of evidence we typically tolerate - but it's clear to me that this isn't a community to be dismissed - their perspective is an essential counter-balance.

2023-02-14 13:05:44 Something many often don't consider when discussing "Ethical AI" is the power differential - there is a multi-billion dollar apparatus marketing this technology as flawless and only recently has a critical mass of scholars &

2023-02-14 12:54:10 A false dichotomy. "Generative AI" can be fun &

2023-02-11 04:54:29 @NicolasPapernot @satml_conf @carmelatroncoso Congrats on putting this together!

2023-02-10 16:38:54 RT @MichelleCalabro: Thinking about shared responsibility between people and the systems we create. “It’s so much easier to point to an al…

2023-02-09 17:45:54 RT @dfreelon: As announced last week, Twitter will eliminate free access to its APIs this Thu (Feb 9). This thread collates alternative sou…

2023-02-08 19:56:40 RT @random_walker: Fascinating audit of social media "raciness" classifiers that don't understand context and are massively biased toward l…

2023-02-08 17:47:30 AI fairness for social bad https://t.co/meCkTTh7mK

2023-02-08 17:46:17 RT @b_mittelstadt: New piece in @WIRED on the harms of algorithmic fairness in #AI &

2023-02-07 21:35:36 RT @togelius: Nice article in @TheAtlantic about AI game playing and what it's for, including quotes from @polynoamial, @rajinio, @yannakak…

2023-02-05 22:28:19 Lol from my experience, this isn't only happening with students... https://t.co/aluYmJeBgx

2023-02-05 15:11:45 @yoavgo "Human-level" intelligence as a goal is strange though - there's many useful things humans are horrible at &

2023-02-05 14:51:11 Most incredible thing about having Alondra Nelson at the helm of OSTP was the impact of her socio-technical expertise. Here was someone that took time to deeply understand the science *and* the people - even as an outsider, I could see how much that benefitted her policymaking. https://t.co/OOr2fxsdVT

2023-02-05 14:37:14 @AlondraNelson46 @WHOSTP @POTUS @VP Thank you so much for your service!! An inspiration for years to come

2023-02-04 16:40:32 RT @jimtankersley: NEW: Black taxpayers are 3-5x more likely than everyone else to be audited by the IRS, a product of algorithmic discrimi…

2023-02-04 16:32:55 A pioneer! Thank you so much for your contribution to bringing methodological rigor, and a relentless perseverance to the tech accountability space! https://t.co/96A5c9AlcN

2023-02-04 09:43:22 RT @LauraEdelson2: The deadline to apply for TechCongress has been extended to Feb. 16! This program is doing so much to bring technical ex…

2023-02-04 09:10:43 RT @BelferSTPP: Thanks to @rajiinio for joining our AI Cyber Lunch on Wed. Her talk highlighted the urgent need for oversight of widespre…

2023-02-02 23:49:54 @ziebrah Lol didn't you write a thoughtful blog post on exactly this topic?

2023-02-02 23:46:57 RT @yaleisp: Thank you so much @rajiinio for sharing your wonderful work on audits and accountability for automated decision systems with u…

2023-02-02 11:09:30 One of the tools in the current Mozilla Technology Fund cohort. Very cool! https://t.co/K5yb3GIsUx

2023-02-01 22:39:05 @nsaphra @vonekels @ryanbsteed @emilymbender @mmitchell_ai @SashaMTL @enfleisig Lol thoughtful twitter takes are an unappreciated art these days

2023-02-01 22:31:58 @vonekels @ryanbsteed @emilymbender @mmitchell_ai @SashaMTL @enfleisig @nsaphra Also almost forgot the incredibly thoughtful @ria_kalluri has also done some recent work on this as well- clearly lots of great folks to highlight in this space! https://t.co/Cb5cZx9v4c

2023-02-01 22:27:36 @vonekels @ryanbsteed And @emilymbender @mmitchell_ai @SashaMTL have been warning about LLMs for a very long time. @enfleisig &

2023-02-01 22:20:33 @vonekels And @ryanbsteed wrote about the over sexualization of generative AI models long before Lensa was even a thing: https://t.co/BFKYxAsZDE

2023-02-01 22:15:23 By the way - if you are a journalist looking to make sense of the bias issues with generative AI, I highly recommend speaking to those that have been thinking about this much longer than I have: @vonekels for example has an excellent paper on bias in face generation models. https://t.co/joQ1z7x3CH

2023-02-01 22:06:11 Giving a (hopefully shorter) version of the talk at Yale tomorrow as well for those that happen to be around! https://t.co/nxIPiypgo5

2023-01-31 02:32:53 RT @FAccTConference: To all the PhD students and researchers working on fairness, accountability and transparency (or related topics) in re…

2023-01-13 23:05:12 RT @alesherasimenka: New Research Just out in Journal of Communication: One of the first academic studies uncovering the economy of d…

2023-01-13 23:05:01 RT @CatalinaGoanta: Fascinating research on the monetization of misinformation, which zooms into public health misinfo to unveil economic i…

2023-01-12 23:39:17 RT @jachiam0: Somehow it doesn't seem to occur to them that these beliefs are offensive because they're not only wrong but also immensely d…

2023-01-12 18:35:08 RT @jlkoepke: the EEOC's Draft Strategic Enforcement Plan squarely focuses on the use of algorithmic systems throughout the hiring process…

2023-01-12 18:32:23 RT @NicolasPapernot: Only a few seats left for SaTML 2023! Join us to listen to our keynote speakers @timnitGebru &

2023-01-12 18:17:35 @Akumunokokoro @sshwartz @chrmanning Hm do you have any insight into why they are so uncooperative with regulators? That behavior is so unusual and aggressively defensive, and is what raised my suspicions about them years ago

2023-01-12 17:47:02 @Akumunokokoro @sshwartz @chrmanning Interesting - though I'm not sure all drivers are aware of their liability to the extent they'd need to be to properly supervise. Also even a non-automated vehicle manufacturer still has requirements. I don't know if this completely excuses the more outrageous Tesla car failures.

2023-01-12 07:20:37 Final word: the fact that pretty much everyone agrees that this is an incomplete, partial apology, but the divide is between "yes, that is unacceptable" and "let me try to convince you that your race is intellectually inferior" is really throwing me for a loop right now.

2023-01-12 07:05:00 @TheKoopaKing1 I'm not sure what you're expecting, but I won't be debating with someone about the supposed intellectual inferiority of my race. Bostrom is not talking about education access, you know that. Feel free to agree with Bostrom, but for many these beliefs are prejudiced &

2023-01-12 06:54:12 @TheKoopaKing1 @jordan_uggla Jordan did not come into this thread to argue with anyone but was helping to translate the text for those with screen readers. He chose not to type out a slur and that's a completely reasonable thing for him to do.

2023-01-12 06:42:25 @RockstarRaccoon I posted the crop not just to point to the fact that the email is horrible, but to highlight that this is not just about language. I'm quote tweeting the original post, it won't be hard for folks to find his comments as well?

2023-01-12 06:36:33 @TheKoopaKing1 @jordan_uggla Please ignore this - and thank you @jordan_uggla for writing this alt text to make the conversation accessible.

2023-01-12 05:01:01 @MichaelD1729 Thank you for saying this.

2023-01-12 04:48:14 @flotsam70272377 @nsaphra @thebirdmaniac Also, saying this before I log off - you can be racist and donate money to Black people or pity them or even be nice to them. The only criteria for racism is seeing a fundamental difference and choosing to imagine one group as superior to another because of their supposed race.

2023-01-12 04:45:36 @flotsam70272377 @nsaphra @thebirdmaniac I understand that word is socially charged and this may be upsetting to you, but if you share those beliefs, you need to understand that those are by definition prejudiced beliefs. And for you + others in an EA community, that's something you need to either denounce or admit to.

2023-01-12 04:42:17 @flotsam70272377 @nsaphra @thebirdmaniac Racism is about believing that one race is superior to another. Bostrom's stated beliefs, which he still does not explicitly denounce, are about racial differences in intellect, implying that he believes some races are more intelligent than others. That is racism, quite literally

2023-01-12 04:32:16 Adding this to clarify that my goal is not some unjust character assassination of Bostrom. It's upsetting that someone would write this at all but what is *most* upsetting is how he currently remains equivocal about beliefs that are harmful &

2023-01-12 04:26:51 @flotsam70272377 @thebirdmaniac @nsaphra This is what I find very unsettling. If EAs are also equivocal about the statement "blacks are stupider.." then that is good for all of us to know. If they do not believe this, they need to denounce this and understand his apology is incomplete.

2023-01-12 04:24:30 @flotsam70272377 @thebirdmaniac @nsaphra This is not about his past comments but his present ones. In his present day apology, he does not denounce the first statement of the original email and remains equivocal about something that is understood to be prejudiced and harmful.

2023-01-12 04:07:53 @flotsam70272377 This is understood to be a harmful and prejudiced belief. I won't say more than that, but if this does not represent what most EAs believe then you need to denounce this. If it does, then that is good for all of us to know.

2023-01-12 04:06:16 @flotsam70272377 If this community won't hold him accountable for that, I'm not sure if there's anything left to say here. If that first statement is something members of the EA community actually believe then I am here to inform you that it's understood to be a prejudiced and harmful belief.

2023-01-12 04:04:32 @flotsam70272377 Now that he does apparently know better, he still currently does not apologize for the initial statement, and is in fact quite equivocal about it in his statement.

2023-01-12 00:53:56 Anyways, this is my cue to log off for a while. I literally can't stomach seeing something like this, and I have no interest in engaging with whatever excuses him and his followers come up with. That first statement is *racist* - not to mention deeply hurtful and dehumanizing.

2023-01-12 00:53:04 This is the old email that Nick Bostrom, a leader in Effective Altruism, is now apologizing for. Horrifying, yes, but I assure you his "apology" is worse - he walks back on his "invocation of a racial slur" without addressing the initial statement of a false &

2023-01-11 17:49:33 @RoxanaDaneshjou lol love this

2023-01-11 16:41:33 I could go on and on honestly. Most recently, NTSB is still fighting them to address the safety recommendations from over *five years ago*: https://t.co/uJFUFts5Fy

2023-01-11 16:41:32 One of Tesla's big arguments at the time was that "no one could prove autopilot was on at the time of collision", and ofc a few years later we find out this: https://t.co/76DhBenYW8

2023-01-11 16:41:31 But articles giving users tips on how to "work around" Autopilot's clearly dangerous failure modes is starting to sound like those advocating for ad-hoc car adjustments to fix the 60s Corvair steering issues. At some point, it's clear that the problem is the car, not the driver.

2023-01-11 16:41:30 Like, yes, human users do hold some serious responsibilities, esp in the context of AI use. If you're going to turn an automated feature on, you typically need to monitor it and should not be negligent. @aselbst has actually written about this here: https://t.co/HWyAFX9BdZ

2023-01-11 16:41:29 The talk about these crashes is frustrating. Tesla is not a neutral actor &

2023-01-07 09:27:54 @KLdivergence Congrats, Kristian!

2023-01-07 09:09:04 @kashhill Congrats - I can't imagine how difficult it must have been to work on this story!

2023-01-05 21:46:09 @AmandaAskell @wsisaac @iamtrask Fair enough!

2023-01-05 21:45:17 @wsisaac @athundt @AmandaAskell @iamtrask Hm, I see what you mean - I'm not sure I agree but I also don't have a complete picture either. I guess I'm leaning towards being more cautious without the evidence, but understand those that see things differently.

2023-01-05 20:24:41 @athundt @AmandaAskell @wsisaac @iamtrask Also, I'll add that in my experience from an academic context, text does not need to be copied verbatim for it to count as plagiarism - in fact, in many failed attempts to cover their tracks, plagiarists will try to weakly re-phrase the text, though the content is the same.

2023-01-05 20:21:11 @wsisaac @iamtrask @AmandaAskell If they hadn't done anything, there would be a lot of cases of plausible deniability, a lot of "I didn't realize this was plagiarized or counted as plagiarism" and "I don't see anything about this technically being against the rules", so I understand their move to draw red lines.

2023-01-05 20:18:51 @wsisaac @iamtrask @AmandaAskell Also, by saying something about it, people now know which uses of LLMs are not endorsed - ie. that generating original text for papers is not something they should consider lightly, and that there are serious risks/consequences associated with the use of these tools in particular

2023-01-05 20:16:22 @wsisaac @iamtrask @AmandaAskell I feel like it isn't immediately clear to those using the LLMs that they are on the hook if their tools lead them to plagiarism (eg. using an LLM, they may not know of or recognize the source). This policy clarifies that this is a risk &

2023-01-05 19:07:29 RT @johnfsymons: It has happened. Just rejected a paper where format of large chunks of text indicated sloppy use of #LLM by the authors. C…

2023-01-05 19:06:06 @wsisaac @iamtrask @AmandaAskell +I'd argue that it was wise to get ahead of things &

2023-01-05 19:00:08 @wsisaac @iamtrask @AmandaAskell Hm, I'd argue it is an ethics matter - there's a research integrity issue at play here if people are generating content in papers from a large language model and potentially plagiarizing, compromising on correctness, etc.

2023-01-05 18:37:41 @KordingLab Hm what do you mean by "cross-cutting" thinking?

2023-01-05 18:09:47 This was such an interesting conversation and it's great to see it organized this way - ultimately, articulating clear community expectations around the ethical use of these LLM tools is important, and I'm glad to see ICML starting that discussion: https://t.co/vwTYrRgeGp

2023-01-04 16:44:25 @leonieclaude @RepublikMagazin @syllabus_tweets @AnnaNosthoff oh, congrats lol glad I could play a small part in your success here

2023-01-04 13:03:24 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran yeah, I've learned a lot from this thread on how people are using ChatGPT - I hadn't previously realized how much non-native speakers were already finding it helpful as an enhanced Grammarly

2023-01-04 13:00:17 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran curious - how are you thinking they should amend the policy? (in the context of this year, with weeks to the final deadline)

2023-01-04 12:58:19 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran We have proof that ChatGPT generates spam, though we're unsure how likely that spam is to fool reviewers, etc. I understand why you may not agree, but I do think ICML organizers giving themselves more time to prepare for and discuss how to handle LLM-enhanced papers is reasonable

2023-01-04 12:55:23 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran But I think my position is still the same. LLMs can be used for a variety of things outside improving text, adding that context is helpful but it's unclear what the reviewer/ACs, etc are supposed to do with that info &

2023-01-04 12:54:06 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran Oops, yeah you're right, I think I misunderstood his tweet.

2023-01-04 12:46:27 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran There's only a couple weeks before the submission deadline! And there's a lot of work still left to do to recruit reviewers, set up bidding, etc. I agree something more democratic would have been the right approach, but they didn't have time and had to make a call very quickly.

2023-01-04 12:43:33 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, I see what you mean, but I admit I still worry about the chaos that would unleash, possibly giving an implicit green light to applications beyond the good use cases... Thanks for sharing thoughts on this though, it gave me a lot to think about!

2023-01-04 12:37:31 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Because since it wasn't designed for X, it does many other things, some of which are actively harmful. I don't have a problem with anything - I'm not denying that this can be a useful tool, I just understand the perspective of those that choose to be cautious.

2023-01-04 12:33:22 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran ok I definitely did not say this - I said improving communication skills in English will be helpful for getting more comfortable in English-speaking research communities. Even ChatGPT doesn't change this unfortunately, and this is why Grammarly is designed as an education tool.

2023-01-04 12:22:52 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran I like @boazbaraktcs 's proposal, but that would require months of setup &

2023-01-04 12:20:13 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, I understand that. But I'm curious what you're thinking would have been a better position for them to take this year, under the short notice. Deadline is in just a couple weeks - would setting no rules not have led to chaos? Is there another approach that would fare better?

2023-01-04 12:15:01 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, this is my suspicion of what I think might be best for language learning, but I definitely didn't mean this as advice. Anyone can do what they please! My point was that Grammarly is designed explicitly as a learning tool

2023-01-04 12:10:37 @boazbaraktcs @ducha_aiki @_onionesque @PreetumNakkiran Though I'll say I have no idea how anyone would restrict the use case for the current version of ChatGPT - ie. enforce using it for x but not y. That uncertainty &

2023-01-04 12:05:43 @boazbaraktcs @ducha_aiki @_onionesque @PreetumNakkiran Ok I see what you mean. I agree with that! Main point is that there should be some level of consequences for spammers once caught - though you're right that current policy does not differentiate adequately from other, more benevolent LLM use

2023-01-04 11:58:00 @thegautamkamath @_onionesque @boazbaraktcs @PreetumNakkiran ok, yeah, this is a great point!

2023-01-04 11:57:01 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Oh, I didn't realize this! That's disappointing to hear. I don't agree with that.

2023-01-04 11:56:05 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran I'm not giving advice, just trying to explain the differences that people see in the two tools. Others made an analogy to Grammarly, and I'm pointing out that this does not always hold. There are differences in these tools and reasons people are more worried about ChatGPT.

2023-01-04 11:52:49 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Amazing to hear! But is something like Grammarly not providing the same support? Is this just a quality difference? My concerns are of ChatGPT's ability to generate content from thin air - if we could ensure it could be restricted to use as a Grammarly 2.0, that's less worrying.

2023-01-04 11:48:54 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran I get that, which is why I like tools like grammarly but this isn't really the main way I see chatgpt being used: https://t.co/goRvCcYWRA

2023-01-04 10:54:22 RT @FAccTConference: Reminder: deadline approaching for #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan…

2023-01-02 02:21:52 @ameliovr @tdietterich Ah actually seeing the other replies, I think others have addressed this - interesting discussion!

2023-01-02 02:19:26 @ameliovr @tdietterich Yeah this is my understanding as well - curious if you're seeing things differently @tdietterich ?

2022-12-29 18:30:07 @Abebab I've been thinking a bit about this lately - content moderation avoidance is very much a thing for malicious actors hoping to spread misinformation: https://t.co/In0mMqaM7y

2022-12-28 14:15:01 @yoavgo @evanmiltenburg sorry, not sure what this is referring to?

2022-12-28 09:53:47 @RWerpachowski @yoavgo @evanmiltenburg I agree with this, but not everyone is on the same page here. I've already heard of startups trying to use ChatGPT for mental health counseling and medical advice, both very high stakes applications. When released publicly without guardrails, that kind of thing will just happen.

2022-12-28 09:51:11 @RWerpachowski @yoavgo @evanmiltenburg Depends on your moral philosophy! Personally, I'm opposed to a strictly utilitarian view because consequences for that one person could be quite severe or unjust (eg. medical misinfo leading to death, etc.) and those experiencing harms are already the most vulnerable out there.

2022-12-28 09:42:21 @RWerpachowski @yoavgo @evanmiltenburg And the problems we see now are valid justification to delay widespread release. I personally don't think that's an unreasonable ask and don't quite understand the strong resistance to that position.

2022-12-28 09:40:45 @RWerpachowski @yoavgo @evanmiltenburg Fair! I'm perhaps conflating your position with others I've encountered recently - but imo the difference between some of the recent releases and what we'll see in deployment is not that great. What we see as issues now will only get worse given wide public releases of the tools.

2022-12-28 09:36:51 @RWerpachowski @yoavgo @evanmiltenburg Yeah, we're on the same page there - this is why I work on audits, to collect empirical evidence of real harms. I also know some warnings are being heeded &

2022-12-28 09:32:18 @RWerpachowski @yoavgo @evanmiltenburg My understanding is that the named products are beta releases - they release the products as such so that they can effectively stress test the product before wider release. I think you're anticipating huge changes for an "actual product" but historically that's not been the case.

2022-12-28 09:28:33 @RWerpachowski @yoavgo @evanmiltenburg Oh, I really didn't mean things that way! My point is that you don't seem like you're going to change your mind with the information we have, so best to end the conversation here, since it doesn't seem productive. You &

2022-12-28 09:24:48 @RWerpachowski @yoavgo @evanmiltenburg Trust me, Google does know - it is just not public.

2022-12-28 09:22:59 @RWerpachowski @yoavgo @evanmiltenburg "We" don't know anything? This is not how technology works - one can't make assumptions about the expertise of every user, esp when a tool is broadly available? I can counter your anecdotes w/ those of novice programmers unable to identify serious silent bugs generated via codex.

2022-12-28 09:19:43 @RWerpachowski @yoavgo @evanmiltenburg Either way, you seem committed to your position in spite of the available evidence, so I'm going to tap out of the conversation and wish you the best!

2022-12-28 09:18:10 @RWerpachowski @yoavgo @evanmiltenburg Not sure what you're expecting to see as a difference in the deployment of this product as a "finished product" vs a "research preview" lol - the same user interactions occur in both cases &

2022-12-28 09:13:00 @evanmiltenburg @RWerpachowski @yoavgo lol thanks. Though it's mostly the work of others as well - the cautiousness of the field isn't a given, and is the result of advocacy &

2022-12-28 09:07:21 @RWerpachowski @yoavgo @evanmiltenburg Codex is deployed via Github, ChatGPT is deployed and we know these are problems clients actually experience. I'm not sure what you're expecting will magically be different in deployed products but it's not a remarkable change in circumstances - the harms are clearly still there

2022-12-28 09:04:46 @RWerpachowski @yoavgo @evanmiltenburg Only the BERT seizure example went viral on Twitter - issues with negation continue to happen today in search, but not reported as publicly. Also "not of deployed products" is false - GPT-x is deployed, Galactica was deployed and both have been found to obviously have these serious issues.

2022-12-28 08:54:29 @RWerpachowski @yoavgo @evanmiltenburg + we don't need to deploy something to anticipate realistic harms that can arrive as a result - that's safety 101. Those building these systems know of misinfo, bias, etc (see: https://t.co/PQFob7seSQ). Pretending this won't have disastrous consequences upon deployment is naive.

2022-12-28 08:50:19 @RWerpachowski @yoavgo @evanmiltenburg There's quite a lot of evidence of the harms they've already caused - especially given the actual use of BERT to some degree in Google search. We mention quite a few in here: https://t.co/FYlGlWilLg

2022-12-28 08:39:11 @yoavgo @evanmiltenburg Though personally, my take has always been "this should be built differently" or "this should not be deployed without being evaluated for x or y or z" - people are just worried about the harms that come from careless deployment, I doubt many take the stance of "never build this".

2022-12-28 08:35:52 @yoavgo @evanmiltenburg Yeah, I think there's arguments of the kind "perhaps our energy is better spent elsewhere / on different types of problem, since it doesn't seem like this is a good idea to build" and I wonder if impossibility proofs are necessary to make such arguments persuasive (probably not).

2022-12-27 22:06:57 RT @rmichaelalvarez: Next month we will launch a new initiative at @Caltech, the Center for Science, Society, and Public Policy. I'm excit…

2022-12-26 12:14:21 @yoavgo @CriticalAI @emilymbender @EmilyBender I find this line of reasoning v strange - at minimum, the paper at the core of the article quite clearly outlines the involved argument, ie. there are known modes of engagement in user interactions for information retrieval &

2022-12-23 13:25:42 @mchardcastle @bayesianboy +1, the product liability lens is present in the current EU AI Act draft but missing in a lot of US policy discussions which disproportionately focus on bias. That being said, there's definitely some room to consider functionality under disparate impact: https://t.co/PSNtGap1S5

2022-12-23 13:20:10 @Miles_Brundage @zhansheng @tshevl @AllanDafoe @Abebab LOL

2022-12-21 23:57:35 @littlebitofawk Completely entitled to your perspective - I was very careful in that tweet not to tell people how to vote! We can acknowledge wins regardless

2022-12-21 18:26:23 @realCamelCase lol is this a joke

2022-12-21 18:24:16 Kind of unreal how much the union has won for UC student workers through this strike - if the current contract is ratified, in a couple years, it will result in an over 50% wage increase! Very grateful to those that have been tirelessly organizing &

2022-12-21 13:17:15 RT @mozilla: Today’s social media status quo isn’t cutting it, so Mozilla is exploring an alternative. In early 2023, Mozilla will be testi…

2022-12-20 03:07:55 RT @msbernst: "Let's think step by step" increases the bias of large language models. Avoid if your task involves social inferences! Work…

2022-12-17 19:20:35 @NerdyAndQuirky @pcastr also, for ranting's sake: my issue isn't that RL benchmarks are *simple*, it's that they seem completely *disconnected* - they don't even pretend to be abstractions of real world problems So yeah I'm critical of eg. Meta's Habitat - nice graphics don't fix task design issues!

2022-12-17 19:15:52 @NerdyAndQuirky Not sure what a good reference to this problem is, because no one likes talking about this in machine learning. I wrote a position paper about the issue once: https://t.co/hMqXsydjar Wonder if there's anything RL specific? @pcastr probably has a clue of where that convo is at!

2022-12-17 19:13:08 @NerdyAndQuirky But I think the bigger issue they have is in (2) task design. Like, the benchmarks the community obsesses about making improvements on are completely arbitrary, typically just any random game with a clean set of rules, rewards and fixed actions (eg. Chess, Go / Atari, Dota, etc.)

2022-12-17 19:10:14 @NerdyAndQuirky Sure. (1) Most of RL papers are not reproducible research, and I believe that's what's concretely holding them back the most: https://t.co/J2Xw4jRJM4 There's been some recent progress on getting things to a better state, but long road ahead - see: https://t.co/G9F8JGtRzC

2022-12-17 19:03:54 @beenwrekt lol but meaningful, low stakes applications do not make for nice demos, Ben!

2022-12-17 18:35:08 Unpopularish opinion but I don't think it's mainly the sim2real problem that stunted RL's impact - that community tends to focus on the wrong problems. And I can see a similar issue blocking LLM's future impact. https://t.co/13YQhFw5i8

2022-12-17 02:58:45 @colin_fraser Totally agree and also a pattern that's evident with YouTube - a lot of why there's so much misinfo on there is because content creators who intend to deceive face no repercussions and in fact game the platform's features (inclu the algorithm) the most: https://t.co/2W57qA2pa5

2022-12-17 01:39:11 @JubaZiani @Aaroth @Adam235711 I'd agree with this! + FAccT tends to include a lot more empirical work (eg. audits, data releases, experiments, etc.) + AIES includes more participants from a philosophical/policy/law perspective. Though there's lots of cross-pollination, so may not matter too much actually

2022-12-17 00:59:46 There are other aspects of these platforms though - user interfaces, actual content format, nature of user interactions, etc - that *does* have a huge impact on these downstream behaviors &

2022-12-17 00:57:30 Research points to this: https://t.co/PUhrPeRGbH It's pretty much known at this point that targeted ads/recs don't actually work as well as we assume they do in influencing downstream behavior. If the algorithm can't even get me to buy a sofa, how can we say that it sways votes?

2022-12-17 00:50:59 Appreciate this. In fact, there's something I've been calling the "algorithmic irrelevance" theory, where I suspect that most of what is problematic about online platforms (ie. addiction, misinfo, radicalization) is actually mostly due to design elements outside of the algorithm. https://t.co/XlevqHLphe

2022-12-17 00:43:37 @followlori @brianavecchione Hope you have a great holiday as well!

2022-12-17 00:35:26 Glad to see OAT team member @brianavecchione receive some recognition for her role in this project! It's been a pleasure to work with her on this so far! Details here: https://t.co/d505vH7uUH https://t.co/5Kwo5J3hrs

2022-12-15 21:03:07 @overlordayn @neuralreckoning @pfau Intuitively, for this reason things should be opt in but things are pretty complicated...even Creative Commons has been really confused about what guidance to provide: https://t.co/Z7BMK5iUiy

2022-12-15 21:01:21 @overlordayn @neuralreckoning @pfau Fwiw this same issue came up with IBM's "diversity in faces" dataset &

2022-12-15 20:53:10 RT @NicolasPapernot: The list of papers accepted @satml_conf: https://t.co/23PLF2bqIh I'd like to extend a big thank you to all the PC me…

2022-12-13 16:18:55 @KordingLab @beenwrekt There's an interesting point here about accessibility though - ie. OpenAI has an API anyone can use, meaning the model's impact increases &

2022-12-13 16:16:22 @KordingLab @beenwrekt Sure, but I mean they drive research at least - and the way Deepmind rolls them out, they still make the news and break into mainstream consciousness.

2022-12-13 16:14:00 @beenwrekt @KordingLab Both of them have ridiculously flashy demos and that's been hugely influential - don't you remember? there was a whole *movie* on AlphaGo! And I think people severely underestimate how excited people used to be about BERT - it was everywhere! That set the blueprint for OpenAI imo

2022-12-13 16:08:35 @beenwrekt @KordingLab Yeah, I don't disagree - but in terms of "who started this madness" I still feel like it's Google/Deepmind? Even the original PR machine for AGI etc was coming from Deepmind before Open AI was even founded.

2022-12-13 15:31:00 @beenwrekt @KordingLab Hm - Deepmind/Google remain a pretty consistent source of flashy demos to this day, powered by GCP credits, TPUs and... nature pub hype. BERT wasn't the best ofc but it was the pioneer imo. I'm not arguing that gpt-x isn't an improvement, but it wouldn't exist without BERT.

2022-12-13 15:06:42 @beenwrekt @KordingLab A fun synopsis on that era here: https://t.co/jECntrLcqE

2022-12-13 15:04:53 @beenwrekt @KordingLab fwiw I think Google is really the first mover here, with BERT and the subsequent sesame street models. That is really the origin of this madness lol

2022-12-13 14:27:23 RT @sharongoldman: ***BREAKING UPDATE***: Enforcement of NYC's AI employment law is being delayed until April 15, 2023. It was supposed to…

2022-12-13 04:59:59 This over-sexualization of female subjects in generated images is something we've known since at least 2020 (see @aylin_cim &

2022-12-13 04:23:39 RT @Melissahei: I tried the viral Lensa AI portrait app, and got lots and lots of nudes. I know AI image generation models are full of sexi…

2022-12-12 21:18:38 This paper keeps coming up over &

2022-12-12 18:23:39 Excited to join this panel today! David's book was such a lovely and informative read - highly recommend. https://t.co/Qh6QU7mjOP

2022-12-12 13:16:53 @deingaraus @Abebab Thanks - will flag this for the copyeditor!

2022-12-10 02:21:50 @JoannaBlackhart @DocDre @Abebab Hm not sure - I can't even read it without signing in, sorry

2022-12-09 18:50:46 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai In response to this, we cite several instances where Meta leadership blame the *users* for what happened with Galactica - even though the scenario that played out was to be fully expected, given what we know about harms. This is where we came from - ofc, you don't have to agree

2022-12-09 18:48:10 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai I'm not talking about the quality of the technological output - I'm talking about the nature of the handling of involved harms

2022-12-09 18:34:21 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai But I don't think it's unreasonable to point out that we're not as far forward as we think - and that critics acknowledging these limits are still being dangerously dismissed. Years after Tay, and we're still choosing to blame users for what happened with Galactica? Frustrating.

2022-12-09 18:31:32 @andywalters @WIRED @Abebab Hm - I don't agree with this, though I see where you're coming from. We could definitely have done more to acknowledge some of the progress that's been led by eg. @huggingface folks like @SashaMTL, @mmitchell_ai etc.

2022-12-09 18:28:03 RT @struthious: 'it seems to be the job of the marginalized to “fix” them... The weight falls on them, not only to provide this feedback, b…

2022-12-09 15:51:36 RT @Abebab: "We critique because we care. If these companies can't release products meeting expectations of those most likely to be harmed…

2022-12-09 15:51:11 Me &

2022-12-09 08:32:55 @itsHabeeb_AB @OpenAI Nope, I did not!

2022-12-08 19:27:14 @jjvincent Agreed with everyone else - well deserved!

2022-12-08 19:26:55 @jjvincent omg, congrats!!

2022-11-15 18:00:11 RT @lkirchner: Great to see my work with @MattGoldstein26 at @nytimes and @themarkup cited in this new @CFPB report out today on errors in…

2022-11-15 17:49:44 @thatMikeBishop I notice a particular dogmatism common in that crowd but also, I can see this isn't something I'll be successful in convincing you of, which is fine. Wish you and your peers the best as you process your emotions!

2022-11-15 17:48:14 @thatMikeBishop I don't agree with this - I'm in a CS PhD program, I interact with non-EA "specialists" all the time, and have no problem having meaningful, respectful disagreements with them.

2022-11-15 16:52:47 RT @transparenttech: It's launch day for the Coalition for Independent Technology Research! Society needs trustworthy, independent resear…

2022-11-15 15:52:55 @thatMikeBishop Many accepted norms by EA folks (ie. malaria nets, AI x-risk, etc.) were just seen as given priorities that were difficult (at least for me) to meaningfully push back on - the pitch was to donate to supposed trustworthy actors, of which only the natural outsiders seemed to doubt.

2022-11-15 10:50:05 @AdtRaghunathan @_christinabaek @jacspringer Hey!!

2022-11-14 19:22:43 @ellis2013nz yeah, tbh I don't fully understand their logic here either - but for me, it's another example (like their bet on crypto) that they've looked at at least one other situation and followed the "ends justify the means" argument before! It's been part of their playbook long before SBF.

2022-11-14 19:11:41 @ellis2013nz Oh, I believe there is a pretty direct link - EAs see advanced AI as an existential risk

2022-11-14 19:01:20 @ellis2013nz Think of GPT-3, DALL-E, etc at OpenAI as examples. Many have warned about the dangers of developing such under-specified &

2022-11-14 18:57:01 @ellis2013nz model = large machine learning models being presented as "AGI" by people in the effective altruism community

2022-11-14 18:56:10 I really hope, for the sake of the wellbeing of those still involved in EA, that their leaders take responsibility, rather than attempt to circumvent it. It's clear that changes need to be made in many ways, rather than just ejecting this one person as some anomaly when he's not.

2022-11-14 18:42:31 Example: We've been pointing out for years that the blind development of large "general" models pose a threat to real people. The EA-funded efforts to continue building such models despite known harms have been justified by the exact kind of reasoning that led to SBF's downfall.

2022-11-14 18:42:30 This is a textbook example of the "No True Scotsman" fallacy: https://t.co/n0NqOwlnuZ I get that MacAskill &

2022-11-14 18:17:18 RT @FAccTConference: We're so excited about next year's #FAccT23 Conference! Taking place in Chicago, in mid June, the General Chairs are A…

2022-11-14 16:15:25 @yonashav All bystanders involved - esp those w/ institutional power - contribute to that environment. You can't point to a bad actor &

2022-11-14 16:10:25 @yonashav I'm quite familiar with this "bad apples" argument. What I've learnt from other contexts - eg. violent cops, abusive academic advisers, etc. - is that bad apples can only cause harm once enabled by an environment void of accountability.

2022-11-14 15:01:01 This is a catastrophic collapse of a community that clearly meant a lot to some, and I feel for them. But I will say this: the whole premise of EA, from the beginning, has been "trust us" - they need to acknowledge the value of the poc &

2022-11-14 13:44:14 @BetsyDupuis @avt_im @BlackHC @timnitGebru Also DAIR operates under a completely different incentive structure from academia. I've never had an encounter with her or anyone else there where they cared in the slightest bit about citation numbers or who is quoting them - they're certainly not scared to critique OpenAI lol

2022-11-14 13:39:05 @BetsyDupuis @avt_im @BlackHC @timnitGebru This isn't true? Timnit's regularly commented on the copyright issues involved with all the generative models developed by OpenAI, including Co-pilot (esp their use of open source code). Her lack of response to you specifically is likely just due to basic capacity constraints.

2022-11-14 13:17:46 RT @conitzer: New tenure-track position in Ethics &

2022-11-11 18:38:29 RT @agstrait: ALERTALERT@AdaLovelaceInst are hiring a Visiting Senior Researcher in Algorithmic Auditing - if you're interested in spe…

2022-11-10 16:27:42 RT @ellgood: Thanks for shout out @StanfordHAI: My paper with @juliatrehu on AI Audit Washing and Accountability. "This is an important pie…

2022-11-10 14:10:16 RT @schock: Okay @UCSD! "The Designing Just Futures Cluster Hire seeks to recruit diverse faculty engaging in innovative and interdisciplin…

2022-11-10 09:56:22 RT @benzevgreen: Deadlines coming up soon for two faculty jobs at Michigan focused on the intersection of technology and policy:1. Pr…

2022-11-10 09:54:11 RT @wihbey: Apply! @Northeastern Faculty position in AI &

2022-11-09 16:27:14 RT @natematias: How can software systems support citizen scientists to do causal audits of algorithm decision-makers?Excited to join CSCW…

2022-11-09 05:03:18 RT @sayashk: Our paper on the privacy practices of labor organizers won an Impact Recognition award at #CSCW2022! Much like the current m…

2022-11-08 20:14:08 RT @federicobianchy: Text-to-image generation models (like Stable Diffusion and DALLE) are being used to generate millions of images a day.…

2022-11-08 14:14:55 @voxbec Oh, amazing! Appreciate this so much

2022-11-08 09:41:55 @athundt Hey - we accepted everyone that sent us a request? Are you still waiting for a slack invite? If so, we must have missed you, please shoot us another email!

2022-11-07 23:36:52 @emilymbender Don't think anyone on our team sees ethical considerations as secondary to technical merit - in fact, @SashaMTL in particular fought hard for ethics reviews to factor meaningfully into the author/reviewer discussion period because of her belief in it as a primary consideration!

2022-11-07 23:33:22 @emilymbender I realize this wasn't the best wording for that though but the general idea was to avoid setting "red lines" via the ethics review process and to do that via norm-setting practices instead (such as community deliberation on the Code of Conduct, etc.).

2022-11-07 23:32:02 @emilymbender - it was meant to comment on the fact that legal &

2022-11-07 23:30:55 @emilymbender I understand your perspective here, and I realize how it could be read otherwise but AFAIK this paragraph was not meant to present a false dichotomy between technical merit &

2022-11-07 19:26:03 @justinhendrix Yeah, been feeling the same way lately. Wasn't built for this purpose but some of us are here if this overlaps with your interests: https://t.co/2Tr2BQFClg

2022-11-07 16:36:21 @suryamattu So excited to hear about this, Surya!+ you might be interested to join our slack community for those doing algorithmic audit work, as another way to stay in touch: https://t.co/2Tr2BQFClg

2022-11-07 15:35:04 RT @suryamattu: I am excited to officially announce the launch of the Digital Witness Lab, a new research lab I am starting @PrincetonCITP…

2022-11-07 14:17:03 Our thinking about the diversity of ethical challenges in ML research has also matured a lot over the years. There's an increasing awareness of how ethical oversight is meant to be integrated into the research process &

2022-11-07 14:17:02 To me, it's remarkable just how much the conversation has evolved in just a few short years - feels like just yesterday that @IasonGabriel pioneered the effort with broader impact statements for NeurIPS 2020 &

2022-11-07 14:17:01 Hard to believe but the @NeurIPSConf Ethics Review process is over - and has completed its third year! In a blog post, with co-chairs @SashaMTL, @wsisaac &

2022-11-06 13:39:51 Already amazed at who has joined this So incredible to see the diversity &

2022-11-06 13:30:35 @CatalinaGoanta Also feel free to link me to your papers

2022-11-06 13:29:32 @CatalinaGoanta Do you have some examples of this? To lure folks into YouTube Red, for example, they provide professionally produced content (ie. YouTube Red TV shows &

2022-11-06 13:20:29 @CatalinaGoanta Like, no one would pay a subscription for user generated content, right? (at least I can't think of a situation where this is the case...) Which is why those platforms tend to brand as social media platforms &

2022-11-06 13:16:11 @CatalinaGoanta Yeah, for sure! Though I think perhaps my intuition of a difference is more tied to diffs in content creation practices - in netflix/spotify, they operate as distribution platforms for professionally produced content vs. youtube etc where it's user generated content? Not sure tho

2022-11-06 13:12:29 @agstrait @carlykind_ @K_singh_P Thank you! Looking forward to checking that out!

2022-11-06 04:29:06 @yoavgo @K_singh_P ... possibly higher tolerance in the latter scenario!

2022-11-06 04:28:30 @yoavgo @K_singh_P yeah, exactly - since the friction to just hop off the platform is much lower than what's required to unsubscribe

2022-11-06 04:13:36 @K_singh_P Interesting. How is quality typically measured here?

2022-11-06 04:11:01 @841io Do you have a sense on how this impacts content creation, though? Clear differences of quality/flexibility in content under the sub model &

2022-11-06 04:07:52 @841io Oh nice - that's really interesting, thanks for sharing! Yeah, someone else also suggested that comparing the freemium / paid models on the same platform would be the kind of investigation you'd want to do on this (ie. Youtube vs. Youtube Red).

2022-11-06 04:04:49 @natematias Oh nice! Excited to check out that article once it's out!

2022-11-06 04:04:09 @K_singh_P Aha, I have all these possible intuitions but I'm genuinely not sure, which is why I asked aha

2022-11-06 04:03:45 @K_singh_P Also each has a very different mechanism for content creation (ie. more professional / less dynamic &

2022-11-06 03:58:49 @K_singh_P I'm not sure - both are trying to keep users on the platform, but for different reasons. One is about minimizing cancellation rate of subscribers &

2022-11-06 03:47:21 A random question but has anyone done research on the differences between the recommendation ecosystems for subscription-based media platforms (ie. Spotify, Netflix, etc.) vs. ad-revenue based user content platforms (ie. YouTube, etc.)? Often conflated but feels very different.

2022-11-04 17:52:08 @mmitchell_ai @KLdivergence +1! You all have our full support, Kristian. Let us know whatever you need!

2022-11-04 17:43:55 RT @WIRED: Breaking: As part of an aggressive plan to trim costs that involves firing thousands of Twitter employees, Musk’s management tea…

2022-11-04 17:24:39 RT @KLdivergence: All of twitter’s ML Ethics, transparency, and accountability team (except one). was laid off today. So much for that resp…

2022-11-04 17:20:01 RT @jackbandy: A sample of the team's contributions to platform transparency and responsible machine learning:"Candidate Set Imbalance an…

2022-11-04 17:16:35 RT @SashaMTL: Interested by the @NeurIPSConf ethics review process? Take a look at the blog post below and, more importantly, come to our…

2022-11-04 15:43:14 @KLdivergence I'm so sorry, this is awful Hope you're doing ok

2022-11-04 13:59:18 Man, I'm gutted about this Twitter META news - those guys were the reason I had such a blast @FAccTConference this summer! Truly amazing people, recruited &

2022-11-04 13:48:54 Hi people - so sorry for the delay

2022-11-03 17:23:37 RT @NatureNV: As part of @nature’s special issue on #racisminscience, @abebab looks at the massive effect that the #gendershades study had…

2022-11-03 17:23:12 RT @statnews: Opinion: STAT+: HHS’s proposed rule prohibiting discrimination via algorithm needs strengthening https://t.co/6CpDMlwTAV

2022-11-03 17:08:47 RT @christelletono: NEW REPORT ALERT: @ystvns @MominMMalik @SonjaSolomun @supriyadwivedi @sambandrey and I analyze the Canadian government…

2022-11-01 16:56:20 @timnitGebru @AJLUnited @jovialjoy

2022-11-01 14:30:12 RT @Wenbinters: new report out from @EPICprivacy Screened + Scored in D.C. https://t.co/7fDrM2LNhP three main goals: - birds eye view of…

2022-11-01 14:19:21 RT @sarahbmyers: Excited to moderate a conversation on Automated Decisionmaking Systems this morning with @random_walker, @mikarv and @raji…

2022-11-01 14:19:06 RT @ambaonadventure: .@FTC #PrivacyCon22 is live! We're starting with two stellar panels on surveillance &

2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD

2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…

2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…

2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &

2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…

2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that

2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GPT-x, seems like many tweaks operationalize some prior knowledge of the language form.

2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.

2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position

2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!

2022-10-24 17:35:47 @pfau @memotv lol depends on how you interpret that quote - some don't see that as a dismissal of linguistics but a note of the lack of self-awareness in NLP that gets heightened once you take linguistics out of the equation (ie. it's easier to convince yourself you're making progress w/o them)

2022-10-24 17:33:05 @pfau @memotv @yoavgo I disagree with both of you though :)

2022-10-24 17:31:53 @pfau @memotv Oh sorry, to clarify: didn't mean to imply you were involved in the linguistics spat at all - that was another debate that happened a couple months ago, with I believe @yoavgo or someone else indicating linguistics hadn't done as much for NLP as they thought they did.

2022-10-24 13:57:51 @memotv The latest iteration of this beef started with neuroscientists

2022-10-24 13:55:21 RT @vonekels: We’ll be presenting our Bias in GANs work at @eccvconf on 25/10 at 15:30.One of our findings ~ Truncation commonly used to…

2022-10-24 13:53:03 @iamtrask wait this is incredibly disappointing

2022-10-23 17:48:37 @neuropoetic @pfau @martingoodson Yeah I'm thinking this is just trolling at this point - even the Zador paper that prompted all of this provides this context in its exposition

2022-10-23 17:42:56 @pfau @martingoodson Please read anything on the internet available to you - the facts of Hinton's career aren't even worth debating about: https://t.co/g5UW0wHLxR

2022-10-23 17:35:36 @pfau @SiaAhmadi1 Hinton's degree was in cogsci but computational neuroscience was the subfield he was most active in for a long time. Things like neural nets were initially derived as models of the brain to better understand the brain - he just had suspicions it could also inform info processing

2022-10-23 17:24:41 @pfau @SiaAhmadi1 Still such a bizarre and incorrect take - where do you think that intuition comes from..?

2022-10-23 12:08:40 Annoyed by this latest trend of machine learning researchers insisting that they absolutely did not need anything that came before. It's obvious that…

2022-10-23 11:53:50 @pfau @martingoodson Pretty sure Geoff has read many neuro papers - his background is literally in cogsci? Also, conferences he founded like NeurIPS began focused on attempts at modeling brain behavior using computers to better understand the brain - for a while, there was still a comp neuro track.

2022-10-20 17:32:27 RT @ruchowdh: The 8 bit bias bounty is now live!! Thank you @Melissahei for the article on what the bounty program means in context of the…

2022-10-18 21:56:01 I agree with this. But also, there's so many cases where we can put together an oversight board and just ...make standards.. I don't know why it is always presented as this impossible task. That's literally what happened in every other industry with a mature audit ecosystem. https://t.co/RMynT318ro

2022-10-18 21:49:36 RT @kanarinka: Landlords increasingly use Tenant Screening Systems to make decisions about prospective renters. In this paper @wonyoungso…

2022-10-15 17:57:52 @zacharylipton Phat! Lol who even taught him these words

2022-10-14 12:35:14 @a_bacci @hrw @F_Kaltheuner @AmosToh @deblebrown @kerinshilla Congrats Anna!

2022-10-14 11:11:16 @HaydnBelfield @mmitchell_ai @carinaprunkl @jesswhittles Was just about to link to this. My least favourite version of this discourse is the "technical vs non-technical" safety, "long-term vs short-term" harms talk, etc. Completely ridiculous false dichotomies.

2022-10-12 16:59:26 @daniellecitron @macfound Congrats @YejinChoinka!! So happy to see you on the list - well deserved

2022-10-11 18:40:09 @npparikh Most audit policies just happen to be incomplete, not accounting for the full range of factors it would take for the audits to not become shams. There's a lot we can learn about this from other industries, and imo it is worth getting this right.

2022-10-11 18:37:30 @npparikh I can see where you're coming from, but I don't agree. There's lots of precedent of third party audits playing a critical role in accountability in other industries, especially when standards are set for auditor conduct and audit expectations. Check out: https://t.co/nXiIE6ELAn

2022-10-11 18:30:30 @npparikh Yeah, I can also see this being a great space to discuss critiques as well!

2022-10-11 18:26:09 @npparikh I'm thinking both! We need better norms but also sometimes people need help on specific cases

2022-10-11 16:35:45 RT @BedoyaFTC: 35 years ago last night, my mother, brother and I landed at JFK on a long Lufthansa flight. My father had gone ahead of us

2022-10-11 13:51:29 @wsisaac @agstrait lol yeah I thought of you William - wondering if there can be a way to do this via REALML, but something open and low effort (like opening up a section of the Slack or something?)

2022-10-10 17:46:37 @_jasonwei hm, I don't think so - as far as I can tell, her first tweet was about having no evidence for claiming LLMs can get to above-human-level understanding/reasoning, which doesn't rule out the possibility of LLMs doing cool things, and possibly beating humans on certain practical tasks.

2022-10-10 17:43:01 @sleepinyourhat I'm also unsure about how much we should actually expect in terms of differences from BERT failures &

2022-10-10 17:38:59 @sleepinyourhat Sure - but I don't know if there's actually any signal to any of those hints

2022-10-10 16:52:35 We want to potentially create a space for folks doing or interested in algorithm audit work. There's so many of us across disciplines (journalism, HCI, regulators, law, etc.) and not a lot of coordination, would be great to have some communal space to discuss &

2022-10-10 16:38:31 RT @DrMetaxa: A tweet for Algorithm Auditors &

2022-10-10 15:56:45 @raphaelmilliere @sleepinyourhat Hm but even the @sleepinyourhat paper points out that adversarial design isn't the same as working twds principled abstractions of the linguistic competences we hope models have. It's a step forward to incorporate robustness measures but this doesn't guarantee meaningful tasks.

2022-10-10 15:34:25 @raphaelmilliere @sleepinyourhat Great point! I agree you can't know one way or the other, but task artificiality isn't just about how "easy" the test is - but also how carefully constructed the test is. I notice a lack of principled justification in a lot of LLM task design &

2022-10-10 13:06:24 There are many tasks that LLMs are doing great at and for which scale helps a lot, but it's really questionable to claim these models are achieving some generalized "human-level" linguistic competency, esp when the vast majority of such tasks don't measure anything close to that.

2022-10-10 13:06:23 I don't think anyone disagrees that you can get an LLM to beat a human rater on some set of arbitrary challenges and in this case, yes ofc those wins can be attributed to scale and some of those tasks are quite practically interesting &

2022-10-07 16:21:20 RT @StanfordHAI: Last chance to submit ideas by Oct. 10 The $71K #AIAuditChallenge invites individual researchers or teams to submit…

2022-10-05 15:19:44 @leeahdg It's a new guideline from the WH. Details here: https://t.co/samDHveWsd

2022-10-05 04:37:51 A great thread on the latest FDA guidance from @kdpsinghlab, who has himself been involved in calling out the risks of some of these AI/ML-enabled healthcare products that have been on the market for far too long without any proper regulatory scrutiny. https://t.co/1fpOQlpslf

2022-10-04 21:08:19 @MarkSendak Fair - but it's an important shift in comms imo. There was never going to be a one-size-fits-all model for this problem, and I think the investigations cornered them into admitting this.

2022-10-04 20:38:01 The new AI Bill of Rights is exciting - it's been difficult to get those in power to make such strong commitments. However, I admit I'll be *most* excited to see the first instances of recall, first successful cases for recourse, etc. Those concrete actions will be the real win!

2022-10-04 20:20:06 This is still my favourite kind of AI-related news: companies being cornered into taking concrete actions (ie. updating or recalling products/comms) in response to AI accountability efforts - in this case, empirical investigations into Epic's Sepsis tool. https://t.co/e6M9tYK8Iy

2022-10-04 19:44:52 @hannahsassaman Wow, this is so great to see - go Fabian!

2022-10-04 19:37:03 @Aaron_Horowitz @Wenbinters lol you and your supreme court rants yeah this is objectively interesting though - the question of intent seems pretty central to US anti-discrimination law, what a horrible precedent

2022-10-04 19:32:40 @schock lol love your hot takes :)

2022-10-04 19:17:21 @Wenbinters @Aaron_Horowitz Thanks!

2022-10-04 19:05:05 @Aaron_Horowitz What case is this about?

2022-09-30 22:19:26 @LeonYin @LoebAwards @adrjeffries @elarrubia @JuliaAngwin @themarkup Whaaatt- this is huge, congrats!! Very well deserved

2022-09-29 20:11:48 @_alialkhatib @schock @jovialjoy Thanks for hyping up the work, Ali! Glad you enjoyed the paper.

2022-09-29 20:10:02 RT @timnitGebru: "Open source software communities are a significant site of AI development, but “Ethical AI” discourses largely focus on t…

2022-09-29 20:09:52 @timnitGebru @FAccTConference lol @Abebab this is the Facct paper I was just talking about! Was just about to send it to you aha

2022-09-27 01:57:18 @adjiboussodieng @sh_reya Yeah perhaps we're talking over each other - though at least the "externally managed" bit of things seems to be what Kaggle does... Ie. https://t.co/6dDjRua7yN But yeah if I'm misunderstanding, let's just leave things here lol

2022-09-27 01:35:38 @adjiboussodieng @sh_reya But they also observed this with a lot of other well-used Kaggle datasets in the Roelofs et al paper... Not sure if it's exactly what you're looking for, but probably worth checking out as a starting point!

2022-09-27 01:30:01 @adjiboussodieng @sh_reya Yeah that's what you'd observe if you were "overfitting" performance on a given benchmark. We don't see that experimentally happen in this case, since the ranking of models on the validation set ultimately still matches the model ranking on the test set (at least with ImageNet)

2022-09-27 01:22:48 @adjiboussodieng @sh_reya Yeah, the Roelofs et al paper is speaking to that case - I'd start there!

2022-09-27 01:20:16 @adjiboussodieng @sh_reya That being said, the Roelofs et al paper is discussing that static case, about when a benchmark is no longer useful (ie when we overfit to a static data benchmark)

2022-09-27 01:18:10 @adjiboussodieng @sh_reya As in you want to improve the static case? "Externally managed and updated often" is often only applicable when the data changes, no?...

2022-09-27 01:16:09 @adjiboussodieng @sh_reya They talk in those papers about overfitting from test set reuse? Perhaps I'm not getting what you're talking about?

2022-09-27 01:14:52 @adjiboussodieng @sh_reya A lot of the conversation on streaming eval in ML Ops can be seen as an alt to the static data benchmark paradigm. It's different from the adversarial benchmark setting of something like Dynabench.

2022-09-27 01:01:43 @adjiboussodieng @sh_reya Ultimately though, there's evidence that benchmarks are at least internally valid measurements &

2022-09-27 00:58:37 @adjiboussodieng There's a lot about the ML evaluation paradigm that can be improved - we've written about it here: https://t.co/XwEsptquC8 + I like @sh_reya's take on one way forward here: https://t.co/zUkRmCYvJO

2022-09-25 09:03:28 RT @natashanyt: A fascinating new study in Science details how LinkedIn ran social experiments on 20 million users over 5 years.It shows h…

2022-09-21 19:13:54 h/t @MicahCarroll for flagging this for me, and congrats to the team @mozilla for such an impactful audit study! More details here: https://t.co/telvA80zof

2022-09-21 19:13:53 Analysis of 567 million YouTube video recs from ~23k users revealed that participatory controls (e.g. dislike button, "not interested" pop-ups) are effectively useless - the most one can do to remove unwanted recs is...removing a video from watch history! https://t.co/d6e8HSHLXc

2022-09-21 00:22:28 @ZeerakTalat @mayameme @drlulzzz Wowowow! Congrats!

2022-09-20 19:36:55 @KLdivergence @RiceUniversity @Dr_TalithiaW forever young ~

2022-09-20 19:01:25 ICYMI Mozilla is funding the development of AI audit tools! For those in the algorithm audit space, this is a great way to access the resources (financial, and otherwise) to build or develop your projects. Please Apply! Applications close on October 5th: https://t.co/lOlIV9HilH

2022-09-20 14:15:05 @KLdivergence @RiceUniversity @Dr_TalithiaW Congrats, Kristian! lol you ARE young what aha

2022-09-20 14:08:47 RT @annargrs: #NLPaperAlert #COLING2022Machine Reading, Fast And Slow: When Do Models "Understand" Language? TLDR: instead of claiming…

2022-09-19 00:08:50 RT @SymposiumML4H: @beenwrekt speaks at @SymposiumML4H this year. Common ML assumptions do sometimes end up de-facto ML laws. Ben's track-r…

2022-09-16 19:55:14 @LauraEdelson2 Congrats!

2022-09-11 02:35:42 RT @kurtopsahl: The WSJ has written up a nice obituary for Peter Eckersley, recognizing his great work encrypting all the things, and being…

2022-09-09 01:51:23 RT @neerjathakkar: Our ECCV ‘22 paper “Studying Bias in GANs Through the Lens of Race” is now out! https://t.co/NWojyXq2yV This work was do…

2022-09-08 20:58:26 @Miles_Brundage So sorry for your loss. Hope you can get some rest and the space you need.

2022-09-07 21:02:55 Maybe it's a coincidence but in at least both of those cases, the tech was gravely under-vetted, failing to hold up to even the most minor form of external scrutiny. You would think for something so critically important to so many people, there would be more effort in evaluation.

2022-09-07 20:57:39 Such a great &

2022-09-07 19:27:01 @mdekstrand @jjoque @1roboter @Abebab @FrankPasquale @kaiy1ng @alexhanna @WolfieChristl @sayashk @random_walker @jw_lockhart @ShobitaP @LinaDencik @az_jacobs @stalfel @hypervisible @benzevgreen @geomblog @gleemie @danmcquillan @ProfFerguson @AngeleChristin +1, thanks for sharing - this looks great!

2022-09-07 19:25:39 RT @1roboter: Accuracy claims are also rhetorical tools to convince others that opaque algorithms work. In a new paper, I unpack how high a…

2022-09-07 14:09:26 RT @phillipdawson: For years I've been trying to get any proctoring company to agree to a study where I try to cheat. None have agreed. I'v…

2022-09-06 16:25:29 @mer__edith @signalapp Congrats, Mer! Such a good fit for your skillset!

2022-09-03 22:44:06 It's crazy to think that so many of the things we talked about then are making their way into the real world now. And I know as a fact there was so much more he still wanted to *do*...My condolences to his family and loved ones - this certainly feels like he's gone too soon

2022-09-03 22:38:09 I'm shocked &

2022-09-02 08:50:03 @Miles_Brundage @natolambert @_joaogui1 @negar_rz @bakztfuture lol I think this is the paper you're referring to: https://t.co/nXiIE6ELAn

2022-08-31 23:29:20 RT @Carlos_MFerr: If machine learning models and code are two different things, why should the former be governed by licensing mechanisms d…

2022-08-31 15:26:39 @mattierialgirl @timnitGebru @MilagrosMiceli Yeah, Timnit's advice is the best I've heard for dealing with this: "live as though this is the rest of your life" - ie. "Would you want to live the rest of your life this way?" That advice woke me up from over-doing it while I was at a startup, guess it's time to re-visit that

2022-08-31 14:42:51 @timnitGebru @mattierialgirl lol step by step

2022-08-27 20:54:48 @morgangames @andrewthesmart Lol I'm not actually talking about only myself here or any personal concerns re:productivity - a lot of students struggle to figure out a way to step away from things responsibly. What concerns me is how difficult it can be for many of us to navigate such requests in academia.

2022-08-27 19:10:34 @IAmSamFin For sure, but I really don't think toxic advisors are the main reason most people struggle with this. Like I said, my advisor is great! It's just genuinely much harder to set boundaries in an unstructured environment. It just takes a lot of communication to navigate responsibly.

2022-08-27 19:02:47 @IAmSamFin *env, as in environment

2022-08-27 19:02:26 @IAmSamFin Another challenge here is the pseudo-voluntary nature of everything. Technically anything is permissible but it's not all equally acceptable or well received. You do have real responsibilities, not just to your advisor but many others as well. It's just a tricky wnv to navigate.

2022-08-27 18:43:22 @IAmSamFin Yeah, I also figured that perhaps what you were really asking more about was how consequences would differ from industry vs what are the consequences in general

2022-08-27 18:39:15 @IAmSamFin In academia, externally imposed and less flexible deadlines tend to make things harder to navigate. And the lack of structure puts a lot of emphasis on personal responsibility, which can make it seem like some kind of personal failure to take a step back, even for good reason.

2022-08-27 18:13:45 @IAmSamFin I'm also not talking about taking a day or an afternoon off or working away from home - ofc many have that flexibility there. It's about needing to halt project contributions completely for an extended period of time and needing the grace to miss the many, frequent deadlines.

2022-08-27 17:57:38 @IAmSamFin Yeah, I also have a great advisor - but not everyone else does. And even then it can be tricky to communicate about this to the plethora of other stakeholders you're involved with. Also depending on what stage you're at, it's hard to do so without facing professional consequences.

2022-08-27 17:46:48 RT @dcalacci: one part of the gigaverse: auditing the pay algorithms that gig platforms use. listen to the latest @radiolab ep to hear how…

2022-08-27 17:41:35 @sh_reya Yeah - I worry so much about being perceived as disrespectful, lazy etc especially in moments when I know I'm struggling to communicate. It's so hard to get people to understand that you're actually unavailable, and not just trying to escape responsibility.

2022-08-27 17:30:07 @sh_reya Wow, love that you've been able to figure out what works for you - something else I notice about your approach is that you're on when you're on and off when you're off. That's something I want to lean more into and I think it'll help with setting boundaries &

2022-08-27 17:24:33 Personally still learning how to navigate such moments responsibly, but also deeply uncomfortable with current norms, many of which are institutionally reinforced. People shouldn't have to rely so much on the empathy and kindness of individual actors to get the space they need.

2022-08-27 17:24:13 @KarlTheMartian Yeah, ideally it does get somewhat easier over time though - as I get more familiar &

2022-08-27 16:50:45 Honestly, the scariest thing so far about academia for me has been how difficult it can be in many cases to truly take time off. And not just for fun vacation purposes - but even for important life events, family emergencies, health reasons, etc.

2022-08-25 15:51:44 @random_walker Congrats to you both! This book is sorely needed.

2022-08-25 15:49:07 RT @jshermcyber: Highly interesting and important paper by @schock, @rajiinio, and @jovialjoy published June 2022 on the idea of algorithmi…

2022-08-22 21:33:40 RT @emilymbender: Soo.... the Stable Diffusion model is now available (incl weights) for download from HuggingFace. On the plus side, it's…

2022-08-22 11:59:53 RT @danish_c: The exercise of developing a RAIL license at the @BigscienceW opened up interesting real-world questions -- what is the artif…

2022-08-18 11:48:09 RT @ang3linawang: This summer I went to my first two in-person conferences in grad school, FAccT and ICML, and you’ll never believe what ha…

2022-08-16 07:57:32 RT @brandonsilverm: This is a *great* write-up and a really useful overall frame for thinking about different transparency options for lawm…

2022-08-16 03:10:25 Yikes. https://t.co/VjekzrauSR

2022-08-14 22:15:23 @klakhani @janusrose @andrewthesmart @andrewthesmart made them for FAccT 2020 and gave a couple out (glad I have one!) - not sure what the status is now though, seems like there's so many copies floating around at this point

2022-08-14 20:51:17 @janusrose @andrewthesmart please, you've got to start selling these!! it goes viral every few months

2022-08-14 20:31:55 RT @kenarchersf: This is spot on. The “Closing the Accountability Gap” paper (@mmitchell_ai, @rajiinio, @timnitGebru et al) calls for FMEA…

2022-08-12 23:43:18 RT @RoxanaDaneshjou: I've talked a lot about the lack of representation in AI datasets in dermatology and the concerns around algorithm bia…

2022-08-12 06:16:04 @SubhankarGSH Yeah, I should have clarified I meant *external* funding options in my original tweet - folks will often apply to external fellowships to increase their independence and stipend

2022-08-11 22:44:40 @richardson_m_a Not sure if this is what you're looking for, but we attempted to taxonomize algorithmic failures in practice observed here: https://t.co/zrQTyxxBwC

2022-08-11 16:44:02 RT @sayashk: On July 28th, we organized a workshop on the reproducibility crisis in ML-based science. For WIRED, @willknight wrote about th…

2022-08-10 16:13:31 @AmandaAskell Though I don't know a lab or student doing AI work right now that would refuse an offer for free or discounted compute :) - it's just not the typical reason people seek out a fellowship.

2022-08-10 16:11:37 @AmandaAskell Compute is usually managed by the lab and so the PI raises for that typically. In CS, people tend to seek fellowship funding in order to cover living costs and tuition independently of their PI, so they can have some more flexibility to work on and explore their own projects.

2022-08-10 16:02:57 @Javi_Rando Thanks for sharing!

2022-08-09 17:04:56 @iddux I am an international PhD student in the US and this isn't true? Funding can come from a variety of sources, including fellowships, though there are various nationality restrictions depending on the source.

2022-08-09 16:31:16 @SpeenDoctor_ Depends on the culture of the discipline and the department - of course your school provides some funding, but it's common to apply for fellowships in CS in order to gain some research independence. Many international CS phd students feel like their only options are from tech cos.

2022-08-09 16:26:21 @tdietterich This is a reference to the fact that many funding options from foundations are either phd fellowships that are exclusive to US citizens / permanent residents or are geared towards practitioners and so only provide funding for the short term (ie. one or two years).

2022-08-09 16:24:27 To clarify: I'm talking about external fellowship funding options for international students in the U.S. Major government or foundation fellowships that fund multiple years of your phd are exclusive to U.S. citizens or permanent residents, the exception being tech fellowships.

2022-08-09 16:22:23 @tdietterich I mean multi-year funding for the duration of a phd program (ie. longer than one year or two)

2022-08-09 16:00:25 It's frustrating that the only really viable long-term funding options for international CS PhD students in the U.S. seem to be the fellowships coming from tech companies. Makes things especially difficult for anyone trying to do meaningful tech accountability work.

2022-08-08 16:04:04 RT @emilymbender: Why is "AI" the only thing we describe that way? No one says: This airplane has a superhuman flying ability! This jackham…

2022-08-07 13:18:30 RT @AIResponsibly: NEW WORK: Our interdisciplinary #audit of #hiring #AI is out! Watch: https://t.co/OdOcYGRPlT and read our @AIESConf pa…

2022-08-04 06:37:31 RT @jackbandy: It me! #AIES

2022-08-04 06:37:09 RT @shakir_za: Closing day of @AIESConf Amazing set of lightning talks from students covering all topics from fairness, debiasing, to edu…

2022-08-03 09:41:28 RT @NicolasPapernot: 3 weeks to go until the abstract registration deadline for the first IEEE conference on Secure and Trustworthy ML (SaT…

2022-08-03 01:01:20 RT @BenDLaufer: An amazing keynote by @karen_ec_levy at @AIESConf: “Automation and surveillance aren’t substitutes. They are complement…

2022-08-02 11:53:02 @MarisaTPP @KerryMackereth @DrEleanorDrage @AIESConf Yes, loved this work as well! Raises so many important questions!

2022-08-02 09:31:03 RT @MarisaTPP: Hiring Tech - interesting study on how companies market their products. They promise objective hiring for a more diverse wor…

2022-08-02 09:30:26 RT @ziebrah: #AIES paper presentation today! this is work done while at @itsArthurAI last summer. we frame it as an "aligning of conversati…

2022-08-02 09:29:03 This conclusion was a shout out to @Abebab's great paper "Algorithmic injustice: a relational ethics approach". So sad she couldn't be here! https://t.co/hdHXEN1q4S

2022-08-02 09:27:10 RT @mjpaulusjr: Great opening talk at #AIES by @rajiinio on algorithmic accountability and the role of audits as part of the practical shif…

2022-07-23 23:30:48 @zacharylipton @shiorisagawa I think @rtaori13 &

2022-07-23 22:36:07 Yep. I find it hilarious when people try to blame the "data" for harmful outcomes (eg. bias, inaccuracies)...As if the data is some disembodied object and not in fact the direct result of the many choices made by those very engineers and researchers. Just take responsibility! https://t.co/i6tihptAPv

2022-07-23 22:30:29 RT @iAyori: Hosted a spirited panel on this years ago. The number of engineers, data scientists and researchers who felt confident blaming…

2022-07-22 23:50:36 RT @tzushengkuo: Couldn't ask for a better way to wrap up the #DataPerf workshop with a panel on the future of data-centric AI!Thanks to…

2022-07-22 19:43:01 @npparikh yeah, actually just added it to the reading list lol

2022-07-22 19:05:27 @npparikh nice!

2022-07-22 13:07:11 @struthious Thanks for reading &

2022-07-22 12:53:00 @victorveitch @thejonullman yeah, I agree, honestly. Criticism doesn't have to be cruel. If communicated appropriately and kindly, Twitter is fine.

2022-07-22 12:37:31 Details of the workshop can be found here! Grateful to the organizers for creating a space to discuss this topic. https://t.co/CLCxlTB9Jc

2022-07-22 12:20:55 Data should not be considered a given, an afterthought or "someone else's problem" in ML - it's part of what the field needs to be actively thinking about. And I mean beyond hijacking it for optimizing performance - lots of issues beyond that to address!

2022-07-22 12:06:35 I'd be lying if I said this isn't at least a little personal. I'm frustrated - it's been years of discussion on this and ML people will still resist taking basic responsibility for the ethical decisions they make as researchers working with human data. https://t.co/0VklaKDeiO

2022-07-22 12:01:55 Giving a talk later today at the DataPerf workshop @icmlconf. ML researchers often view themselves separately from the eng issues they perceive as the cause of downstream harms - in reality, their decisions, esp when it comes to data, are just as responsible for these problems. https://t.co/uRXwyyZZVz

2022-07-22 11:46:11 So much "AI is unlocking enormous opportunities", "AI’s tremendous potential" for "societal benefits"

2022-07-20 21:28:37 RT @mgahntz: Hi @OpenAI, now that you're rolling out DALL•E at scale, how about a bias/toxicity/harmful content bounty program to go along…

2022-07-18 19:02:50 RT @jackbandy: Anyway if you know of any jobs starting Fall 2023, let me know!Also if you know of any land and/or a house I could have st…

2022-07-18 18:01:21 RT @random_walker: ML is being rapidly adopted in the sciences, but the gnarly problem of data leakage has led to a reproducibility crisis.…

2022-07-18 17:52:24 RT @random_walker: So we’d anticipated a cozy workshop with 30 people and ended up with 1,200 signups in 2 weeks. We’re a bit dazed, but we…

2022-07-18 15:36:48 @mmitchell_ai @huggingface @mkgerchick @_____ozo__ Wow, amazing work!!

2022-07-18 12:39:57 @PolisLSE Source here: https://t.co/LEeB0NoxlN

2022-07-18 12:38:58 Keep getting reminded about the importance of data journalists in algorithmic audit work. For instance, they regularly design &

2022-07-18 10:50:36 RT @rajiinio: So proud of @paula_gradu for all the work she's been doing to bring @WiMLworkshop to @icmlconf this year. If you're atten…

2022-07-18 10:46:36 @jackclarkSF So sorry you went through this! I cannot imagine how difficult it must have been to endure. So happy to hear you had the support of your partner and friends to make it through safely. Our bodies are so important yet so fragile!

2022-07-18 10:34:05 RT @mikarv: No legislation envisaged, just v general "cross-sectoral principles on a non-statutory footing". UK gov continues its trend of…

2022-07-18 10:31:18 RT @OfficeforAI: Establishing a pro-innovation approach to regulating AIA new paper published today outlines the Government’s approach…

2022-07-17 00:55:25 So proud of @paula_gradu for all the work she's been doing to bring @WiMLworkshop to @icmlconf this year. If you're attending, please make sure to check it out! Cannot stress how important it is to have &

2022-07-16 23:35:07 RT @WiMLworkshop: The 3rd WiML UnWorkshop at ICML is just a few days away! All of this is possible thanks to our sponsors @Apple @DeepMindA…

2022-07-16 22:13:50 RT @rosanardila: Important discussion about the reproducibility crisis of ML in science. Particularly when models are later used in medical…

2022-07-14 16:37:21 RT @oliviasolon: Wow. Per this analysis, 30% of a Google dataset intended to categorize emotions in comments (for training AI) mislabeled.…

2022-07-13 03:31:42 @danish_c @Ket_Cherie omg it took me a minute to realize what the confusion was - think Cherie quite reasonably thought this was an alias for a contract worker from Denmark aha

2022-07-13 03:28:22 RT @YJernite: Responsible AI Licenses (RAIL) rely on behavioral use restrictions to provide a legal framework for model developers to restr…

2022-07-13 00:54:06 It's also incredible to see how much the RAIL team has evolved their approach and refined the license over the years. I remember when it was just a draft markup file - now it's a whole organization. Those guys really took in all the feedback &

2022-07-13 00:51:47 Licenses have always struck me as an interesting approach to articulating and possibly enforcing some clear boundaries around what the model should be used for. It's a way for model developers to express their intent and have some legal leverage in the case of misuse.

2022-07-13 00:49:10 It's been interesting to read about Bloom, the open source 176B-parameter large language model that was just released today. Rather than controlling use via an API product, they released the model w/ RAIL (ie. the "Responsible AI License") to minimize misuse: https://t.co/OxpnIv6k4e https://t.co/cSotWeMlqi

2022-07-11 18:21:47 @umangsbhatt @HCRCS @hseas @hima_lakkaraju @MilindTambe_AI you and @hiddenmarkov should hang out!

2022-07-11 17:08:59 RT @sebkrier: Thrilled to announce the @StanfordCyber and @StanfordHAI $71K multi-prize #AIAuditChallenge, designed with @MarietjeSchaake…

2022-07-11 14:32:08 @hiddenmarkov So sorry :( Hope your family is staying safe!

2022-07-09 22:38:52 @IEthics @Aaron_Horowitz Wow, this is an incredible effort. Hope this has been going well!

2022-07-08 19:15:23 @luke_stark lol thank you for your service tho, now we finally have something to cite instead of repeating the same points over and over

2022-07-08 17:52:38 @ruthstarkman Thanks - this is incredibly kind!

2022-07-08 17:48:57 @andrewthesmart @Aaron_Horowitz Yep but also philosophers and lawyers and social scientists not interested in sitting with the technology to learn how it works. The gap goes both ways imo.

2022-07-08 17:46:56 @CGraziul Yep I think it's more about having productive collaborations &

2022-07-08 17:42:32 @Aaron_Horowitz Totally agree, which is why it's helpful to have venues like @FAccTConference &

2022-07-08 17:34:25 Sat through so many meetings like this. It's incredibly frustrating how bad the field is at actual interdisciplinary engagement because AI's problems will require actual dialogue between disciplines to solve, not one group trying to absorb a SparkNotes understanding of the other.

2022-07-08 17:29:46 @mikarv @random_walker The "they are not reading legal scholarship on the topic" point is definitely true, and a more generally true point as it relates to interdisciplinary engagement in CS. When someone complains about social science work or regulation, I always ask "Did you read it?" Spoiler: no.

2022-07-08 17:25:30 When consulted on policy, technologists bring in proposals that are unrealistic or ineffective as it relates to how law actually works, while lawyers come in with a distorted &

2022-07-08 17:15:09 RT @random_walker: We like to complain that lawmakers don’t understand tech, but let’s talk for a minute about technologists who don’t unde…

2022-07-08 17:12:47 @DrZimmermann @UWMadison @LeonieEMSchulte Yay, Annette! Congrats!!

2022-07-08 04:56:08 RT @weidingerlaura: JOB ALERT Very, *very* excited that we're hiring for a new Ethics Research Associate at DeepMind - join our team of…

2022-07-08 04:48:20 @LeonDerczynski This should be reported directly to @icmlconf @NeurIPSConf cc:@shakir_za

2022-07-07 20:50:10 RT @natematias: Dream job alert for data scientists who want to work on consumer protection

2022-07-06 21:24:50 @realCamelCase lol I feel your pain though - any interdisciplinary endeavor always feels like it requires so much more learning

2022-07-06 21:22:38 @realCamelCase not to be that girl but being at the intersection literally means you do both https://t.co/KITHdHgaCH

2022-07-06 20:24:37 RT @kchonyc: “What I cannot review, I do not understand” #NeurIPS2022 14.39%

2022-07-06 14:37:48 RT @EPirkova: We just published a very first introductory guide into the #DSA! If you wonder who or what the law will regulate, how indivi…

2022-07-05 21:53:49 RT @brianavecchione: I'm on the job market!!! Looking for industry or foundations that intersect AI auditing/accountability, their social…

2022-07-01 18:38:29 First they came for... https://t.co/M28wpLHGLe

2022-07-01 18:34:57 @sh_reya Go Shreya!!

2022-07-01 18:33:51 RT @random_walker: There’s a reproducibility crisis brewing in almost every scientific field that has adopted machine learning. On July 28,…

2022-07-01 18:26:41 Ugh, officially losing control of my email if I owe you a reply from the last 2-3 months, I'm so sorry

2022-07-01 06:22:20 RT @FAccTConference: Ok #FAccT22 attendees we want to hear from you! Fill out our survey and help us figure out what worked and what didn't…

2022-06-30 23:50:05 RT @brandonsilverm: I've been offline for most of the last week but thought I'd jump in with a few thoughts about the article below. belo…

2022-06-30 15:08:45 RT @STS_News: Enjoyed this paper, "The Fallacy of AI Functionality," by @rajiinio, @ziebrah, @Aaron_Horowitz, and @aselbst. Too often criti…

2022-06-30 04:01:42 RT @STS_News: I enjoyed this WSJ piece, "Tech Giants Pour Billions Into AI, but Hype Doesn’t Always Match Reality" This excerpt is the hea…

2022-06-30 01:35:35 @realCamelCase ahahahah he needs to be stopped for real

2022-06-30 01:34:36 @sh_reya

2022-06-30 01:33:47 @evijitghosh Nothing will ever compare

2022-06-30 01:31:43 @undersequoias @andrewthesmart @KLdivergence @Aaron_Horowitz

2022-06-29 18:12:49 @Abebab It is always ok to get rest and set boundaries Your health and well-being will always be more important than whatever is being demanded of you!

2022-06-29 17:03:49 RT @NexusOfPrivacy: Algorithmic Justice League audits the auditors (and why it matters from a privacy perspective)Today's Nexus of Privac…

2022-06-29 16:34:24 RT @_KarenHao: I wrote about a topic I’ve been itching to address for some time: how AI PR hype, coupled with increasingly flashy AI-genera…

2022-06-29 09:21:12 @negar_rz So sorry to hear! Hope you feel better soon!

2022-06-29 04:07:19 @KLdivergence @Aaron_Horowitz lol gotta photoshop Luca in there

2022-06-29 02:57:22 @KLdivergence @Aaron_Horowitz What are you talking about? You always look amazing

2022-06-29 02:26:26 RT @timnitGebru: If you missed @DocDre's keynote at @FAccTConference I highly recommend that you catch up. Belief, and our discourses abo…

2022-06-29 02:23:49 @Aaron_Horowitz @KLdivergence We were so happy and carefree... We didn't know what was coming no regrets tho https://t.co/xuFmGC0mhi

2022-06-28 00:31:30 RT @FAccTConference: We hope you had a great time #FAccT22. We will send out a survey about conference experience soon (including about you…

2022-06-27 22:23:52 @KLdivergence Dang hope you're feeling ok

2022-06-27 00:03:46 RT @macfound: Worth checking out, @AJLUnited's first field scan of the algorithmic auditing ecosystem, complete with recommendations for co…

2022-06-26 16:28:27 RT @justinhendrix: This week's @techpolicypress podcast: Peering Inside the Platforms• A conversation with CrowdTangle founder &

2022-06-26 16:25:25 RT @techpolicypress: This week's @techpolicypress podcast: Peering Inside the Platforms• A conversation with CrowdTangle founder &

2022-06-25 23:23:06 @yelenamejova @RERobertson Also check out @natematias's thread, which goes over the audit study's methodology and hints at some policy implications. The researchers conducted thousands of queries from 476 locations over 14+ weeks to discover this and were incredibly thorough.https://t.co/GXXUNZmi1D

2022-06-25 23:16:30 @yelenamejova @RERobertson Whatever your stance, it's problematic to have Google returning CPCs as the closest result for searches for reproductive care - CPCs are *not* healthcare providers, and *not* abortion clinics. It's a dangerously misleading search result.Details here: https://t.co/7ECX5ZMeW6

2022-06-25 23:12:40 @yelenamejova @RERobertson "Crisis pregnancy centers" are NOT healthcare providers. They lure vulnerable women in &

2022-06-25 23:02:58 Now seems like a good time to remind people about this audit study done by @yelenamejova, Tatiana Gracyk &

2022-06-25 04:58:26 Thank you so much Seth for your heroic service and all the energy you brought to the conference (and to karaoke) #FAccT22 would simply not have happened without you! https://t.co/NLO5apasKz

2022-06-25 04:56:38 RT @KLdivergence: Huge thank you to Seth whose efforts to make FAccT happen this year were nothing short of heroic. Legend status

2022-06-25 04:54:40 @__lucab @evijitghosh @seanmmcdonald

2022-06-24 22:41:33 @frobnosticus @seanmmcdonald Ahahha it was a menu item called "world best pizza" and yes it was delicious lool

2022-06-24 22:28:09 I only really did one thing in Korea and that was EAT:@seanmmcdonald https://t.co/3hE46exSYd

2022-06-24 21:54:44 RT @aylin_cim: “Markedness in Visual Semantic AI” w/ @wolferobert3 today #FAccT22The default person in CLIP, the language-vision AI model,…

2022-06-24 21:45:15 @dallascard @FAccTConference Thanks for capturing this, Dallas! And it was lovely meeting you this week

2022-06-24 21:43:42 I had so much fun and learnt more than I could imagine this week! Thank you so much to those that made this happen, those that shared their work, those that commented on ours. Every time I attend this conf, I leave hyped &

2022-06-24 21:32:15 @KLdivergence Thank you for your service - lol now please get some rest

2022-06-24 21:25:55 RT @megyoung0: It is impossible to overstate the triumph that was this year's FAccT conference.THANK YOU and congratulations to @sethlazar…

2022-06-24 21:21:46 RT @schock: I'm on @Marketplace talking about our new study, "Who Audits The Auditors" just launched at #FAccT2022, w/ @jovialjoy @rajiinio…

2022-06-24 04:10:25 @thegautamkamath yeah I was told something about the scale of papers submitted making it difficult to submit each paper to a plagiarism checker

2022-06-24 04:00:02 Jokes aside, plagiarism is actually such a ridiculously prevalent problem in the machine learning community. Conferences should at minimum check for this at submission or prior to publication. https://t.co/BWv7DNs6qc

2022-06-24 02:56:44 RT @RebekahKTromble: Let's be clear. The system proposed to replace CrowdTangle is--so far--terrible. But most importantly, it's inaccessib…

2022-06-24 02:47:28 @_KarenHao @wsisaac @png_marie @shakir_za @FAccTConference Question about dealing with AI hype and @_KarenHao responds by saying researchers with meaningful perspectives should put themselves out there. + about Chinese context: "Researchers are worried about being critiqued in the West but also worried about getting flak from the govt"

2022-06-24 02:41:37 @_KarenHao @wsisaac @png_marie @shakir_za Karen notes on @FAccTConference weaknesses: "There seems to be a lack of Chinese research participants, &

2022-06-24 02:38:47 @_KarenHao @wsisaac @png_marie @shakir_za But clarifies that the government participation in China is "sweet and sour", overreaching in certain ways that are inappropriate, while also providing certain reasonable regulations that have just yet to arrive in Western contexts.

2022-06-24 02:36:49 @_KarenHao @wsisaac @png_marie @shakir_za On China: "There is so much more optimism about what the technology can do for them. Much less skepticism... it's a very different conversation in this context."+"In China, govt is a huge part of the conversation - in the US, we talk about not having enough govt participation."

2022-06-24 02:34:24 @_KarenHao @wsisaac @png_marie @shakir_za @wsisaac notes how journalism is better positioned than even academia to tell these personal stories, and bring some of these observations into mainstream consciousness. More on Karen's reporting here on colonialism &

2022-06-24 02:32:56 @_KarenHao @wsisaac @png_marie @shakir_za She discusses what it meant to sit w/ data labelers in *crisis* in Argentina, who wake up &

2022-06-23 19:37:51 RT @Abebab: A Sociotechnical Audit: Evaluating Police Use of Facial Recognition, Evani Radiya-Dixit #FAccT22audits on:1)Legal standards…

2022-06-23 15:33:02 RT @fborgesius: 'CounterFAccTual: How FAccT Undermines Its Organizing Principles', presented by @bengansky &

2022-06-23 15:30:46 RT @MarthaCzernusze: Tuning in to @AJLUnited’s Who Audits the Auditors at an Internet cafe! #FAccT2022 #FAccT22 https://t.co/1UvJWEtrvy

2022-06-23 15:21:03 @chels_bar Yeah, noticed this as well and reached out to an author, @Aaron_Horowitz about this! I don't think the oversight was malicious - it was mentioned that they actually weren't aware of your paper. Hopefully they can update the text with a citation soon.cc:@KLdivergence, @mmeyer717

2022-06-23 11:29:22 RT @ClarissaRedwine: Holy moly, @megyoung0 gave an amazing talk at #FAccT2022 that had people on their feet https://t.co/M7qQA0TiZJ

2022-06-23 11:20:42 RT @JesseDodge: excellent talk by @mmitchell_ai at @facct on data governance!https://t.co/492r3scuEa https://t.co/4GnNH0tp8o

2022-06-23 06:19:37 RT @rajiinio: @ziebrah @Aaron_Horowitz @aselbst @schock @AJLUnited @jovialjoy @s010n @RosieCampbell + After the events of this week alone,…

2022-06-23 06:07:16 Whoa it's incredible listening to the presentation about this project, which is effectively an implementation of @chels_bar's suggestion in the "Studying Up" paper (https://t.co/QQBBSnoByT): to create a risk assessment of those in power (judges) and not defendants! https://t.co/nhGVAkGwjQ

2022-06-23 05:41:06 RT @KLdivergence: Coming up soon, mikaela meyer’s @mmeyer717 talk in room 202 at #facct22. https://t.co/cRLgC07KT5

2022-06-23 05:40:52 RT @KLdivergence: Risk assessment instruments are used in the criminal justice system to estimate 'the risk a defendant poses to society'.…

2022-06-23 05:39:28 @realCamelCase I'm disappointed your favorite continent was not Africa, though I'm happy for the mention loool

2022-06-23 04:58:35 RT @fborgesius: Really like this panel &

2022-06-23 04:56:54 @ziebrah

2022-06-23 04:42:55 RT @Abebab: The fallacy of AI functionality, @rajiinio &

2022-06-23 01:10:50 Fave #FAccT22 moment #ootd https://t.co/QJW2M0oenR

2022-06-23 01:06:49 @Combsthepoet omg

2022-06-23 01:06:15 Such a good session. Technologists "co-designed a tool - an SMS chat bot - that collected &

2022-06-22 23:36:47 There's already been great discussion about this software from the legal side (see Katherine Kwong's great work in @HarvardJOLT: https://t.co/J5VJQkHrxg)

2022-06-22 23:36:46 Super excited to attend Angela's presentation of a new audit framework for evidentiary statistical software (eg. DNA profiling algos, etc). These models determine the diff between freedom &

2022-06-20 16:34:20 @aylin_cim Sorry to hear hope you feel better soon!

2022-06-19 12:21:34 RT @Borhane_B_H: Folks in the #AIAuditing space, this #FAccT2022 paper by @schock @rajiinio &

2022-06-18 23:15:05 RT @FAccTConference: Our #FAccT CONFERENCE GUIDE is available here: https://t.co/kykwPyeeHq Check it out for useful tips about both the in-…

2022-06-16 18:48:55 @schock Also @schock is so careful with methodology -- I learnt a lot just hanging around and observing the care with which this investigation was approached. Glad to have been able to contribute anything at all

2022-06-16 18:44:24 @Borhane_B_H @schock @jovialjoy Thanks for reading!

2022-06-16 18:44:11 I'm so proud of this work, led by @schock. Tracked down an interdisciplinary cohort of algorithmic audit practitioners to determine what things actually look like on the ground. Unexpected trends were discovered through interviews and survey responses - an essential resource! https://t.co/LtE4fy9HDA

2022-06-16 18:07:36 @tejuafonja @Onyothi So happy to hear, hope she has a great experience at CVPR!

2022-06-16 15:15:04 RT @FAccTConference: REMINDER: our conference platform is live! https://t.co/MWM3ldWDAj Live scheduling begins on June 21 (KST). But please…

2022-06-16 14:12:45 RT @mathver: Google Search, Youtube, Facebook, Instagram, Twitter, TikTok, Microsoft Bing and Linkedin make significant new commitments to…

2022-06-15 20:08:56 @IreneSolaiman @jackclarkSF Lol Irene, so dramatic But seriously, hope you feel better soon, Jack!

2022-06-15 14:43:55 RT @DrZimmermann: Getting ready to to Seoul for @FAccTConference! Can’t wait to FINALLY hang out in person with my amazing Publicity…

2022-06-15 13:40:28 RT @FAccTConference: BOOOOM our conference platform is live! https://t.co/MWM3ldF2IL Live scheduling begins on June 21 (KST). But please he…

2022-06-14 22:58:35 RT @KLdivergence: #FAccT2022 PC Co-Chair here: seeking volunteers to session chair for all sessions on Day 2 of the conference. Responsibi…

2022-06-13 06:21:54 RT @rajiinio: Once we characterize AI as a person, we heap ethical expectations we would normally have of people - to be fair, to explain t…

2022-06-12 03:52:19 This is not a dig on those that work on this - it would just be nice to hear about other things, also.

2022-06-12 03:51:18 I wish we spent even 10% of the time being used to discuss large language models talking about literally anything else.

2022-06-11 21:27:14 @jeffbigham @AiSimonThompson @karpathy Yep, and in the deployment context, these evals are conveniently ignoring the impact of interactions, etc as well. This is one of the things that led me and @beenwrekt to write this on perhaps re-framing to the broader scope of external validity: https://t.co/oJcWQM7KBL

2022-06-11 20:27:48 @annargrs Oh, glad to hear! Excited to check that out :)

2022-06-11 20:27:08 @jeffbigham @AiSimonThompson @karpathy Also there's a big difference between human *performance* (ie. accuracy outcomes) and human *competence* on such benchmarks. For eg. humans are much more robust to distribution shift and this isn't well captured in evaluations at all: https://t.co/f0uNbxvbl1

2022-06-11 20:21:55 @jeffbigham @AiSimonThompson I was shocked to discover from this paper (https://t.co/QfehD0ow3h, where they actually develop a proper human baseline for ImageNet performance) that the former baseline for human performance on ImageNet was...just @karpathy LOL

2022-06-11 20:19:32 @annargrs BIG Bench seems like an incomplete solution -- a sea of under-specified &

2022-06-11 20:16:03 @annargrs There's a separate question to ask about *task design* though, where it's clear that not all datasets are evaluating the same model capabilities &

2022-06-11 20:06:52 @annargrs Interestingly, there's been a lot of recent work revealing that this isn't quite the case - the order of the models' performance is preserved ood (that is, even if we do eval on the same data, the best model is still the best model even on a new dist): https://t.co/CgKkGlTJod

2022-06-11 16:25:29 Many of ML's major benchmarks have already become obsolete. It's getting pretty urgent to re-think ML evaluation. https://t.co/9LkfZow69Z

2022-06-11 16:15:08 @mdekstrand Ah, will check this out! Thanks so much for sharing!

2022-06-11 15:55:52 The difference between statistical inference and prediction is so poorly explained to students in the classroom that the prevalence of these kinds of misconceptions is pretty unsurprising to me. Wondering if there's a good resource that adequately breaks down the distinction. https://t.co/9uth1y7jFN

2022-06-11 01:46:13 RT @yy: Check out "network cards" for documenting metadata (not only stats but also data generation process &

2022-06-10 18:45:53 Will be talking about algorithmic auditing next week! Excited for the conversation, please tune in if interested https://t.co/44Pprr8jAt

2022-06-10 18:45:20 RT @GMFDigital: Register now for our webinar, "Opening the Black Box: Auditing Algorithms For Accountable Tech," happening 6/15 at 11a ET.…

2022-06-10 18:44:31 @emilymbender @timnitGebru I still don't understand why the approach is to replace humans whole-cloth. There's so many sub-tasks that are low stakes &

2022-06-10 16:10:41 @alexhanna @LuizaJarovsky @EmeraldDeLeeuw This website is a good starting point on auditing specifically online platforms: https://t.co/8r3CSbhUcy, tho it's fairly outdated now. @d_metaxa led this more recent effort: https://t.co/DeTKnVLAt2+ @sapiezynski has developed a great syllabus, I'm sure he'd be happy to share.

2022-06-10 12:37:43 @Aaron_Horowitz honestly the most productive thing I've ever tweeted

2022-06-10 12:25:39 RT @n3ijoy: AI as the snake oil of the digital era. Let’s start pointing out the absurdity of many AI-based promises. Thank you @F_Kaltheun…

2022-06-08 15:50:39 RT @jennwvaughan: So excited I can FINALLY share our new work on machine learning practitioners' data documentation perceptions, needs, cha…

2022-06-08 07:40:44 RT @black_in_ai: From now until the 17th of June submit your travel grant applications to attend the Black in AI + Queer in AI Social @icml…

2022-06-07 15:25:49 RT @FAccTConference: Financial support alert we are offering (1) BANDWIDTH grants covering internet access costs

2022-06-03 22:05:18 RT @sethlazar: It has been pretty exhausting for everyone bringing @FAccTConference together but I am so looking forward to it! The program…

2022-06-03 17:38:30 @hipsterelectron lol no, I don't think you mansplained at all -- what you're saying makes sense. I didn't realize it was an actual idea worth implementing. If so inclined, I fully support you building this out somewhere aha

2022-06-03 17:36:47 @KarlTheMartian LOL no one would watch it, but this would give us a clear window into the human condition

2022-06-03 17:35:29 @randtke lol or not - this just happened to me, and I was both thrilled and the most detail oriented and nitpicky I have ever been

2022-06-03 17:19:56 An idea: a conference paper assignment matching system where you get matched to review the papers that cite your work lol

2022-06-02 20:50:09 @random_walker How do you manage this with collaborators in different time zones? I'd like to keep mornings open but find they are the easiest to fill because it's when people are more available to meet :(

2022-06-02 14:20:44 lol instead we do this: https://t.co/3T6XMMWPtr https://t.co/J6YumHENlz

2022-06-02 13:26:01 RT @botherder: We are looking for 5 people working at the intersection of human rights and tech to join our new Digital Forensics Fellowshi…

2022-05-31 16:49:44 RT @lvwerra: Evaluation is one of the most important aspects of ML but today’s evaluation landscape is scattered and undocumented which mak…

2022-05-31 16:41:40 I am so excited to learn from Irene!! https://t.co/qhuDPMye3Q

2022-05-31 16:30:11 This is my go-to cite for why we should actively *vet* AI vendor claims as part of the regulatory process. People are literally out there selling pseudo-science! For this kind of tech, it doesn't even make sense to talk about other problems like fairness. Just throw it away! https://t.co/xwVfS8o1QU

2022-05-31 16:25:45 @irenetrampoline Yayayy!

2022-05-31 16:19:35 @certifiablyrand Also I should probably admit that I was intentionally being a bit cheeky in the OG tweet, aha, and wasn't expecting to be taken as seriously as I was by everyone that replied. Don't mind this outcome though -- ended up being fairly informative for me!

2022-05-31 16:17:45 @certifiablyrand It seems there are many facets to EA, and the community I have the most exposure to in AI is quite forceful about their priorities (ie. "everyone should do x bc it does the *most* good"). Beginning to realize that's not always the case though, so will be thinking more about this.

2022-05-31 16:12:56 @certifiablyrand lol I see your point, and it's well noted. Yeah, I don't think my goal was to criticize the desire to do good, just the notion of optimizing for the "most" good, in a world where it's really hard to just minimize the harm one causes, and do any good whatsoever.

2022-05-31 14:55:37 @certifiablyrand Because I think it's important and interesting work! Like others said, there's nothing wrong with aiming to have positive impact, but the framing of an optimization problem with answers that are meant to apply to what *everyone* "should" be doing is where problems seem to arise.

2022-05-31 00:17:46 @OtterElevator @GiveWell yeah, no worries at all -- totally understood!

2022-05-30 20:22:08 @OtterElevator @GiveWell I'm not annoyed with anyone lol. I think what people here are saying is that there is no neutral objective to optimize. Doing the "most" good = "more saved lives" for you, but others may see it differently. Creating local community support networks, etc. are worthy positive goals

2022-05-30 20:12:09 @AmandaAskell I empathize with this, honestly. I think the problem some have is when the triaging decided upon by one group is imposed on others as the "most" good thing for everyone to be doing. That can become problematic, especially when that group does not adequately represent everyone.

2022-05-30 18:04:07 @IAmSamFin @MarkSendak @timnitGebru @emilymbender It's easier for some rather than others to "believe" in the potential of philanthropy, depending on who they are &

2022-05-30 18:00:52 @IAmSamFin @MarkSendak @timnitGebru @emilymbender Rich people don't pay their taxes, but hoard their wealth to spend as they please instead of contributing to shared resources. Even when "researched", it's an exclusionary and harmful practice -- "Winners Take All" is a critical resource here: https://t.co/042xwNjVzW

2022-05-30 17:57:16 @IAmSamFin @MarkSendak @timnitGebru @emilymbender lol no, you're fine - I think these interactions are quite productive! You've been one of a few to clearly articulate your use for EA in a way I can understand. I think the consolidation of wealth management into the hands of a few is actually *the* main issue with philanthropy

2022-05-30 17:35:55 @MarkSendak @IAmSamFin @timnitGebru @emilymbender Of course anyone can do as they please w/ their money, but there is something unsettling about encouraging / persuading those with these resources to all contribute towards a small number of causes, while ignoring the concerns of others that perceive such causes as possibly harmful.

2022-05-30 17:31:10 @MarkSendak @IAmSamFin @timnitGebru @emilymbender (4) is fine imo - my ideal notion of "cost-effectiveness" is determined democratically, esp. in the context of public funds. Little good is done by the assumptions of a few determining how resources should affect the many. When deciding on private funds, things becomes less clear

2022-05-30 17:27:42 @MarkSendak @IAmSamFin @timnitGebru @emilymbender Honestly, I don't know enough about EA to comment on (2) &

2022-05-30 16:35:39 @sadiaokhan Optimizing for doing the most "good" is great but I'm not sure one can do that independently of being aware of not causing new problems.

2022-05-30 16:34:17 @sadiaokhan This is a good point, tho I happen to disagree. I'm coming from a place where some will discount other people's work (ie. climate change) under the pretense of it being less "good" than perhaps what they are working on (ie. AGI), w/o adequately reflecting on the harms they cause.

2022-05-30 16:28:41 @sh_reya Yeah, I feel you. To be fair, I don't think it's just ego, tho. Ego exists but also academia breeds a certain kind of desperate insecurity that makes people act out irrationally &

2022-05-30 16:13:48 @typo_factory @timnitGebru @emilymbender @IAmSamFin Yes, this is a great point! "Winners Take All" by @AnandWrites opened my eyes to a lot of this.

2022-05-30 16:05:26 @timnitGebru @emilymbender @IAmSamFin Hm. The issue for me is the framing of their spending choices as some universally appreciated "good" for the world - it's a fundamental issue in philanthropy, where the harm caused in the acquisition of the funds are discounted, and the perspective of those impacted are dismissed

2022-05-30 15:49:56 @sh_reya Completely agree, and think this is true for all research, actually. Any time "saved" by not *thinking* of the consequences early in the process will be spent many times over *dealing* with the consequences later on.

2022-05-30 15:45:15 @emilymbender I think this depends on who you talk to. Optimizing the allocation of a fixed set of funds makes sense - @IAmSamFin had a decent take on this. But "how do I spend the money I already have?" is very diff from "how do I do the most good?" &

2022-05-30 15:32:14 Why are there even people optimizing to do the "most" good? Gosh, it's hard enough to just live and die unproblematic.

2022-05-30 15:29:27 Wait, what?? No, the answer to "can you do X with deep learning?" is NOT always yes! https://t.co/NQnFBgnuMb

2022-05-25 23:02:44 @dpatil Honestly, they are barely paid enough to teach

2022-05-25 20:12:45 @mmitchell_ai @JesseDodge @kotymg @karlstratos @haldaume3 Congrats, Meg! So well deserved

2022-05-23 15:46:02 RT @timnitGebru: Thank you Time for having me on this list. And I had no idea the one and only @safiyanoble was the one who was going to wr…

2022-05-23 00:15:19 RT @srivoire: was just thinking about this classic Philip Guo article (sorry for the paywall) FOR ABSOLUTELY NO REASON WHATSOEVER https://t…

2022-05-21 16:00:30 @adjiboussodieng @Abebab Lol same here...you are literally the most chill person I know

2022-05-20 16:58:36 @alvarombedoya @FTC @BedoyaFTC @linakhanFTC @RKSlaughterFTC @FTCPhillips @CSWilsonFTC Congrats on the new role! Looking forward to seeing your impact!

2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD

2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…

2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…

2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &

2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…

2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that

2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GPT-x, it seems like many tweaks operationalize some prior knowledge of the language form.

2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.

2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position

2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!

2022-10-24 17:35:47 @pfau @memotv lol depends on how you interpret that quote - some don't see that as a dismissal of linguistics but a note of the lack of self-awareness in NLP that gets heightened once you take linguistics out of the equation (ie. it's easier to convince yourself you're making progress w/o them)

2022-10-24 17:33:05 @pfau @memotv @yoavgo I disagree with both of you though :)

2022-10-24 17:31:53 @pfau @memotv Oh sorry, to clarify: didn't mean to imply you were involved in the linguistics spat at all - that was another debate that happened a couple months ago, with I believe @yoavgo or someone else indicating linguistics hadn't done as much for NLP as they thought they did.

2022-10-24 13:57:51 @memotv The latest iteration is this beef started with neuroscientists

2022-10-24 13:55:21 RT @vonekels: We’ll be presenting our Bias in GANs work at @eccvconf on 25/10 at 15:30.One of our findings ~ Truncation commonly used to…

2022-10-24 13:53:03 @iamtrask wait this is incredibly disappointing

2022-10-23 17:48:37 @neuropoetic @pfau @martingoodson Yeah I'm thinking this is just trolling at this point - even the Zador paper that prompted all of this provides this context in its exposition

2022-10-23 17:42:56 @pfau @martingoodson Please read anything on the internet available to you - the facts of Hinton's career aren't even worth debating about: https://t.co/g5UW0wHLxR

2022-10-23 17:35:36 @pfau @SiaAhmadi1 Hinton's degree was in cogsci but computational neuroscience was the subfield he was most active in for a long time. Things like neural nets were initially derived as models of the brain to better understand the brain - he just had suspicions it could also inform info processing

2022-10-23 17:24:41 @pfau @SiaAhmadi1 Still such a bizarre and incorrect take - where do you think that intuition comes from..?

2022-10-23 12:08:40 Annoyed by this latest trend of machine learning researchers insisting that they absolutely did not need anything that came before. It's obvious that <

2022-10-23 11:53:50 @pfau @martingoodson Pretty sure Geoff has read many neuro papers - his background is literally in cogsci? Also, conferences he founded like NeurIPS began focused on attempts at modeling brain behavior using computers to better understand the brain - for a while, there was still a comp neuro track.

2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD

2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…

2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…

2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &

2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…

2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that

2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GTP-x, seems like many tweaks are operationalize some prior knowledge of the language form.

2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.

2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position

2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!

2022-11-16 18:17:40 RT @A__W______O: We’re about to launch Algorithm Governance Roundup, a monthly newsletter bringing together news, research, upcoming events…

2022-11-16 02:34:56 @jachiam0 Lol I think the only accurate piece of this is "governed like a church" - churches ask for voluntary tithes of about 10% &

2022-11-16 02:23:12 RT @Wenbinters: new newsletter on algorithmic harm and policy out! feat. coverage of our screened and scored in dc report, @OPB's deepdive…

2022-11-16 02:22:43 RT @natematias: Dream job The MWI Fellowship is an opportunity for a research scholar to build and engage with a community of majority w…

2022-11-15 22:41:10 RT @mattgroh: Our paper on how to increase transparency in machine learning applied to dermatology diagnosis has just been published in…

2022-11-17 18:30:49 RT @knightcolumbia: RESERVE A SPOT: On December 12 at 3 pm EST, we're hosting an online panel with @dgrobinson @rajiinio @natematias @rando…

2022-11-20 04:38:51 @jquinonero @emilymbender Hm I don't think they have a responsible AI team anymore: https://t.co/tblnZi5vrL

2022-11-21 09:50:40 @maribethrauh hey @maribethrauh btws this is now a thing - pls send us an email if still interested! https://t.co/2Tr2BQnt78

2022-11-28 18:10:10 RT @JessicaHullman: Call for papers for the 2023 ACM @FAccTConference is now live! https://t.co/cW3o1WFUb8 Abstracts due Jan 30, Papers due…

2022-12-07 20:04:37 @undersequoias @chicanocyborg @Abebab @png_marie - feel like you would love this!

2022-12-08 19:27:14 @jjvincent Agreed with everyone else - well deserved!

2022-12-08 19:26:55 @jjvincent omg, congrats!!

2022-12-08 11:22:04 RT @FAccTConference: Submit your excellent work to #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan 3…

2022-03-17 18:07:50 RT @trackingexposed: Today we release a special 24 page report into TikTok’s activities in Russia. Our researchers exclusively discovered t… 2022-03-16 02:08:01 Not to mention all the ethically questionable endeavors... 2022-03-16 02:07:21 Task design is such a crucial part of evaluation. I genuinely think a lot of ML failures are just the result of ML methods being thrown at tasks that are completely inappropriate for ML to solve (ie. data with few examples, sparse features, unknown & 2022-03-15 22:58:04 @PreetumNakkiran @realCamelCase 2022-03-15 14:32:58 Every time something goes wrong with an ML model in deployment, researchers in our field are quick to call it "distribution shift", but that term is both too broad & Excited to blog w/ @beenwrekt about what ML has to learn from other fields on this! https://t.co/1RONiqMNJk 2022-03-14 17:07:36 RT @KateKayeReports: For the third time, @FTC has forced a company to destroy algorithms built with data gathered deceptively. Here's why i… 2022-03-12 18:41:23 RT @sherrying: @crystaljjlee I was just thinking about this. funny how companies chose to say trustworthy AI and responsible AI and none ch… 2022-03-12 18:38:20 @_alialkhatib @crystaljjlee Completely agree with this! I now overwhelmingly use "harm" as well. 2022-03-12 08:11:00 CAFIAC FIX 2022-01-25 10:29:05 RT @alexhanna: @ChristophMolnar We have a whole research program dedicated to understanding this... https://t.co/uJXeJepliQ https://t.co/P… 2022-01-25 10:16:21 @alexhanna @ChristophMolnar Though I'm glad to see the fiasco of faulty covid-19 work has begun to wake a lot of people up to the reality of how bad things are. We need even more people speaking up about this, and calling it out in order for things to actually change for the better. 
2022-01-25 10:10:00 @alexhanna @ChristophMolnar Yep, and lots of even earlier conversation here: https://t.co/DriTstdhTl 2022-01-24 22:20:39 RT @daniel_d_kang: ML models are being deployed in mission-critical settings, such as autonomous vehicles. Shockingly, the data used to tra… 2022-01-24 15:35:26 @MarkSendak Yeah, totally agree. I'm of the opinion that ML is actually really useful, but not in the end-to-end way it's being advertised as useful. It's often only likely to solve one specific type of sub-problem. 2022-01-24 15:20:21 @MarkSendak They sold the vision of an AI doctor based on an unrelated game demo - in reality, the problems they would need to solve were more complex & 2022-01-24 15:16:24 @MarkSendak Hm that kind of info recall is great for passing med school but is it all that's necessary to be a good doctor? Workflow integration was a big part of the problem precisely because they failed to honestly break down what ML could or couldn't do, & 2022-01-24 15:11:17 Whenever I bring up issues, I hear "Well ofc nothing works irl." ...What? What is the point of these evaluations in research if not to indicate something meaningful about real world performance? This paradigm gap threatens the integrity & 2022-01-24 14:57:27 @LeonDerczynski Yep + I think it boils down to a combo of 1) underestimating the complexity of the real problem & @ChristophMolnar has been tweeting a lot about the COVID mess recently & 2022-01-24 14:45:35 yes, this is a subtweet on recent news: https://t.co/Y0p8DpvoW4 2022-01-24 14:45:02 Case in point: what about solving Jeopardy! made us think IBM Watson would succeed in healthcare? https://t.co/7YFTeblAbA 2022-01-23 16:20:14 RT @yaroslavvb: Table 2 of https://t.co/yy2MI0M6Tz shows what's wrong with ML research. Papers got in by providing a theorem (checked by re… 2022-01-18 15:49:58 RT @etechbrew: .@mozilla fellow @rajiinio. 
https://t.co/AHyrp2OAbh https://t.co/OPs2Aoq9FT

2022-01-12 18:26:53 RT @hypervisible: “…we must be honest about what can realistically be accomplished by these piecemeal attempts at stitching sociocultural e…

2022-01-12 16:12:14 @Abebab lol abeba it is still due eventually

2022-01-12 16:12:02 RT @FAccTConference: Due to recent disruptions &

2022-01-12 16:07:42 @KLdivergence not the hero we deserved but the hero we needed

2022-01-12 15:10:01 I like the paper so much because this is a common experience with auditing - you may request info &

2022-01-12 15:10:00 One of my fave papers on algorithmic transparency is "Seeing without knowing" by @ananny &

2022-01-12 14:58:46 RT @NaDomagala: Calling the AI ethics crowd - I'm compiling a reading list on #algorithmictransparency and I need more global examples &

2022-01-12 04:30:47 @jovialjoy @MIT @GeorgiaTech @UniofOxford @EmoryUniversity @medialab @EthanZ @LatanyaSweeney @kanarinka @AJLUnited CONGRATS JOY It's Dr. Poet of Code now!

2022-01-12 00:48:07 RT @tdietterich: The term "ablation" is widely misused lately in ML papers. An ablation is a removal: you REMOVE some component of the syst…

2022-01-06 05:06:39 @Miles_Brundage @STS_News Also, "The attachments of ‘autonomous’ vehicles" is a really level-headed description of what's happening with self-driving cars right now. Convinced me that no one will be driving self-driving cars if we don't fix safety issues. https://t.co/x67ySaILrK

2022-01-06 05:02:10 @Miles_Brundage Yeah - "Moving Violations" from @STS_News makes a version of this argument well (ie. regulating safety issues allowed the industry to exist/innovate) https://t.co/TBS18w6EY6

2022-01-06 04:48:45 @BlancheMinerva @jackclarkSF @mer__edith The large scale projects he references in those areas are for data collection - the large scale projects he's proposing for AI/ML aren't data collection instruments but models. That requires a different type of tooling, which I'm personally not convinced needs to be large scale.

2022-01-06 04:45:25 @BlancheMinerva @jackclarkSF @mer__edith Meanwhile academic teams are much harder to maintain. An individual could graduate &

2022-01-06 04:42:37 @BlancheMinerva @jackclarkSF @mer__edith I'm not talking about an individual person but the existence of a team... The tensorflow team has been intact for a while at this point and that means dedicated resources in any given year to propping up the infra necessary for the framework to remain effective.

2022-01-06 04:40:08 @Miles_Brundage The economic argument for safety (vs just discussing individual/community harm) is a huge theme in the earlyish automobile industry. Interestingly, that language is kind of popping up again with concerns of safety issues and accidents slowing down the adoption of self-driving cars.

2022-01-06 04:27:32 @jackclarkSF @mer__edith It's cool to think about what a common project would look like, that wasn't tied to a single grad student but developed across various labs. Here's where lessons from the LHC and other large collaborative efforts could become a really useful model.

2022-01-06 04:26:03 @jackclarkSF @mer__edith Yeah this is a really interesting point. Academic open source projects are hard to manage and maintain - people graduate etc. - while companies pretty much have a steady team and stream of resources propping the tools up for years.

2022-01-06 04:20:21 @jackclarkSF @mer__edith By the way, I think you're totally asking the right questions here. The analogy to other large scale scientific instruments is an interesting one and worth thinking about. Just pointing out some of the reasons why I think the AI/ML space may be different.

2022-01-06 04:18:21 @jackclarkSF @mer__edith Hm, but this "if they build it, they will come" mindset assumes it's a good idea to build it. It's fine to be convinced about the importance of large AI models but not everyone is. Choosing to invest in that direction as an org is very diff from directing public funds there.

2022-01-06 04:11:44 @jackclarkSF @mer__edith Yeah I totally agree. But unless you have access to the training set, there's no way to be sure that the public model is going to behave even remotely similar to any other models the researcher is actually trying to understand.

2022-01-06 04:04:06 @jackclarkSF @mer__edith Though for the latter, I don't know if the tradeoff of dumping your data on some shared machine is worth it for most people (privacy, etc). It would be much more helpful to get funding directly to set up or make decisions about whatever infrastructure the researcher sees as best.

2022-01-06 04:01:57 @jackclarkSF @mer__edith Though we could make Google build their model on public infrastructure and that could be interesting. And yeah, I personally think the most potentially interesting public models are not going to be these large scale things. Ex., government &

2022-01-06 03:57:14 @jackclarkSF @mer__edith I don't think either of these assumptions hold. The large AI model built on public infrastructure is not going to be the same one that Google builds and there's a reason it could be more meaningful to poke at Google's model (for example, bc it affects users and non-users).

2022-01-06 03:55:15 @jackclarkSF @mer__edith hm, there's a couple assumptions happening here - that all large AI models are made equal (which we already know not to be the case) and are thus equally meaningful or interesting to probe, and that there's enough researchers interested in thinking about these kinds of models.

2022-01-06 03:47:09 RT @math_rachel: - Annotator disagreements may capture important nuances ignored by a single ground truth - A multi-task based approach yie…

2022-01-06 03:41:41 @jackclarkSF @mer__edith Some things I see no one talking about that I'd want to see more discussion about though: Frameworks - all our models are trained on pytorch, tensorflow, etc.
Is there value in a non-corporate framework for model training? What's the value of common infra vs increased funding?

2022-01-06 03:37:45 @jackclarkSF @mer__edith Making those judgements as a company feels fine (ie your product, your choices), but as a "collective" resource it'll be difficult to build models/datasets that most people will see as meaningful or unproblematic. Even the direction of "let's build larger models" is contestable.

2022-01-06 03:28:34 @jackclarkSF @mer__edith For instance, to collect the data to answer specific questions in physics we needed the LHC, but to build some common GPT-x requires a lot of contextual value judgements &

2022-01-06 03:21:54 @jackclarkSF @mer__edith Even some things that it seems we all think we need (ie compute, data) require a lot of value judgements and the person controlling the resources will be making a lot of those decisions (perhaps inappropriately) on behalf of others.

2022-01-06 03:19:47 @jackclarkSF @mer__edith AI/ML research problems require eng infra around data analysis, processing, evaluation and not just collecting data. So the tools need to be more context specific and it's unclear if there are actually common resources that would be helpful here.

2022-01-06 03:17:21 @jackclarkSF @mer__edith This is a really interesting question to ask! My personal take is that there are characteristics of CS research and AI/ML work in particular that makes such large scale engineering work not feel worth it. These other projects are about eng infra for data collection processes but

2022-01-05 23:12:44 @KLdivergence Oh Noo

2022-01-04 18:53:11 @mariafarrell @IfeomaOzoma @erikashimizu Thanks for sharing this! By the way, I'm a big fan of your work, I really enjoyed this past article: https://t.co/QlJhHbtwjq

2022-01-04 17:08:14 @NThylstrup @Diogo_PH22 @IfeomaOzoma @erikashimizu @Agos_Daniella This looks great - thanks for sharing!

2022-01-04 17:00:08 It's been heartbreaking to see talented female researchers being gaslit about their own competence. It's really Pedro and his like that should be the ones second guessing and asking themselves "Did I get this opportunity only because I am a white man?" https://t.co/oV5aLGfvRf

2022-01-04 16:48:59 Like, I really wish anyone gave me anything just for being a Black woman. Then maybe there'd be more than one or two of us.

2022-01-04 16:48:58 It's so ridiculous to me when a white guy says that a minority only got an opportunity because they are a minority,... that guy is literally standing on centuries of white guys like him only being given opportunities because they are white men (as everyone else was excluded).

2022-01-04 16:23:38 RT @leonieclaude: #Digitalisierung is shaped by men - or so we often hear. But that is only half the truth. Because the Silicon Valle…

2022-01-04 16:22:07 @Diogo_PH22 @IfeomaOzoma @erikashimizu Yeah, true. Visibility is definitely a double-edged sword. But I don't think this is a reason to exclude minoritized perspectives - in fact, it's all the more reason to protect whistleblowers as we elevate them.

2022-01-04 16:05:23 It really does matter who we elevate. For eg., I'm glad @IfeomaOzoma &

2022-01-04 15:45:53 I've already noticed something similar with the Facebook case, where Frances Haugen is much more visible in many ways than Sophie Zhang, even though the latter is more intimately familiar with the specific policy & https://t.co/PNVMsfLv56

2022-01-04 15:30:55 Whistleblower politics is so weird. I can't help notice who the press &

2022-01-04 15:06:49 RT @FAccTConference: Two major #FAccT deadlines coming up: (1) Final submission deadline for PAPERS: Jan 14 (https://t.co/t2g275CtOK)…

2022-01-04 15:06:34 @alexhanna Oh no :(

2022-01-04 15:00:18 RT @leonieclaude: The Nigerian-Canadian researcher @rajiinio audits software that can destroy people's lives. That comes…

2022-01-04 15:00:14 Thanks so much Roberta for featuring my work here! Such thorough reporting :) https://t.co/EbWJcHLUzM

2022-01-04 14:58:10 RT @AnnaNosthoff: An impressive start to the new year today at @republikmagazin: @leonieclaude profiles, in the series "Digital Warri…

2022-01-03 15:09:47 @holdmytowel Perhaps those are the incentives to produce this kind of work but it's pretty strange to me that the reviewers (for both funders and conferences) don't currently have the assessment tools to call the bluff.

2022-01-03 04:48:48 @tdietterich hm, I think it's a combo of this &

2022-01-03 04:35:23 It's frustrating because you can't necessarily blame individual authors for this - they're just following flawed evaluation norms, and are rewarded for this with publication. It's a community-wide issue. We kind of start the conversation in this paper: https://t.co/l8jRwE4HL7

2022-01-03 04:29:55 It's a serious problem & All illusionary progress.

2022-01-03 04:26:35 It's pretty remarkable how many published ML papers end up being practically & Evaluation practice defines what is accepted as evidence of progress in a field - ML's incoherent evals have clearly skewed our judgement of what a valid contribution is. https://t.co/K3UZJQ4MiQ

2021-12-31 22:55:19 RT @aylin_cim: Researchers and practitioners interested in fairness, accountability and transparency in socio-technical systems: We look fo…

2021-12-21 08:21:11 @AmandaAskell But we don't design children, at least not directly in the way we do engineered objects like AI. Even when things are complex, it'll still be on those that made the decisions leading to the agent's outcomes that should be held responsible. Blaming "AI" really means blaming "ppl".

2021-12-21 08:08:57 @AmandaAskell But this is not a birth, it's a build? I'm not sure why we would humanize anything other than the humans...
Engineered artifacts (from a bridge to an AI system) are a direct consequence of design choices, not the mysterious result of a natural process where design is not a factor.

2021-12-21 07:58:34 @AmandaAskell I see an AI system the same way I see a toaster or a car, and don't see any reason or benefit to projecting human characteristics onto it.

2021-12-21 07:55:19 @AmandaAskell Perhaps we can agree to disagree here. I consider that analogy to be appropriate bc there are practical limits to what one can blame a parent for due to a child's agency, but as a constructed artifact, responsibility is framed directly as consequences of eng choices wrt outcomes.

2021-12-21 07:30:40 @AmandaAskell Here I point to the irony of the fact that in order to argue for the humanization of AI, people will actually de-humanize the real people in the process - this includes people embedded in the data, but I'm beginning to realize this extends to others (annotators, impacted, etc.)

2021-12-21 07:27:47 @AmandaAskell Humanizing AI inappropriately has repercussions - specifically, I believe it leads to a false sense of absolved responsibility in those building AI systems (ie. treating AI as an independent agent, like a misbehaving child, and not the built artifact it is, disguises eng choices)

2021-12-21 05:47:51 This old tweet feeling more and more relevant these days... https://t.co/Pe0KsykYvp

2021-12-20 19:33:14 Excited to connect with colleagues at University of Toronto today, to discuss algorithmic auditing & It's been great to see how the work on that has evolved since I was a student there! https://t.co/9jL4BHUlJA

2021-12-18 00:33:09 @rasbt @thegautamkamath Interesting. This reminds me of @swabhz's paper: https://t.co/ez8IWWpuC9

2021-12-18 00:28:12 @mmitchell_ai Email me! (+ So excited you are teaching this class!)

2021-12-17 22:53:43 RT @WHOSTP: JUST LAUNCHED: The AI Researchers Portal on https://t.co/2aGbEHpYgP. It’s a central connection to many Federally-supported reso…

2021-12-17 20:28:51 RT @GoogleAI: Dataset distillation enables #ML models to be trained using less data and compute. Today we introduce two novel dataset disti…

2021-12-17 19:14:26 @o_saja @SmithaMilli Totally agreed - I also find that a lot of the essential "data in ML" references come from CHI, CSCW and CSS venues!

2021-12-17 18:59:48 RT @MaxALittle: Evidence for the claim: "deep learning (AI) has solved vision" is based heavily in ImageNet/CIFAR classification accuracy.…

2021-12-17 18:56:55 @o_saja Yes, I think this "big data" era work is still totally relevant. Class 1 could be a reading of "10 Rules for Big Data" (https://t.co/BUjR2aOM3L) + Dotan &

2021-12-17 18:52:42 @Aaron_Horowitz Hey, I said ONE day, not TOday

2021-12-17 18:22:32 I really shocked myself this morning, realizing I can cite a whole syllabus worth of "data use in ML" references off the top of my head...lol someone needs to let me teach this class one day, it would honestly be so much fun! https://t.co/rQPPEIDF2Q

2021-12-17 18:18:51 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna @timnitGebru @laroyo @o_saja @harini824 @morganklauss @amironesei ok actually, one last thing bc I don't think this paper got enough love & -> + related papers by @vinodkpg et al.: https://t.co/f9zGQs8zTk, https://t.co/5HsKLfn2aC

2021-12-17 18:09:31 RT @Abebab: This book is finally out and I am super excited about it. In the midst of so much AI over-hype, over-promise, and unsubstanti…

2021-12-17 18:07:49 @IasonGabriel

2021-12-17 18:06:06 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna @timnitGebru @laroyo @o_saja And @harini824's work on mapping biases: https://t.co/LTm0qlNSER And @cephaloponderer, @alexhanna, @morganklauss & Aha let me stop here, I could honestly go on and on about this!

2021-12-17 18:02:53 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna @timnitGebru @laroyo And I like @o_saja's work on taxonomizing data bias and approaching that methodologically: https://t.co/Ohyc9NgJHl

2021-12-17 18:00:24 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna @timnitGebru @laroyo + https://t.co/pEo1rkicQW (and earlier work on crowdsourcing - https://t.co/0V3Shnb19d)

2021-12-17 17:59:27 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna @timnitGebru I especially learnt a lot from "Lessons from the archives". Also a lot of @laroyo's work is quite practical: https://t.co/9d6svMoCQs

2021-12-17 17:55:26 @thegautamkamath @emilymbender @ergodicwalk @amandalynneP @cephaloponderer @alexhanna Methodology-wise, a lot of @timnitGebru's work will be relevant (& https://t.co/Dq8zzGgtM3, https://t.co/QQ1I2uQHcj, etc.

2021-12-16 13:40:25 RT @TechCrunch: France latest to slap Clearview AI with order to delete data https://t.co/6Tg9ZXEn3R by @riptari

2021-12-16 03:31:22 RT @chopracfpb: Today, the @CFPB is re-launching its whistleblower website based on user research with people in the tech industry. Submiss…

2021-12-16 02:18:15 RT @Damini_Satija: Job opps! Today we’re launching the first two openings @AmnestyTech’s new Algorithmic Accountability Lab (AAL) 1)…

2021-12-15 20:22:23 RT @cfiesler: So apparently one of @FortuneMagazine's highlights of the entire #NeurIPS2021 AI conference was me throwing shade on Kant. ht…

2021-12-15 20:20:56 RT @AIESConf: Apologies for the silence as we tried to figure out how to maximize the chance of a physical conference... We're planning to…

2021-12-15 16:24:03 RT @FAccTConference: REMINDER: THIS IS DUE TOMORROW! Please get your abstract submissions in

2021-12-15 14:23:09 RT @Damini_Satija: Some news! So happy to have joined the @AmnestyTech team to head up a new Algorithmic Accountability Lab.
Feels like a m… 2021-12-15 13:49:35 @Abebab @amandalynneP @red_abebe can't wait to read you attempting to be optimistic about AI 2021-12-15 13:48:39 @Abebab 2021-12-15 02:40:09 @Abebab @amandalynneP @red_abebe Grateful for the scholarship of those dedicated to the work - it inspires & I don't think we celebrate enough how amazing it is to have such role models available - I'm so happy to learn from all of you! 2021-12-15 02:36:39 Recently had the joy of watching the likes of @Abebab & People like to complain about the chaotic state of the field, but - I really just have so much hope for the next generation of scholars in this space. 2021-12-15 01:40:51 For various reasons, I have withdrawn from this event, and will no longer be participating in the debate. I know how hard it can be to organize events like this, so I do not make such a decision lightly, and I truly do thank the organizers for inviting me! https://t.co/3bN1KHVKcl 2021-12-15 00:51:47 RT @DavidVidalAI: Proud to be a part of this important collaborative effort with @MarkSendak, @rajiinio, and other amazing individuals http… 2021-12-14 21:21:10 @sociotech_jrd @schock @timnitGebru @bsmith13 @jovialjoy This is a main theme of @PopTechWorks's Automating Inequality! 2021-12-14 16:54:06 https://t.co/jwNCvVLrfY 2021-12-14 12:58:06 RT @jovialjoy: Thank you @TheKingCenter for this amazing honor and for elevating the work of @AJLUnited. I cannot wait for the beloved comm… 2021-12-14 12:54:25 RT @meatspacepress: Fake AI is out now! The book contains seventeen chapters charting the contemporary twists and turns of AI hype, pseudo… 2021-12-14 12:54:15 RT @FAccTConference: Do you have research that needs to have wider reach because of its importance to the daily lives of people & 2021-12-14 12:54:12 RT @FAccTConference: Our Call for Applications for Diversity, Equity and Inclusion scholars is now open! Funding up to $25,000, project len… 2021-12-11 15:36:52 RT @annargrs: A great discussion of ethics checklists! 
A highlight from the panel: Q (@emilymbender): ethics checklists have the danger of… 2021-12-11 15:34:27 @perayson @JesseDodge A recording is available here: https://t.co/yFdpVv5YH6 Right now I think you still need to register to see it, but hopefully the video will become public soon! 2021-12-11 15:33:04 @agstrait Ah, seems a recording is still accessible here if you can register (I'll flag if it becomes public later!): https://t.co/yFdpVv5YH6 2021-12-11 15:30:12 @agstrait Not sure :( Hoping Neurips will release its recorded content after the conference is over! 2021-12-11 00:16:43 The craziest part of this panel was when I was casually mentioning "The Checklist Manifesto" and @JesseDodge just pulls out the copy lying on his desk What an amazing group! Learnt so much from these panelists! https://t.co/l37lvjdAEj 2021-12-11 00:12:08 @Abebab Thanks for such amazing contributions 2021-12-11 00:09:28 @ruthstarkman @Abebab @cfiesler @JesseDodge Thanks for watching! 2021-12-10 22:06:54 @mmitchell_ai here! https://t.co/yFdpVv5YH6 2021-12-10 21:57:16 Excited to host this panel soon! There's so much to talk about when it comes to research ethics in the ML community, and I'm glad we could assemble this group of experts to approach the discussion from multiple angles, speaking to a broad range of ethical considerations. https://t.co/l37lviWxCj 2021-12-10 21:54:19 RT @JesseDodge: This is today! Looking forward to being on this plenary panel at NeurIPS to discuss ways we can build incentive structures… 2021-12-10 21:35:28 @Abebab 2021-12-10 21:27:36 @Abebab Lol thanks for doing it! 2021-12-10 21:24:31 RT @mirianfrsilva: To everyone who participated in #NeurIPS2021 and attended to @black_in_ai Workshop, thank you very much! 
We as organi… 2021-12-10 21:21:56 RT @stochastician: It's hard to overstate how much I love this paper: "Plenoxels: Radiance Fields without Neural Networks" https://t.co/vO… 2021-12-10 21:13:52 RT @vnasilva: Today is the day @black_in_ai program is live at @NeurIPSConf, come join us! https://t.co/bNHajKtcfs 2021-12-10 21:11:39 RT @alexhanna: @SashaMTL @rajiinio ICYMI -- @consentfultech, @Data4BlackLives, @DetCommTech, and @and_also_too just released this Consentfu… 2021-12-10 17:06:00 lol I do not look forward to having to write an "AI and Everything in the Whole Wide Metauniverse(s) Benchmark" paper in a couple decades https://t.co/f5bEIrtYvw 2021-12-10 16:43:50 RT @MarkSendak: (1/4) With support from @PJMFnd, we’re thrilled to launch a new collaboration between @DukeInnovate @DukeHeartCenter @Zaina… 2021-12-10 11:28:05 RT @timnitGebru: Watching this talk now, and no slides was 100% the way to go. I realize more than half of my slides are really not necessa… 2021-12-10 00:39:42 She killed this speech. What a great talk! https://t.co/19cmUV25oC 2021-12-10 00:39:14 RT @MadamePratolung: "The next frontier in databases is inside computers." Gender, Allyship & 2021-12-10 00:19:21 RT @MadamePratolung: @mathbabedotorg /11a @rajiinio we can't keep regulating AI as if it works Sect… 2021-12-09 17:05:00 RT @emilymbender: Poster session happening now! Come say hi :) #NeurIPS2021 2021-12-08 16:16:32 RT @dillonniederhut: Over time, papers tend to user fewer and fewer benchmarks for evaluation. This is a problem, because the results might… 2021-12-08 16:08:43 @laroyo @mrtz @sleepinyourhat @joavanschoren ends with a comment on how its really to the benefit of the community to address the benchmarking issues thoughtfully and ethically, and that it's to everyone's advantage to think about how to create and disseminate a diverse set of useful benchmark datasets. 2021-12-08 16:03:03 @laroyo @mrtz @sleepinyourhat @mrtz mentions the ethical issues associated with benchmarks. 
Isabelle stresses the need for accountability and for reviewers scrutinizing the benchmarks before they enter mainstream use. @sleepinyourhat mentions crowdsourcing "labor nightmare" &

2021-12-08 15:58:23 @laroyo @mrtz @sleepinyourhat Lora: "One of the challenges is that we don't have any way to measure unknown unknowns - we can only measure what the model can see and understand. This can be quite dangerous in high-stakes domains, so capturing these blind spots is very essential I think."

2021-12-08 15:57:01 @laroyo @mrtz @sleepinyourhat Mentions RL and how that became popular following the popularity of benchmark performance in game environments vs. live robot competitions at Neurips (where they were trained in simulation, and all failed miserably, actually)

2021-12-08 15:55:35 @laroyo @mrtz @sleepinyourhat "Not every interesting problem lends itself to a benchmark - in fact many problems don't. Often, interventions are needed to validate an approach, and are either unethical or too costly (ie. scientific discovery, theory, etc.) to make a data benchmark from."

2021-12-08 15:54:34 @laroyo @mrtz @sleepinyourhat "This ends up being more engineering than research and we want to avoid this as well."

2021-12-08 15:53:40 @laroyo @mrtz @sleepinyourhat Isabelle: "Hopefully we can have more principled approaches in how they are created and we have a proliferation of a greater number of benchmarks. Everyone is trying to imitate how to solve the problem in the same manner since they're all focused on the same benchmarks &

2021-12-08 15:52:16 @laroyo @mrtz @sleepinyourhat Isabelle: "People cherry-pick the benchmarks they want to work on &

2021-12-08 15:52:08 @laroyo @mrtz @sleepinyourhat @mrtz asks about "dynamic, open ended benchmarks" and what it would take to address these tasks. + "What are some research directions we miss out on because they do not fit this benchmarking paradigm?"

2021-12-08 15:49:52 @laroyo @mrtz asks about distribution shift and @sleepinyourhat says small domain-specific benchmarks that shift from the training data are a decent start but warns against the appeal of "adversarial benchmarks", since you can get to pretty consistent distortions in performance metrics

2021-12-08 15:48:18 This is the paper @laroyo mentions: "Truth Is a Lie: Crowd Truth and the Seven Myths of Human Annotation" https://t.co/0V3Shnb19d

2021-12-08 15:43:16 @mrtz asks about human annotators &

2021-12-08 15:40:43 Mentions "data excellence" framework to come up with a more robust set of practices around how we design datasets - inclu "maintainability of data at scale" (similar to software at scale), "validity" (capturing correlation between data and external measures, inspo from education)

2021-12-08 15:40:42 "We are testing accuracy on tasks with majority votes where there is genuine subjectivity in the results, and room for interpretation" @mrtz mentions human comparison, &

2021-12-08 15:32:54 @sleepinyourhat being very honest rn about NLP benchmark limitations - "We've got this discourse on what language models are good at that's not very grounded - you can point to benchmark performance to make claims, but also point to embarrassing failures &

2021-12-08 15:26:37 This is happening now! Amazing panel so far https://t.co/ZY8GrELczh

2021-12-08 15:23:12 Come talk with us about this during the poster session Thurs: https://t.co/CyteuTxr6x

2021-12-08 15:21:36 Great Question! @ThomasILiao will be presenting this paper at the Dataset and Benchmark Poster Session 3, Thu 9 Dec, 8:30 a.m. to 10 a.m. PST. Please come check it out! https://t.co/llPAd9HOvJ https://t.co/ND8hIUJdkb

2021-12-08 06:11:43 @timnitGebru @rtaori13 @lschmidt3 @TheNapMinistry Thank you! I'm working on it!

2021-12-08 06:05:49 @timnitGebru @rtaori13 @lschmidt3 @TheNapMinistry Lol I nap regularly, don't worry! Everything gets released at the same time, but it's all a long time coming.

2021-12-08 03:32:54 There are *so* many things to think about when it comes to ML evaluation - some of which the field has yet to investigate properly! Glad I could be a part of this project, led by Thomas Liao, and with @rtaori13 & Paper here: https://t.co/uotjMGAcl2 https://t.co/vOF20UBDwe

2021-12-08 03:32:51 We reviewed 100+ ML survey papers & Often framed as a one-off casual consideration, ML eval is rarely presented as what it is - a chained *process*, rife w/ measurement hazards https://t.co/LT5f4TSDhN

2021-12-07 22:00:13 @cfiesler We are just as excited to hear you speak!

2021-12-07 21:59:43 RT @cfiesler: Typically plenary panels are a captive audience. I'm not sure how true that is for an online conference, but I am both excite…

2021-12-07 19:17:03 RT @CANSSIOntario: Deborah Raji, Fellow @Mozilla, speaks next @UofTDSI #DSSS@UofT. Raji's research focuses on algorithmic auditing &

2021-12-07 18:07:08 RT @emilymbender: For anyone wondering what this was about, it was (partly) in reference to what is now Raji et al 2021 in the #NeurIPS2021…

2021-12-06 21:45:38 @rachelmetz LOOL I wish I had the artistic skill!

2021-12-06 18:09:41 @DrDesmondPatton @SAFElab Congrats! They did the right thing.

2021-12-06 18:01:35 @Abebab CONGRATS!

2021-12-06 10:56:55 @benno_krojer But we don't currently have the structure to set this up as a research practice &

2021-12-06 10:54:10 @benno_krojer I do! A/B testing, functional tests, pilots, etc. are definitely all adequate alternative evaluation methods - even those technically without a human in the loop (think of sourcing live examples from customer interactions or an image search engine, etc.).

2021-12-06 01:50:16 @Abebab Thank you, Abeba!!

2021-12-05 13:19:31 RT @emilymbender: #NeurIPS2021 Datasets and Benchmark papers are now up in the preproceedings! https://t.co/12PQGvCdVX Thanks @joavansch…

2021-12-05 13:18:38 @joavanschoren @emilymbender @syeung10 @MariaXenoch @NeurIPSConf Thanks in advance for the help!
2021-12-05 13:18:26 @joavanschoren @emilymbender @syeung10 @MariaXenoch @NeurIPSConf Hey Joaquin, there happens to be a small formatting error in our abstract specifically, not sure if this can be fixed? https://t.co/wYCSC6mI9O

2021-12-05 01:11:47 @NeurIPSConf @mrtz And the fact that there's even a place for this work at NeurIPS this year (ie. a Datasets & cc: lovely collaborators @emilymbender, @cephaloponderer, @amandalynneP &

2021-12-05 01:05:21 This is a position paper - and thus just the beginning of what we hope will be a broader ongoing discussion on the role of data benchmarking in ML. Interestingly, there will also be a panel on this very topic @NeurIPSConf this year, moderated by @mrtz: https://t.co/TrU8qoIoWb https://t.co/vlqi4hXB7F

2021-12-05 00:58:40 To measure progress on models with broader capabilities, a single benchmark is not enough. Either use benchmarks for what they were originally designed for, to assess concrete progress on grounded applications &

2021-12-05 00:57:19 In our upcoming paper, we use a children's picture book to explain how bizarre it is that ML researchers claim to measure "general" model capabilities with *data* benchmarks - artifacts that are inherently specific, contextualized and finite. Deets here: https://t.co/hMqXsyuU1Z https://t.co/cIbUrR1gnd

2021-12-04 17:14:55 @reuben_aronson Yes. Any perceived short term benefit of overhyping performance claims is not worth the long term damage. It's not just about being caught in a lie (one that really hurts people!), it's also actually interfering with our ability to be honest with ourselves and tackle real work.

2021-12-04 17:04:44 AI hype is not just about caricature or a joke - unchecked it leads to premature, shoddy deployments, and causes real harm to real people. Lately, I get genuinely disappointed and upset when I spot it - in policy, or even marketing material. It's just not funny to me anymore.

2021-12-04 16:47:14 @mutalenkonde @easears @timnitGebru

2021-12-04 13:44:08 @percyliang @jennwvaughan And of course, major thanks to @IasonGabriel for blazing the trail with this work!

2021-12-04 13:43:20 @marylgray Thanks so much for your pioneering work in bringing these considerations into ML - we're all still just building & Also now that this &

2021-12-04 01:17:23 RT @michaelzimmer: I was among the 100+ (!!) ethics reviewers for @NeurIPSConf. Here's a summary of how that played out.

2021-12-04 01:17:14 @jennwvaughan Thanks for supporting this process

2021-12-04 01:16:40 RT @jennwvaughan: It’s been such a rewarding experience working with the amazing @rajiinio and Samy Bengio on this! We have a long way to…

2021-12-04 01:16:27 Next Fri, I'll be moderating a panel to discuss this further with experts @AmandaAskell, @Abebab, @JesseDodge, @cfiesler, @pascalefung, @hannawallach! Details here (+ check out the other amazing panels happening - moderated by @mrtz &

2021-12-04 01:16:26 It was such a joy to serve as Ethics Review co-chair with Samy Bengio @NeurIPSConf. The scale was ridiculous - over 100 ethical reviewers, over 450 ethical reviews - but beyond worth it to see authors, technical reviewers &

2021-12-03 23:30:30 RT @NeurIPSConf: Learn more about the #NeurIPS2021 ethics review process, including highlights and lessons learned, in this retrospective b…

2021-12-03 22:28:31 The most tragic thing is when people miss the deadline, because they forget to register their paper abstracts. DO NOT FORGET TO REGISTER YOUR PAPER ABSTRACT for @FAccTConference ‼‼ https://t.co/G95lrzfHjv

2021-12-03 22:26:24 RT @FAccTConference: SUBMISSIONS FOR ABSTRACT REGISTRATION IS NOW OPEN! A reminder that paper abstracts must be submitted before Dec 1…

2021-12-03 16:43:10 @mutalenkonde @timnitGebru @easears lol the revolution starts with Eric!

2021-12-03 16:30:18 @mutalenkonde @timnitGebru @easears Same here! @easears has been an incredible encouragement &

2021-12-03 16:17:00 Best thing about this article is the very intentional language of "AI marketed", "presented as AI", etc. AI is not a real thing &

2021-12-03 15:21:59 @sedyst @AggieBalayn @sapiezynski Of course

2021-12-03 13:42:24 RT @sarahookr: Someone who regularly puts the spotlight on others is @hardmaru. I really appreciate all the small ways (often behind the sc…

2021-12-03 13:15:53 I keep returning to this report, crafted by @sedyst & Broadening the narrow discussion on "bias" to a more holistic conversation on "harms" is such an essential shift in the framing of AI policy. https://t.co/bb6ruNFJKb

2021-12-03 12:44:31 RT @ArlanWasHere: We don’t deserve @jovialjoy and @timnitGebru https://t.co/2876eF6c6g

2021-12-03 12:32:42 RT @_w0bb1t_: Fake #AI · From predicting criminality to sexual orientation, fake and deeply flawed AI is rampant .. Edited by @F_Kaltheuner…

2021-12-02 23:47:15 RT @KLdivergence: Almost six years ago, @wsisaac wrote an article using real data that showed how PredPol could exacerbate racial dispariti…

2021-12-02 19:23:44 RT @JuliaAngwin: Critics have long suspected that predictive policing software was racially biased. Today, we have the answer: @themarkup…

2021-12-02 19:23:33 RT @nealpatwari: Wow, congrats to this team for an amazing look behind the curtain! It's ironic that 6 years ago @KLdivergence, @wsisaac…

2021-12-02 19:03:24 RT @kharijohnson: Audits and assessments are being adopted by governments anxious to regulate AI and prevent discrimination, but there's li…

2021-12-02 15:41:59 I so much admire Timnit's courage, &

2021-12-01 21:08:10 RT @F_Kaltheuner: A strange time to launch something with the pandemic (rightly!) taking up all attention. Still, I'm very excited to shar…

2021-12-01 18:18:42 RT @lilianedwards: "We are not heading towards Artificial General Intelligence (AGI).
We are not locked in an AI race that can only be won… 2021-12-01 14:17:14 RT @danyoel: Independent research shows speech recognition systems perform worse for AAVE speakers. New research reveals user frustration a… 2021-12-01 06:20:10 @swabhz @KrishnaPillutla @rown @jwthickstun @wellecks @YejinChoinka Congrats Swabha!!! Well deserved 2021-11-30 23:59:46 I'm so glad to see this paper rewarded - one of my favorites from the conference and such an important message! https://t.co/3AcVW8O5XZ 2021-11-30 23:55:26 @KLdivergence I dare you to write an intro with the word "decreasingly" lol 2021-11-30 23:54:36 @alexhanna @cephaloponderer Whoo! Congrats! 2021-11-30 23:53:33 RT @alexhanna: Wow! Honored that our paper with Bernie Koch, @cephaloponderer, and Jacob Foster won a best paper award at the NeurIPS Datas… 2021-11-29 18:08:38 RT @LeonYin: Trouble spotting Amazon private label products? @TheMarkup's new browser extension Amazon Brand Detector finds and highli… 2021-11-29 16:02:50 Imagine working for the @ACLU as an "Algorithmic Justice Specialist" Please apply! https://t.co/7YnA8es24r 2021-11-29 16:01:49 RT @Aaron_Horowitz: We have 4 open positions on the ACLU analytics team right now, all brand new frontiers for our team based on years of e… 2021-11-29 13:14:26 RT @anyabelz: #HumEval2022 deadline is 28 Feb - 3 months to get your paper ready for Second Workshop on Human Evaluation of NLP Systems at… 2021-11-29 01:54:33 @timnitGebru Still so in awe of this. So happy for you @IfeomaOzoma - you did not deserve anything that happened to you but so grateful you decided to invest your energy into making the situation more equitable for everyone. 2021-11-29 01:52:20 RT @veenadubal: I couldn’t agree more: “It’s remarkable how Ifeoma has taken some very painful experiences, developed solutions for them & 2021-11-28 23:27:24 RT @macfound: We're excited to support @ssrc_org's #JustTech Fellowship, just announced this month. 
This full-time $100K+/year fellowsh… 2021-11-27 03:41:17 @BlancheMinerva @RishiBommasani @PreetumNakkiran @davidwromero Yep this is a solid point & + you made a great earlier point that CLIP/DALLE is not necessarily the same as gtp-x and we should be careful what characteristics we use to set boundaries. 2021-11-27 03:36:50 @BlancheMinerva @RishiBommasani @PreetumNakkiran @davidwromero Did we need a new word? Probably not - we could have described the scenario with the set of qualifiers we already have (ie. Large pre-trained base models). But I do think raising awareness of the fact that this scenario happens so often & 2021-11-27 03:33:00 @BlancheMinerva @RishiBommasani @PreetumNakkiran @davidwromero My understanding is this - not all pre-trained models are base models for transfer learning & 2021-11-26 19:42:23 @emilymbender @MelMitchell1 + coming up on arxiv soon 2021-11-26 19:17:13 This was the most fun paper to write. I'm so happy we could leverage this analogy all the way to the end, aha. https://t.co/ffGLfB3Yby 2021-11-26 16:30:51 RT @MelMitchell1: This is such a perfect metaphor for AI. (From https://t.co/ZFaaO3cuL3) https://t.co/9vNzSiPOtY 2021-11-26 02:46:50 Petra Molnar, who wrote a @citizenlab report on the topic, captures why this is unsettling: “Decisions in the immigration context have lifelong and life-altering ramifications. People have the right to know ...so that we can meaningfully challenge.” https://t.co/QkvoCkzM1l 2021-11-26 02:46:49 This one hits a bit too... close to home. My uncle's rejected visa application emails look suspiciously like the templated answers spit out from this algorithm. https://t.co/pRmrhMIicA 2021-11-26 02:13:01 @joavanschoren @emilymbender @amandalynneP @cephaloponderer @alexhanna Thanks for your work advocating for and coordinating this track, Joaquin - I really appreciate that effort and think it will add a lot to this year's conference (+ hopefully future conferences as well)! 
2021-11-24 19:33:21 RT @StanfordHAI: At this year's fall conference, scholar @rajiinio discussed her proposal for enabling and supporting third-party auditor a…
2021-11-23 20:25:51 @schock
2021-11-23 18:47:35 RT @qi2peng2: Crowdsourcing is hard, technical work. I wish one day I get to write a paper on a crowdsourced dataset that mainly focuses on…
2021-11-23 14:48:43 RT @emilymbender: @rajiinio @amandalynneP @cephaloponderer @alexhanna Research designing &
2021-11-23 06:53:55 @aselbst Ohh this is an interesting framing of the problem. Yeah, now that I think about it, it's pretty alarming that impacted non-users are so absent from HCI work.
2021-11-23 04:21:27 Was just reminded that this the term for what's happening. Such a good read by @mona_sloane, @MannyMoss, & https://t.co/54J84rVAmd
2021-11-23 01:56:28 I keep noticing that when companies want to show that they've consulted stakeholders to increase "participation", they will always choose users over the impacted population, even when the interests of these groups are clearly misaligned. A very real and recurring frustration. https://t.co/8XzbvLOEM4
2021-11-22 19:56:24 @ambaonadventure @mer__edith @sarahbmyers @oliviersylvain DREAM TEAM So thrilled for you all!
2021-11-22 19:54:48 There's finally a place at @NeurIPSConf for discussions on data and evaluation. It's worth everyone's time to check this out! https://t.co/6ylmZbSRus
2021-11-22 19:53:44 Daniel has done such impactful work with Algorithm Tips (https://t.co/YxUTV6wMIE) - excited to see what he does next! https://t.co/wOCA7S29sj
2021-11-22 02:36:42 RT @omertene: Dream job alert! The @OECD is hiring for Head of Unit – Data Governance and Privacy. This is a phenomenal tech policy/diploma…
2021-11-22 02:28:49 RT @rzshokri: What is the standard way of auditing data privacy for machine learning models? We have designed strong membership inference a…
2021-11-20 17:33:42 @WriteArthur oh interesting. Thanks for flagging, I'll take a look - I wasn't aware of this!
2021-11-19 21:06:26 @Adewunmi Yes, there's a paper I'm working on right now - will be able to share publicly soon! For now, feel free to cite the blog and/or presentation directly :)
2021-11-17 06:46:32 RT @jennwvaughan: Mark your calendar now!! On top of eight keynotes, #NeurIPS2021 will feature three plenary panels: - The Consequences of…
2021-11-15 21:57:39 RT @FAccTConference: We are committed to ensuring that no-one will be unable to present their work at the in-person conference due to re…
2021-11-15 15:01:10 I was fortunate to act as a technical advisor for this report & So excited to see as these ideas of third party auditor access evolve into practical realities - we need to support those already doing the work of holding platforms accountable. https://t.co/lwFrhgoEnm
2021-11-15 14:54:08 RT @drewharwell: Enjoying this dataset of thousands of images that often confuse AI systems, known as "natural adversarial objects." Donut…
2021-11-15 14:39:52 @brandonsilverm @vivian also doesn't work for me!
2021-11-15 04:48:13 Something cannot be more valuable or true because it uses a certain method, qualitative or quantitative. All these approaches have their limitations &
2021-11-15 04:39:13 I understand that we must emphasize the need for more qualitative work in a field that leans naturally towards valuing quantitative work but I also genuinely worry we don't think enough about when such methods are helpful or think carefully enough about how to properly execute.
2021-11-15 04:35:09 It confuses me when some valorize certain methods over others. Of course, quantitative methods aren't objectively "better" than qualitative methods - but the inverse is also true (ie social science can be just as abstract &
2021-11-13 01:07:22 In this paper, led by @amandalynneP, and w/ @alexhanna @cephaloponderer &
2021-11-13 01:07:21 We just published an extended version of the Data & Data has always been a critical aspect of machine learning but remains overlooked, under-considered and extensively mishandled in practice and pretty much ignored in theory. Why? https://t.co/gBxG7Riraw
2021-11-13 00:12:59 RT @amandalynneP: The new and improved version of "Data and its (dis)contents" is published at @Patterns_CP today! Co-authored with @rajiin…
2021-11-11 20:15:45 RT @dinabass: Facebook's facial recognition call doesn't mean most companies are backing off the tech, but it gives researchers &
2021-11-11 14:57:51 RT @haldaume3: agree with everything here i’d also point people to https://t.co/dcA5mqmY0T “AI and the Everything in the Whole Wide World…
2021-11-10 22:15:10 RT @WHOSTP: .@WHOSTP announces public events to engage the American public in national policymaking about AI and equity. Mark your calend…
2021-11-10 22:13:11 @SimoDragicevic @StanfordHAI @dpatil @mathbabedotorg @ProfFionasm @AvilaGarcez @chris_percy Thanks for sharing - I'll check this out!
2021-11-10 21:00:01 RT @StanfordHAI: The final session of the 2021 fall conference #RadicalPolicies4AI examined various perspectives on algorithmic auditing wi…
2021-11-10 17:37:42 @lauren_marietta Good question! @DanHo1, any idea?
2021-11-10 17:09:10 RT @FAccTConference: BIG NEWS! THE FACCT DEADLINE HAS BEEN EXTENDED! New dates: Abstract submission: 15 December 2021 Paper submis…
2021-11-10 17:02:16 Giving this talk today! Excited to engage in discussion about what it means to get third party participation to happen for algorithmic systems. I've had a lot of thoughts about this for a long time - glad to finally present some of these ideas. https://t.co/sYRtMuC9Ma
2021-11-10 16:49:52 RT @drfeifei: Second day of @StanfordHAI Fall Conf - 2 more radical proposals for #AI and Policy, one on data cooperatives by @divyasiddart…
2021-11-09 19:20:10 RT @DrDesmondPatton: NEW OPPORTUNITY: @ssrc_org has just launched the #JustTech Fellowship, a 2-year, full-time fellowship offering $100K+/…
2021-11-07 06:14:11 @Abebab We just wrote this - about how weird it is that people keep trying to benchmark "general" AI capabilities, with data that is inherently subjective, scoped and limited lol https://t.co/ELtmdVZXM8
2021-08-19 15:12:22 RT @nixonron: Today we begin publishing a series of stories exploring the impact of algorithms and AI on our lives every day. The first sto…
2021-08-18 22:50:19 @pranesh Of course there can always be some imagined benefits but there are many examples where that benefit even in the most idealistic scenario seriously dwarfs the cost (think of arguments against nuclear and automated weapons). We assume fundamental models dont fall into this category
2021-08-18 22:46:42 @pranesh Don't think that's true - more often, tech is inappropriately presented as "dual use" even when, due to issues with functionality or a hyper focus on a handful of harmful applications, the risk far outweighs benefits. I think this is the overly optimistic view she warns against.
2021-08-18 04:29:15 RT @caparsons: Apple says researchers can vet its child safety features. It’s suing a startup that does just that. https://t.co/ocK3lrVOnn
2021-08-16 18:09:13 @Aaron_Horowitz @ziebrah @hiddenmarkov @red_abebe @maxkasy lol yeah unfortunately they are often optimizing for just the volume and availability of the data, not its quality
2021-08-16 17:52:30 @ziebrah @hiddenmarkov Yeah, kind of reminds me of @red_abebe & Anyone in power can just decide on what it means to be worthy of a particular outcome...even if it's based on illogical assessments. Makes fairness impossible.
2021-08-16 17:35:26 @DataVizzdom @hiddenmarkov Yeah - it's here: https://t.co/NLFtwlbNQR
2021-08-16 17:29:15 @hiddenmarkov Yet another reminder that the pseudo-scientific tasks in ML go way beyond those obviously bogus facial recognition tools - we've already normalized making claims for ML to predict personality, emotion, criminality, etc. with inappropriate data from social media clicks, text, etc.
2021-08-16 17:12:39 So @hiddenmarkov just spotted the text based version of those phrenology/physiognomy papers... I mean, what does it mean to have the writing style of an “extrovert and liar”, and how would ML even learn that, and why should this be used to assess someone's loan application? https://t.co/eEZs6Okdgl
2021-08-15 15:33:30 @CatalinaGoanta @RebekahKTromble Thanks, will for sure take a look!
2021-08-15 14:16:48 @CatalinaGoanta @RebekahKTromble Agreed! Though how the DSA discusses the concept of auditing differs imo from how that activity is characterized in other policy contexts and how auditors will refer to themselves. These definitional differences are certainly worth addressing though - thanks for bringing this up!
2021-08-15 13:58:41 @RebekahKTromble Thanks for sharing this - will read further!
2021-08-15 13:57:29 @CatalinaGoanta @RebekahKTromble I don't think the term auditing needs to be used for the regulation to impact audit activity &
2021-08-15 13:54:16 @CatalinaGoanta @RebekahKTromble The "independent" auditors mentioned in Article 28 are not third party / external auditors - they are hired by the company and audit target. Functionally, Art 31 is the only one that seems to allow 3rd party auditors (in this case, just academic researchers) protected access.
2021-08-15 06:39:29 @RebekahKTromble Interesting - do you mind mentioning some examples? My understanding is that functionally, Article 31 is the most explicit instance of protecting third party auditors in the DSA, though you make a good point that it probably wasn't drafted with that intent.
2021-08-14 23:44:19 @natematias Wow this is an incredibly interesting analogy - thanks for sharing!
2021-08-14 15:54:35 @CatalinaGoanta Hm, in my mind there is a big difference between flagging bad content and getting protected access in order to identify more systematic issues in the moderation system. The latter is the role of the external auditor and to narrow that role to just academics is I think a mistake.
2021-08-13 16:37:22 RT @algorithmwatch: 1/9 AlgorithmWatch was forced to shut down its #Instagram monitoring project after threats from #Facebook! Read…
2021-08-13 16:24:52 A reminder that it's not just academic researchers that operate as third party external auditors to AI products. Investigative journalists, civil society, regulators, law firms and so many others can also play this role - and *all* of us need to be legally protected &
2021-08-13 15:31:47 Horrible - the way Facebook is strategically silencing third-party external auditors should be raising serious red flags for everyone hoping to see these companies being held accountable. https://t.co/MExtLHr8Kb
2021-08-13 15:09:30 Not enough people cite @LatanyaSweeney's early audit work in this space - glad to see that being mentioned here! She's a pioneer of the field and deserves more recognition for her contribution. https://t.co/bnU9l21tHM
2021-08-13 12:24:36 RT @daphneleprince: I wrote about this before when I spoke to @rajiinio, who is doing fantastic work to develop a methodology for algorithm…
2021-08-10 07:20:36 RT @EthanZ: Please support us in calling attention to this important battle between Facebook and NYU researchers. https://t.co/usK1WX3Fq0
2021-08-09 17:22:15 @KLdivergence Either way, totally happy for you you deserve fulfilling work in a safe environment.. and rest lol
2021-08-09 17:20:08 @KLdivergence lol Kristian I can tell you're really loving your new job
2021-08-09 15:41:50 Especially in machine learning, there is no real separation of "industry" and "research" - claiming this as a reason to abandon ethical responsibility as a researcher is such a weak excuse. It's no longer acceptable to say, "we don't need to think about this until deployment". https://t.co/ZcxruM8OCu
2021-08-09 15:28:30 RT @random_walker: To better understand the ethics of machine learning datasets, we picked three controversial face recognition / person re…
2021-08-09 15:14:03 RT @hiddenmarkov: Wow, this was an unexpected conclusion of the week! My submission was awarded the 1st place in the Twitter's Algorithmic…
2021-08-09 05:36:46 @hiddenmarkov @ruchowdh @TwitterEng Congrats Bogdan!
2021-08-07 21:37:32 RT @imchristiepitts: Me, at the airport, impulsively buying Wired bc @timnitGebru is on the cover https://t.co/saGSnGDWHc
2021-08-07 18:50:41 @emilymbender @jovialjoy @voguemagazine I know right?! Peak science communication lol and done in style
2021-08-07 18:46:49 Thrilled to see @jovialjoy in @voguemagazine! And challenging the biased image search results for "beautiful skin" We've gone mainstream, people! https://t.co/fCugR7IlOo
2021-08-07 18:38:11 @AllDeepLearning @alexhanna @Jenny_L_Davis @AprylW @safiyanoble Here you go: https://t.co/df59N8SqfF
2021-08-07 16:36:16 @alexhanna @Jenny_L_Davis @AprylW Wow, so excited to read this paper! @safiyanoble casually mentioned reparations in a panel once and it's been at the back of my mind ever since. Such a powerful framing!
2021-08-07 13:23:00 @MarkSendak @DrLukeOR @LukasBrausch @marylgray Just adding a note here to clarify that I've yet to comment on any of this yet & Though I suspect you may be talking past each other - Twitter isn't ideal for debate.
2021-08-06 18:49:42 RT @LeonDerczynski: "Double Standards in Social Media Content Moderation": new Brennan Center report: https://t.co/bNRSbRZDKC -- thanks @ra…
2021-08-06 14:29:08 RT @AbusiveLangWS: In 8 minutes, we open the 5th WOAH! Join us for a full day of keynotes from @LeonDerczynski Murali Shanmugavelan &
2021-08-06 00:47:21 This is why we need regulators to protect qualified third-party auditor access. Big tech companies should not have the ability to block external scrutiny of their products like this! https://t.co/upCkxrNzRM
2021-08-06 00:18:54 RT @anjalie_f: @rajiinio a lot times issues in AI/NLP applications are identified by communities of color, but the broader public / resear…
2021-08-05 22:09:04 RT @ZhijingJin: [Happening Today9am ET] We‘re hosting the 1st NLP for Positive Impact Workshop @NLP4PosImpact at #ACL2021NLP @aclmeeting…
2021-08-05 22:08:52 RT @MaartenSap: Also, I'll be moderating a panel on "The Future of NLP 4 Positive Impact" which will feature @pascalefung , @rajiinio, @bao…
2021-08-03 07:57:40 @wsisaac @DeepMind its really the least they can do after the revolutionizing you've accomplished congrats!
2021-07-30 18:00:58 RT @AfogBerkeley: In our first year, @AfogBerkeley member @AmitElazari championed the idea of bug bounties for identifying algorithmic bia…
2021-07-30 16:56:15 RT @TwitterEng: Calling all bounty hunters - it’s officially go time! We’ve just released the full details of our algorithmic bias bounty c…
2021-07-30 16:47:43 RT @geminiimatt: Much love admiration to the CRASH Project for trailblazing this research &
2021-07-30 16:23:50 RT @schock: @ruchowdh This is great to hear! Over at @AJLUnited we've got some related news to share https://t.co/EcmHHAgclV
2021-07-30 15:01:29 RT @AJLUnited: Happy Hacker Summer Camp Season! A CRASH Project update, from the team at @AJLUnited. https://t.co/s4RcStRtli https://t.co…
2021-07-29 22:26:35 RT @DrDesmondPatton: Excited to announce the Fall 21 lineup for our Race and Data Science Lecture Series at @DataSciColumbia. Please share…
2021-07-29 14:36:20 RT @jovialjoy: Your actions have inspired me to set boundaries I thought were not possible because of the weight of expectations. Too often…
2021-07-27 00:35:43 RT @schock: Two Muslims walk into a ... Large Language Model. https://t.co/J9dwmXVkTk
2021-07-26 20:21:39 @seanmmcdonald @jennifercobbe Thanks!
2021-07-26 20:18:07 @jennifercobbe Oh my. Very scared to ask but could you please elaborate?
2021-07-26 20:08:52 @mmitchell_ai @andrewthesmart @undersequoias @alexhanna Yet another thing we have in common
2021-07-26 19:18:27 RT @conitzer: Please share and respond to (by Sep 1) this request for public input on a National (US) AI Research Resource
2021-07-26 19:15:04 @Aaron_Horowitz @WHOSTP @Twitter Makes sense. Two of the most influential American institutions lol
2021-07-26 14:57:21 WOW THIS IS EXCITING! Congrats! https://t.co/J2x58woaLU
2021-07-25 02:26:57 @Miles_Brundage I don't think you'd personally call anyone a "hater", but this is language I see repeated explicitly by tech leaders following criticism of a product launch or AI tool. They say that to dismiss the criticism, by framing it as related to "other" things beyond their responsibility.
2021-07-25 02:24:11 @Miles_Brundage Yeah, sure - when it's implied that the criticism originates from motivations outside of pointing out failures/limits (and thus the harm these companies are responsible for), then it can be easily dismissed, related to things companies don't actually feel responsibility for.
2021-07-25 02:16:01 @Miles_Brundage Or they want things to work differently from how they do now. My main point though, is that this has much less to do with personal mistrust or dislike for individuals and institutions than you imply in the thread. Of course, I can only speak for myself and colleagues though.
2021-07-25 02:14:01 @Miles_Brundage I think a perceived de-coupling of criticism from failure/limits is exactly what people use to prop up the "hater" framing. You're right that I don't agree - I think many critics are concerned for those impacted &
2021-07-25 01:52:39 It can be disappointing to build something &
2021-07-25 01:48:22 I criticize to protect those communities with the knowledge I have in whatever way I can. I don't criticize because I don't like tech people or don't value tech companies. I mean, I've worked at tech cos, and have a lot of respect for tech workers. This has never been about them.
2021-07-25 01:45:11 I became disillusioned by the evidence. As in, I learnt more about how the technology worked and realized the narrative being told by these tech leaders was often misleading and most importantly harmful for communities I see every day and care about a lot.
2021-07-25 01:42:59 It is upsetting to see tech leaders respond to critics as though they are "haters", as if this is personal and there is some perceived value to negative reactions, because for many of us this is not the case. I came into tech optimistic and ready to believe in progress.
2021-07-25 01:38:57 I studied engineering &
2021-07-25 01:36:43 I catch glimpse of this perspective often - it genuinely surprises me. AI critics are not haters reluctant to give tech ppl their due credit. This in fact has nothing to do with tech ppl, is not at all personal. This is about those impacted &
2021-07-24 16:59:47 RT @sherrying: What’s your favorite band? https://t.co/Lbrxm9ukAx
2021-07-24 16:58:51 @alexhanna @sherrying @andrewthesmart Yeah - he told us! I feel like I got a limited edition original lol could be worth a lot in an NFT someday
2021-07-24 16:56:23 @sherrying @andrewthesmart ANDREW START SELLING THESE SHIRTS NOW
2021-07-24 15:53:49 @schock of course this would happen to you! Your twitter threads are legendary aha
2021-07-24 15:52:42 @ncooper57 Yeah, totally agreed with this - tests could have been more systematic. Thanks for sharing that paper, it looks great!
2021-07-24 03:41:18 @databoydg Congrats!
2021-07-24 03:26:25 Or any generated text for that matter (ie. to judge "this generated text is functionally doing what it's supposed to be doing - without mistakes") Code generation is really a brilliant use case for this!
2021-07-24 03:17:46 Like, imagine if we could measure the quality of machine translation based on operational functionality with some scenario test cases, as we can with code. Guess the closest we have right now is a human judge saying "yes, this translation is doing what it's supposed to be doing".
2021-07-24 03:14:57 I think about this a lot: "BLEU score may not be a reliable indicator of functional correctness. Functionally inequivalent programs generated by our model (which are guaranteed to disagree w the reference solution) often have higher BLEU scores than functionally equivalent ones."
2021-07-24 03:10:43 Don't think enough people talked about how interesting the evaluation situation was for Codex. It's a language generation model tied to a practical use case &
2021-07-23 19:28:54 @andrewthesmart @sherrying I get complimented every single time I put it on ahaha
2021-07-23 01:21:53 @lucy3_li Oh nice! Yes, let me DM you
2021-07-23 00:52:33 @chrmanning @ariannabetti Yeah - and I don't think this phenomenon is unique to NLP. There's a plethora of judgement-based tasks (inclu. toxicity detection but really most classification tasks) where there is no valid ground truth and annotation pretty much amounts to polling for majority perspectives.
2021-07-22 21:47:13 @davegershgorn Congrats Dave!
2021-07-22 18:19:31 @chrmanning @ariannabetti This looks great - thanks for sharing!
2021-07-21 15:17:37 @alexhanna @luke_stark All our faves taking over the white house - I love it aha
2021-07-21 15:17:05 @luke_stark ‼
2021-07-21 15:16:02 RT @_KarenHao: This @WSJ investigation is incredible. The team created 100 bots with different profiles (age, location, interests) to watch…
2021-07-21 04:32:24 I honestly could go on and on. Rashida has been tirelessly pushing for tangible accountability outcomes in the AI space for years - it gives me such joy to see her now put in a position to do even more (& Trust me, we've all won with this appointment! Well deserved
2021-07-21 04:32:23 5. Most importantly for me though - in conversation but also through her work, I've learnt so much from her about race. She sees racial dynamics with clarity & Ref: https://t.co/eQ0ZqiSDCz, https://t.co/oQJnnidUZ1 https://t.co/Xoxe4x5nEC
2021-07-21 04:32:22 4. I also particularly appreciated this recent output - a paper on sorting through policy discussions to land at a practical definition of "automated decision systems" (ADS), a concept policymakers regularly misunderstand & Ref: https://t.co/nOOApXLrzk https://t.co/glLnALBZyN
2021-07-21 04:32:21 3. There's also this *award-winning* work with @ambaonadventure, defining the risks of "suspect development systems" (SDS). It's a solid case for how databases and not just algorithms require regulatory oversight, as consequential surveillance tools. Ref: https://t.co/aTXGdGxm2l https://t.co/FbGEEvLpcg
2021-07-21 04:32:20 2. While @AINowInstitute, she led & Ref: https://t.co/Zl7zC6tvmd https://t.co/hQegckXyTN
2021-07-21 04:32:19 1. In the "Dirty Data" paper, she perfectly highlights a situation where "ground truth" can't be trusted, revealing how police departments that "juke the stats" (ie. manipulate data) still feed that corrupt data into predictive policing systems. Yikes Ref: https://t.co/UTU3i7vL1J https://t.co/J3TEAhOE5B
2021-07-21 04:32:18 Amazing news! Rashida Richardson is one of my favorite scholars. Her work has been transformational in the AI accountability space - she's revolutionized how we talk about data & For those unfamiliar, here are some of her papers I learnt a lot from. (THREAD) https://t.co/zogLs9wJIC https://t.co/2nYaMd2Mxk
2021-07-20 19:25:26 RT @mer__edith: This is such a wonderful appointment and so richly deserved The inimitable #RashidaRichardson is joining @AlondraNel…
2021-07-20 18:52:48 @struthious @841io @RDBinns oh, it's not what I was thinking of but this is a nice paper - thanks for sharing!
2021-07-20 18:50:55 @RDBinns @841io lol I don't think its any of these, strangely enough (the paper I'm thinking of has "utility" right in the title). But I love your title much better tbh aha the spice girl reference is gold
2021-07-20 17:49:16 RT @ButterKaffee: @tdietterich @asayeed Ground Truth is an illusion. Not only in NLP
2021-07-20 17:02:53 This is a good point, and something I now think about a lot. Sometimes annotators fundamentally disagree because they have different worldviews that cause them to perceive the data differently -- and not because anyone is particularly wrong. (eg. see: https://t.co/C15VSPLV6L) https://t.co/BhpuixUmql
2021-07-20 16:08:07 @jacyanthis I think policymakers have been willing to present at ICML workshops in the past, and even including a non-ML researcher with policy experience would provide a much needed difference in perspective. I'm sure organizers did their best, but the debate framing here is disappointing.
2021-07-20 14:32:24 *whether, ugh
2021-07-20 14:29:06 Am I missing something or is there not a single regulator as part of this discussion? It's possible the debate question is just awkwardly worded but it will not be up to the ML community wether or not they get regulated. Hopefully someone else will be making that decision. https://t.co/ZLjbGilqOh
2021-07-20 00:56:57 h/t @davegershgorn
2021-07-20 00:56:37 oil is the new data lol https://t.co/rBMVG8Po29
2021-07-19 21:37:17 @baykenney @black_in_ai Thanks
2021-07-19 21:36:52 @jennwvaughan @black_in_ai Thanks Jenn!
2021-07-19 21:36:29 @ruha9 @black_in_ai @Berkeley_EECS @berkeley_ai Thanks, Ruha!!
2021-07-19 21:35:58 @ndiakopoulos @black_in_ai @Berkeley_EECS @berkeley_ai Lol no pressure but thank you
2021-07-19 21:12:32 @841io @RDBinns Interesting. I recall that there's a CHI paper on exactly this idea of better measuring utility for online interactions, but can't remember the title
2021-07-19 20:44:36 Not at all an exaggeration to say that I'm in research today (and will hopefully continue) becuase of the support of groups like @black_in_ai. For anyone volunteering for any affinity group, know that your work is important & Representation matters! https://t.co/N0QpAD3HRf
2021-07-19 04:54:11 RT @fborgesius: “Auditing machine learning algorithms”, A white paper for public auditors, by the Supreme Audit Institutions of Finland, Ge…
2021-07-16 20:44:28 @_alialkhatib lol parental supervision required, truly
2021-07-16 16:59:53 @Aaron_Horowitz
2021-07-16 16:55:53 There's a Hardware Lottery (https://t.co/fmOhAc6kMo) and now a Benchmark Lottery - what if all progress in ML is just a sleight of hand...? https://t.co/Ap91xyDDRe
2021-07-16 16:42:48 RT @mmitchell_ai: I did an interview with Venturebeat about AI Ethics, Corporate structures, and Inclusion. Also talk a bit abt the firing…
2021-07-16 00:07:42 RT @PhilipDawson: Mapping of existing #AI standards development efforts onto requirements of EU Artificial Intelligence Act... @EU_ScienceH…
2021-07-15 21:10:50 @Combsthepoet
2021-07-15 14:15:45 @_datamimi @RoyalStatSoc @turinginst @rajinio @timnitGebru @jovialjoy Congrats! :)
2021-07-15 04:36:01 RT @katherine1ee: Data duplication is serious business! 3% of documents in the large language dataset, C4, have near-duplicates. Dedupl…
2021-07-15 04:29:31 RT @FOX2News: Staffers barred her entry saying she was banned after her face was scanned - saying Lamya was involved in a brawl at the skat…
2021-07-15 04:28:42 @undersequoias @Roblox it is so precious to me that he uses the "@" to try to pre-empt mismoderation in his defense lol what a hero
2021-07-15 04:26:43 RT @Combsthepoet: A 14 year old Black girl in Livonia, MI was misidentified by facial recognition at a skating rink. She was accused of bei…
2021-07-14 05:03:53 RT @ProfFerguson: Proprietary algorithms should not be allowed into evidence, if they can’t be tested by experts. This shouldn’t be hard fo…
2021-07-10 16:17:36 @JordanBHarrod Happy Birthday
2021-07-09 15:14:05 I learnt a lot about this from @red_abebe's paper on "Narratives and Counternarratives on Data Sharing in Africa" (see: https://t.co/c9ADtLHJhA). https://t.co/fIVdXEjnSO
2021-07-09 04:41:32 RT @Miles_Brundage: Excited to finally share a paper on what a huge chunk of OpenAI has been working on lately: building a series of code g…
2021-07-09 02:45:30 @krvarshney
2021-07-09 02:39:57 @Mehtabkn Yeah, these are great points! I do still think it's an under-utilized legal strategy tho, and my hope is that impacted individuals can still sue in civil court or something if the company's claims are proven false and disingenuous.
2021-07-09 02:17:35 Thanks @ruchowdh, @mona_sloane, @MannyMoss for writing this: https://t.co/9eZ2FhiYAq https://t.co/2wv1NfZRj6
2021-07-09 02:17:34 AI hiring tools have proven to be an excellent example of this - I don't want to hear about the 4/5th rule or EEOC when the product is pseudoscience. https://t.co/tRjCDYZmgb
2021-07-09 01:35:36 RT @LeonYin: If anyone wants a list of hate and social justice phrases to run thru, say, TikTok... Here are several we built with civil rig…
2021-07-08 08:52:37 @d4br4 It's clear to me why the reaction here is different, even when the stakes really aren't.
2021-07-08 08:52:24 @d4br4 Yup, that's a really valid point, and apologies if my initial reaction seemed to discount that! Honestly, the reply was more directed to other comments I was seeing at the time, comparing the copilot situation to the handling of CC licensed photos in image datasets, etc.
2021-07-08 08:20:16 @d4br4 Nope, I don't think writers don't care - there's just a less active community monitoring this the way the open source community does w/ code licenses. I'm not saying other cases of copyright violations are less important, just noting that it's unsurprising it's code sparking this
2021-07-08 01:36:29 I keep seeing comparisons to Creative Commons licenses lol The attention to detail &
2021-05-21 13:35:18 RT @schock: Because using a technology known to systematically discriminate against people with darker skin as a checkpoint to access unemp…
2021-05-21 00:56:40 RT @RoxanaDaneshjou: And if you think that the FDA will protect us, please check out our paper on AI devices in medicine: https://t.co/BgrK…
2021-05-21 00:52:32 @RoxanaDaneshjou @alexhanna @Abebab Great thread! I especially appreciate the point on advertising performance despite the lack of clinical trials. One of my main frustrations with ML in healthcare applications.
2021-05-21 00:49:26 RT @RoxanaDaneshjou: What does it mean to launch a tool "meant to help patients" but shouldn't be used as a diagnostic? Google is launching…
2021-05-20 23:41:53 RT @jackbandy: For those who were interested, I put a template for the "dataset nutrition label" card on @overleaf! Should be easy to reuse…
2021-05-20 17:01:01 RT @zenalbatross: when asked about the risk of the app misdiagnosing Black and brown folks, Google's response was basically 'It's not our f…
2021-05-20 17:00:33 RT @zenalbatross: NEW: Google released an app that diagnoses skin conditions based on photos.... but it won't work on people with darker sk…
2021-05-19 22:55:21 @jefrankle @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited Thanks - though we were definitely directly influenced by your early work at Georgetown! "Perpetual Lineup" was the first thing Joy made me read when I started working with @AJLUnited lol https://t.co/KWolYPpKXy
2021-05-19 22:00:39 @mnirPRJCTS Oh so sorry - I am not good at email and it's likely in spam depending on where you sent it! I'll dig up the email and respond shortly
2021-05-19 21:54:02 In light of frequent erasures, we might need something like this but for AI? https://t.co/DfGCbHUu2L
2021-05-19 21:00:25 RT @alvarombedoya: There is a "nothing-to-see-here" tone to this statement from @60Minutes that, combined with the failure to make any ment…
2021-05-19 20:44:53 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited I know it's unproductive to focus on "firsts" but Joy is indeed a pioneer here. Much of our knowledge of intersectional &
2021-05-19 20:39:50 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited The 60 minutes segment was about individuals misidentified because of racial disparities in performance, and the NIST study they focus on is the 2019 one, which was the first one in which NIST includes evaluations for race for the first time (all because of @jovialjoy's work)!
2021-05-19 20:36:28 @ghoshd @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited Not necessarily - though I do see audits as being part of a participatory evaluation approach of sorts. I find them to be one mechanism for humans to collectively assess models and determine if it's appropriate for deployment in a given context.
2021-05-19 20:34:37 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited Also, before Gender Shades, no one was evaluating this performance intersectionally. Looking at performance disparities across subgroups at the intersection of race and gender, for example, reveals more remarkable failures than looking at disparities across unitary subgroups.
2021-05-19 20:32:15 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited But afaik, NIST didn't audit for performance differences due to race until 2019. This was the first year NIST included a "part 3" for FRVT (for a demographics performance disparity test), and they cite Gender Shades as the direct inspiration and motivation for them doing so.
2021-05-19 20:28:50 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited There are other papers about a decade later referencing similar observations (eg. "Face Recognition Performance: Role of Demographic Information" from 2012 and "An other-race effect for face recognition algorithms" in 2011, which we regularly cite in our work).
2021-05-19 20:26:26 @roydanroy @cfiesler @jovialjoy @60Minutes @AJLUnited The 2002 NIST reference is probably from one of the first FRVT results, which includes this comment: "Demographic results show that males are easier to recognize than females and that older people are easier to recognize than younger people." https://t.co/0N4ZEzmq9d
2021-05-19 19:53:05 @ndiakopoulos @NeurIPSConf Happy to see this work!
2021-05-19 19:52:45 RT @priyakalot: Curious about how AI researchers are thinking about the societal consequences of their work? We (@JessicaHullman @ndiakopo…
2021-05-19 19:52:25 RT @ndiakopoulos: Hope the @NeurIPSConf organizers will take note https://t.co/9CoIImet4X
2021-05-19 13:50:40 @Combsthepoet @schock @imlwilliams @jovialjoy @timnitGebru Thank you for fighting so hard despite the struggles and lack of recognition. I know it comes from pure love
2021-05-19 13:41:56 Seriously. All this advocacy took so much energy & I am literally begging journalists to be more honest about this. These are unbelievable outcomes from large tech cos - name those involved in making this happen https://t.co/LFFCURTPxe
2021-05-19 13:25:55 RT @AJLUnited: This is great news!! When we rise up together, justice wins! #SurveillanceIsntSafety https://t.co/G59O3MR6Yg
2021-05-19 01:09:45 Wow - @timnitGebru @iajunwa & No doubt that some serious truth is about to be dropped. https://t.co/ZzrjselaV5
2021-05-19 01:06:32 RT @conitzer: The AI, Ethics, and Society Conference starts tomorrow (Wednesday)! @AIESConf See the program here: https://t.co/Lfpbukygl2
2021-05-19 01:00:22 RT @richardjnieva: New: I chatted with Google AI chief Jeff Dean about the controversy involving @timnitGebru and @mmitchell_ai.
"The reput…

2021-05-18 20:40:46 @Leo___Sturm Yep - decisions are being made, we're just asking to make them more carefully and with intention, please. No mythically "unbiased" dataset exists, neither is it necessary here.

2021-05-18 20:31:10 Bias in data is inevitable but there's nothing inevitable about deploying a product that doesn't work on a vulnerable population.

2021-05-18 20:17:59 RT @Combsthepoet: I don't have the energy to say all I want to say, but I also spoke to 60 mins about the organizing on the ground in Detro…

2021-05-18 19:53:14 RT @ClareAngelyn: I did NOT expect this but am not at all mad about it https://t.co/lBPqeVMXY3

2021-05-17 14:59:29 @geomblog @WHOSTP @AlondraNelson46 @BrownUniversity @BrownCSDept @Brown_DSI @senykam @UtahSoC Congrats! Never been more excited to have someone join the government lol - please tell them everything, especially all that stuff about AUC being a useless metric

2021-05-17 14:11:06 By the way, if you're looking to see the absolute other side of the spectrum - a feature on algorithmic bias with Black people everywhere, highlighting the diverse pool of researchers working on this issue, then I really suggest checking out this recent episode of #UnitedShades https://t.co/Jm5ixtzVY5

2021-05-17 13:36:16 @karaswisher @jovialjoy the irony here is literally killing me

2021-05-17 13:12:46 @jovialjoy @AJLUnited This is the thing that frustrates me the most. How could someone waste your time like that? As if you don't have so many other things to do. It's unbelievable!

2021-05-17 12:38:32 @jovialjoy And this time, the choice was so deliberate - that NIST study cites all of our work directly as inspiration &

2021-05-17 12:29:55 I'm getting tired of this pattern. At this point, @jovialjoy has to spend almost as much time actively fighting erasure as she does just doing her work. It's a waste of everyone's energy &

2021-05-17 12:16:43 RT @jovialjoy: @60Minutes producers spoke to me for many hours. I even spent additional time building a custom demo for @andersoncooper and…

2021-05-16 20:14:14 @wsisaac I don't know if this is an actual pattern but I think a lot of social scientists read Rubin and a lot of ML folks read Pearl - it's funny to see you reading both!

2021-05-16 20:10:17 @wsisaac Omg why do I have two of these books myself

2021-05-14 20:36:25 RT @NeurIPSConf: The #NeurIPS2021 paper submission deadline has been extended by 48 hours. The new deadline is Friday, May 28 at 1pm PT (ab…

2021-05-14 11:17:14 RT @anyabelz: Very excited to announce that the #ReproHum project on Reproducibility of Human Evaluations in #NLProc with @ehudreiter is ge…

2021-05-13 20:06:17 RT @CohereAI: Model cards are an important step towards the responsible productization of machine learning and the usability of derived too…

2021-05-13 20:05:44 RT @paperswithcode: Introducing Datasets on arXiv! The new "Code & Read mo…

2021-05-11 13:04:28 @benwagne_r

2021-05-10 22:46:49 @Abebab Billionaires in general lol - they don't even have to give their money away.

2021-05-10 22:44:02 RT @eveikey: Please join us for @rajiinio's exciting talk on Wednesday at 4pm Pacific! The session is open to all, and ASL services will be…

2021-05-10 22:43:36 RT @DesignLabUCSD: This Wednesday's #DesignAtLarge, join us as Inioluwa Deborah Raji (@mozilla) sheds some light on the problems we face wi…

2021-05-09 23:13:38 @BlakeleyHPayne @cfiesler @allergyPhD @Greene_DM @morganklauss Thanks! @amironesei contributed greatly to that paper as well - it's here for anyone that may be interested in checking it out: https://t.co/R6iFUQFAqu

2021-05-07 13:55:06 RT @EugeneVinitsky: What if we just didn't bold the overlapping distributions https://t.co/XYtBGRbwso

2021-05-06 21:30:39 RT @SEDL_workshop: We have a great line-up of speakers and panelists: @alexhanna Joëlle Pineau @adinamwilliams @pushmeet @adyasha10 @rajiin…

2021-05-06 20:11:35 RT @marc_schulder: From a #ACL2021NLP reviewer perspective I must say I was very happy about the new ethics board and guidelines. It gave m…

2021-05-06 15:01:19 RT @bluevincent: Tomorrow I will serve as moderator for the Social Impact of ML Research session featuring Deb Raji (@rajiinio), Adina Will…

2021-05-06 14:49:33 RT @AdaLovelaceInst: Today we're excited to announce a new project exploring solutions to the unique ethical risks that are emerging in ass…

2021-05-05 23:18:39 @ZeerakW @AlvinGrissomII @Abebab Oh, that's fair - it's common to just use whatever info you have about people to gauge first impressions. I guess I'm thinking of those that go a step further to really shame people for taking a job they need. And these people are often hyper-focused on minorities :(

2021-05-05 22:58:44 @ZeerakW @Abebab oh I respectfully disagree here - I don't think rank &

2021-05-05 22:18:29 RT @sayashk: @rajiinio @Abebab Someone shared this excerpt here a while back (from Kai Cheng Thom's book), and it seems very applicable! ht…

2021-05-05 22:13:56 @sayashk @Abebab Amazing - thanks for sharing this!

2021-05-05 21:29:04 Also, it's unbelievable that someone is targeting @Abebab, of all people. She's more fearless in her critique of companies than anyone I know. Pls stop acting like corporate affiliation is the entirety of what a person is - declare conflict of interest when relevant &

2021-05-05 21:29:03 Why do people do this? If you don't believe in working at a company, then that's fine - but why police someone else's choice? ML/AI products are a veritable mess - there's more than enough work for those on the outside and lots of internal accountability work to be done as well https://t.co/gAGmo2e2jk

2021-05-05 14:56:40 @Gabuhamad @FAccTConference @AIESConf For my own sanity, I can't get involved in yet another project, but please share anything you write about this - more than happy to cite &

2021-05-05 14:55:37 @IasonGabriel Thanks for your leadership on this at Neurips!

2021-05-05 04:52:39 @timnitGebru @negar_rz @FAccTConference @AIESConf

2021-05-05 01:46:05 I'm pretty excited to see mainstream ML conferences like NeurIPS and ICLR incorporating some level of ethical reflection in reviews. This is a step that I don't think even @FAccTConference and @AIESConf have taken yet. https://t.co/OPLzsj3mjg https://t.co/1g4Ph1dZ2k

2021-05-04 22:27:53 RT @arxiv: arXiv is free to use, but not free to operate. If everyone who visits arXiv this month gives just one dollar, we would meet our…

2021-05-04 17:22:13 RT @shakir_za: The #ICLR2021 Town Hall is coming up soon. If you are participating, the OC update has so much good info on ethics, thoughts…

2021-05-04 16:52:04 RT @bsmith13: I'm hiring a Policy Research Analyst! https://t.co/4VQ4MdHQYq https://t.co/OYnjOQGU5F

2021-05-04 03:01:07 RT @peard33: Apple hires ex-Google AI scientist Samy Bengio who resigned after colleagues' firings https://t.co/lxkWqK6dHc

2021-05-04 02:59:01 RT @rishiyer: Hi @NeurIPSConf, sincere requests from many of my friends, collaborators (and most importantly students) affected by COVID to…

2021-04-30 16:56:29 RT @nowthisnews: A false match by facial recognition technology sent this man to jail for a crime he didn’t commit https://t.co/F2SOsHWCHR

2021-04-30 15:08:16 RT @SNolanCollins: During last Tuesday's hearing, @maziehirono asked Facebook if they were complying with all federal civil rights laws.
Fa…

2021-04-30 12:18:19 RT @wsisaac: The NYPD is ending its controversial robot dog trial https://t.co/V6gZy5K8NF

2021-04-30 00:43:10 RT @math_rachel: Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing, from AIES 2020, by @rajiinio @timnitGebru…

2021-04-29 22:33:58 RT @hypervisible: “…because no federal guidelines exist to limit or standardize the use of facial recognition by law enforcement, states --…

2021-04-28 16:59:28 RT @GretchenMarina: @math_rachel @rajiinio @AINowInstitute I also like this piece by @alfredwkng in @themarkup (which itself might be on my…

2021-04-26 20:47:37 @EugeneVinitsky @NeurIPSConf @jennwvaughan Personally though, I agree that if an extension was provided to those affected by US protests last summer, one should definitely be made available to those meaningfully impacted by India's COVID situation.

2021-04-26 20:45:44 @EugeneVinitsky @NeurIPSConf I'm not sure - I feel like there should be a way to collectively contact program chairs directly to make such a request? @jennwvaughan may have some feedback on this.

2021-04-25 21:41:19 Many things in this thread are true but I want to highlight this. I spent a long time focused on identifying a safe space for myself to do my work. Some tried to rush me &

2021-04-25 21:18:45 I agree - and we shouldn't hesitate to make such direct requests to @NeurIPSConf (or any conference) organizers, especially when this kind of accommodation is necessary. Sending much love and strength to those affected by the dire COVID situation in India right now. https://t.co/Q0EscWE7PL

2021-04-25 20:16:09 RT @hima_lakkaraju: As I struggled to deal with the impact of COVID on my family members in India, I got delayed by a day for submitting my…

2021-04-24 12:34:24 I can't overstate how much this is true. Even AI critics will fall for the PR hype, discussing ethics in the context of some supposedly functional technology. But, often, there is no moral dilemma beyond the fact that something consequential was deployed &

2021-04-22 18:20:15 Love how much @radical_ai_ 's one brave question at the @FAccTConference 2019 Town Hall shifted people's thinking on this. She went up to the mic, quoted "Data Feminism" &

2021-04-22 14:06:37 @andrewhu_ @morganklauss @amironesei Thank you! Glad you enjoyed it :)

2021-04-22 14:03:05 @mmitchell_ai @timnitGebru @JeffDean

2021-04-20 23:05:34 RT @drlungamam: Police were alerted to the area that ended in the murder of 13 yo Adam Toledo by Shot Spotter, a smart city technology incr…

2021-04-20 19:56:36 @Combsthepoet I'm so nervous... my mind has been on this all week.

2021-04-20 19:40:27 RT @ComputerHistory: .@rajiinio's one word of advice is "courage", because it will take courage for the next generation to create change. "…

2021-04-20 19:37:00 RT @melindagates: I’m very grateful for @jovialjoy’s leadership in the growing movement to confront bias in algorithms. I hope as many peop…

2021-04-20 19:33:42 RT @ComputerHistory: "We treat racist AI as if they're a racist human being, not an automated technology. Usually, you have a situation whe…

2021-04-20 19:33:26 RT @ComputerHistory: "It's very easy to forget the people at the intersection. It's easy to forget Black women when talking about Black peo…

2021-04-20 19:33:18 RT @ComputerHistory: "Machine learning requires a lot of data. Having to define the datasets, label it, makes people concerned since it add…

2021-04-20 19:33:02 RT @ComputerHistory: "Institutional barriers are some of the hardest to resist, and that includes the ones in tech." - @rajiinio #CHMLive

2021-04-20 19:32:46 RT @ComputerHistory: "Sometimes these failures happen to groups that are most vulnerable. If you have a police department that's prejudiced…

2021-04-20 19:32:34 RT @ComputerHistory: "As someone who's part of the Black community, I am emotionally affected by seeing how people of color are impacted. W…

2021-04-20 19:32:25 RT @ComputerHistory: "I've felt this lack of response even in spaces that are meant to talk about AI ethics or fairness. Those concerns get…

2021-04-20 17:19:42 As an aside, here's a run-down of another creative audit @LeonYin designed - using a "staining" method to identify the amount of real estate Google was giving to its own products in the search result pages. Still one of my favourite things to read about. https://t.co/O5V75eivXG

2021-04-20 17:19:41 Wow, check out the methodology for the latest @themarkup audit of Youtube advertising blocklists. Journalists, regulators & https://t.co/84RDXVLTqb

2021-04-20 16:21:34 RT @jackbandy: Ever wondered how your algorithmic timeline is different from the old-fashioned chronological timeline? Lucky for you,…

2021-04-19 22:03:44 @swabhz @CSatUSC @nlp_usc Yay!

2021-04-19 18:25:40 RT @karaswisher: Important interview with ⁦@jovialjoy⁩ about where we are since she unmasked racial bias in facial software: Opinion | She’…

2021-04-19 03:59:24 And we absolutely need regulation, to address situations where corporate incentives are fundamentally misaligned with preventing harm. Advocating for internal accountability is not the same as advocating against external accountability. That's a false dichotomy - we need both!

2021-04-19 03:59:23 Projects like ABOUT ML are what an idealized PAI should be doing. If you ever talk to @mmitchell_ai, @hannawallach or @erichorvitz about the org, this goal is immediately clear. However, it's unclear to me if the field or the org itself understands these issues to be a priority.

2021-04-19 03:59:22 For example, in the face of several embarrassing failures, some companies (IBM, Google, Microsoft, etc.) realized engineers weren't even documenting what was happening &

2021-04-19 03:59:21 Ideally, interventions like @PartnershipAI are supposed to support industry in addressing situations within Case 1. In other fields where functionality is also a safety issue (medical devices, aerospace, etc), they have similar industry collectives thinking through best practices.

2021-04-19 03:59:20 "Case 2" problems are not like this. To stop deploying a perfectly "functional" product, for ethical reasons alone, completely contradicts corporate incentives. We should be suspicious of company input on such issues &

2021-04-19 03:59:19 I often frame "Case 1" as engineering responsibility or safety issues - in my opinion, these situations should not be controversial. It's much more straightforward to refute stated performance claims than make a moral argument for why a functional product should be recalled.

2021-04-16 16:21:23 RT @ComputerHistory: As a computer scientist and activist, @rajiinio has worked closely with @AJLUnited on several award-winning projects t…

2021-04-16 16:20:33 RT @ComputerHistory: Is AI racist? Can it be used for anti-racist outcomes? Join us on 4/20 for a conversation featuring @lilich, @cmcilwai…

2021-04-16 13:05:27 RT @baxterkb: Look who are the keynote speakers at the @AIESConf (#AIethics &

2021-04-16 11:50:30 RT @autreche: Super excited to co-organize the ICML Workshop on Algorithmic Recourse (https://t.co/GKuGg52okP) with @hima_lakkaraju @strati…

2021-04-16 11:48:10 RT @bansalg_: Do you work on AI/ML + HCI? We invite submissions for our new journal--- Special Issue on "AI for (and by) the People" http…

2021-04-15 16:38:45 @reshamas @PartnershipAI @hannawallach @mmitchell_ai @OTI @EFF @IBM @Sony @LeverhulmeCFI @jennwvaughan @timnitGebru @Jinyingyang @HannahWallach I know there's this: https://t.co/8fXNhopi5n @jingyingyang can definitely point you to where the draft white paper now lies.

2021-04-15 12:11:23 RT @anyabelz: So, wow. We have 326 people registered for #HumEval workshop at #EACL2021!!!
If anyone else wants to come, hey I'm sure we ca…

2021-04-15 10:48:40 @DMulliganUCB @luke_stark @Berkeley_EECS @beenwrekt @red_abebe @jennaburrell @AfogBerkeley @BerkeleyISchool @emma_lurie @JiSuYoo1 @zoebkahn @lizzielansdowne @xiao_sijia @liza_gak Nice - looking forward to meeting everyone!!

2021-04-15 10:43:05 @LeonYin With the initials it's going to be Dr. D.R. and I'd be lying if I said this doesn't make me just a lil excited.

2021-04-15 10:35:53 @lucy3_li @Berkeley_EECS @beenwrekt @red_abebe @BerkeleyISchool Definitely!

2021-04-14 21:42:13 @red_abebe @databoydg

2021-04-14 21:41:38 @red_abebe @Berkeley_EECS lol you really are the best at recruiting tho.

2021-04-14 21:21:21 As an aside: minority students will work with you if you value their work, make them feel safe &

2021-04-14 21:21:20 Like so many woc, my undergrad experience was difficult, socially isolating &

2021-04-14 21:21:19 I'm starting a CS PhD @Berkeley_EECS this Fall, working with @beenwrekt & Doing a PhD is a casual decision for some - that wasn't the case for me. I appreciate everyone that respected me enough to understand this &

2021-04-14 18:39:31 @mmitchell_ai I wish I could give a lecture simply titled "Erasure", and just list out one instance after the other of this happening to minorities and women in the ML field over and over, for no reason whatsoever.

2021-04-14 18:00:44 No one is allowed to forget the origin story of ABOUT ML (https://t.co/8fXNho7HdP). These women were pioneers in establishing documentation practice not just at their respective institutions (Google, Microsoft) but across the entire industry! Not a small feat, at all. https://t.co/9k7MZ0GTEu

2021-04-14 17:45:06 @mmitchell_ai @PartnershipAI @OTI @EFF @IBM @Sony @LeverhulmeCFI @hannawallach Very strange to omit you two - who really pushed for the partnership to start this initiative in the first place

2021-04-14 10:58:55 RT @LeonYin: YouTube blocked advertisers from finding videos related to "Black Lives Matter," but not "White Lives Matter." https://t.co/M…

2021-04-13 20:14:52 RT @drewharwell: New: Detroit police wrongfully arrested Robert Williams in front of his two young daughters after a bad facial recognition…

2021-04-13 19:59:00 @yoavgo @timnitGebru @nsthorat @mmitchell_ai Sure, but this is research. It's not as if her participation was exchangeable, her contributions were very unique to her as a person. They didn't value what she did for them, which is why they dismissed her and did so disrespectfully. Either way, we can agree to disagree lol

2021-04-13 19:54:04 @yoavgo @timnitGebru @nsthorat @mmitchell_ai Yes, that's right. Why don't you think that's the case?

2021-04-13 19:52:03 @yoavgo @timnitGebru @nsthorat @mmitchell_ai I think Google values both diversity and ai ethics work in the abstract, but was dismissive of Timnit's specific contributions to these broader efforts in both cases.

2021-04-13 19:50:42 @yoavgo @timnitGebru @nsthorat @mmitchell_ai Yeah - it's possible there's some disagreement here &

2021-04-13 19:38:38 @yoavgo @timnitGebru @nsthorat @mmitchell_ai I don't doubt that ethics work continues to be valued at Google. I just don't think @timnitGebru's own legitimate contributions to that work were valued in the way they should have been. Her dismissal happened in a context where her specific contributions were truly being sidelined.

2021-04-13 19:33:41 @yoavgo @timnitGebru @nsthorat @mmitchell_ai Hm, I don't think that's quite right. I think the claim is that there's a version of ethics work that is valued at Google, but the valid contributions of especially marginalized individuals are not equally valued as part of that work. They are thus more easily dismissed.

2021-04-13 19:26:33 What Timnit says here is so correct in so many ways. The lack of diverse leadership at most AI ethics institutions reveals how much more can be done to protect &

2021-04-13 19:14:53 @timnitGebru @nsthorat @mmitchell_ai

2021-04-13 18:16:54 @timnitGebru @nsthorat @mmitchell_ai Yep - even if this general work could still be valued at the company, it's clear that your specific contribution to that broader agenda was not valued. I think this is my point.

2021-04-13 18:14:50 RT @D_schoolz: Im not special and not in the AI community but I stopped interviewing with Google last month for mobile software engineering…

2021-04-13 17:42:43 @nsthorat @timnitGebru @mmitchell_ai Happy to hear this but I think Google definitely underestimated their impact on the research community, for sure - otherwise the dismissals would have been handled with much more caution &

2021-04-13 17:15:03 I said this because I believe it: “It was really easy for them to fire her because they didn’t value the work she was doing”. Unfortunately for Google, much of the research community disagrees. Proud to continue standing by @timnitGebru & https://t.co/SDxjwXfkv9

2021-04-13 17:00:48 RT @jjvincent: The fallout from Google's firing of @timnitGebru and @mmitchell_ai continues to shake the AI community — I wrote about ongoi…

2021-04-12 20:03:33 RT @StanfordAILab: The Stanford AI Lab supports and deeply appreciates the talented Iranian members of our community. We strive for equitab…

2021-04-12 15:45:53 @DataSciBae But there's so much more though. I want to see romcoms! And the past has Black joy, if people just pay attention. I want to see pre-colonial African royal dramas

2021-04-12 15:08:18 RT @sapiezynski: This shouldn't be controversial but it apparently is. If you're doing something shown to be harmful, you should not get t…

2021-04-12 12:54:42 RT @ReviewAcl: ARR is now accepting submissions! Please see https://t.co/wiXNAvVhZO for an overview of the submission form and link to the…

2021-04-12 05:05:56 RT @techreview: The 10 most cited AI data sets are riddled with label errors, according to a new study out of MIT, and it’s distorting our…

2021-04-11 20:54:18 @agstrait @ruchowdh

2021-04-11 10:54:50 RT @black_in_ai: Appreciation post for @sindero who is a black pioneer in the field of machine learning (20y experience) and is the co-inve…

2021-04-11 01:20:29 Simon was the first real ML researcher I met (sat beside me at the first @black_in_ai workshop!) - so annoyed to hear of anyone attempting erasure of his many important contributions. https://t.co/5TM9hgzMCX

2021-04-09 21:04:43 RT @korolova: Facebook’s ad delivery algorithms have been known to introduce bias and create echo chambers for job, housing and politics ad…

2021-04-09 16:50:45 RT @_KarenHao: The latest audit of FB's ad service shows it's still excluding women from seeing job ads without regard to their qualificati…

2021-04-09 14:26:11 Shouldn't there be more serious consequences for this? The POST Act (which passed through NYC City Council this June) has provisions "requiring the NYPD to disclose basic information about the surveillance tools it uses" (see: https://t.co/LtWVRjtBNh). https://t.co/45f97YABuj

2021-04-09 00:29:35 @BobGoffer Agreed!

2021-04-09 00:27:46 @JulianPosada0 It's actually an evaluation dataset, so not technically meant to be used for training. But yes, this is a good point - there doesn't seem to be meta-data on geographic context. Do you think that would cause important changes to the image/video/audio?
2021-04-08 22:17:50 RT @VPjedwards: Breaking: It will be illegal for nearly all Virginia law enforcement agencies to use facial recognition technology starting…

2021-04-08 22:04:34 RT @hypervisible: "The models failed to detect faces in images labeled as including Black faces 57 percent of the time. Some of the failure…

2021-04-08 22:02:54 @BobGoffer Oh, thanks for sharing - I'll take a look!

2021-04-08 21:55:09 @BobGoffer Thanks for sharing that context. Yeah, it's worth thinking about how this will play out as a resource within the company. I was actually thinking of this as a resource for the audit community - given how difficult it can be to ethically source diverse data for evals, it's useful.

2021-04-08 20:13:11 RT @rajiinio: Also, to clarify just how *difficult* this problem of ethically constructing such datasets is - take a look at this paper on…

2021-04-08 19:13:23 @dlowd Yeah, I understand completely. My understanding is that companies this large also contain many great researchers doing good work (@mbogen, for example!) despite the visible mistakes made by leadership, policy teams, etc.

2021-04-08 19:07:04 @AutoArtMachine Yeah, I agree that would have been valuable here - I think the motivation for doing this was that for a computer vision system, which the dataset was created to evaluate, the "visual cue" determining prediction performance is skin type, not necessarily self-identified race.

2021-04-08 19:06:19 @hannawallach btws I really enjoyed this paper! We explored similar issues in our "Saving Face" paper (reflecting on the challenges of doing Gender Shades-style audits), you might like it! https://t.co/qyGrhn3hHF

2021-04-08 17:28:35 Also, to clarify just how *difficult* this problem of ethically constructing such datasets is - take a look at this paper on "Designing Disaggregated Evaluations of AI Systems". We also discuss similar challenges in "Saving Face" (https://t.co/qyGrhn3hHF). https://t.co/vjtMhd09GW

2021-04-08 17:21:15 h/t @lizjosullivan for putting this on my feed

2021-04-08 17:20:05 To be clear, facial recognition in particular is a technology I don't support - it hurts people when it doesn't work and also hurts people when it does. But as we push back, it's important to flag that current failures are inexcusable. You can't deploy a tool &

2021-04-08 17:12:52 They paid them and asked for consent. This is larger & Functionality is just one of *many* issues, of course, but now there's literally no excuse for a tool not to work on minorities. https://t.co/A6iyyjaiY5

2021-04-08 13:34:32 @kgajos We had a workshop featuring some meta-papers that included examples of this after @NeurIPSConf's requirement last year (deets at https://t.co/V8izkEO9Yo). I think "Like a Researcher Stating Broader Impact For the Very First Time" is likely most relevant (https://t.co/HTLSL1h1Vb)

2021-04-07 23:59:07 Finally https://t.co/QwU6ajqknW

2021-04-06 20:40:43 RT @NandoDF: Bravo @kchonyc Highly deserved! https://t.co/A9Uom1aVZf

2021-04-06 19:53:05 RT @josheidelson: Scoop w/ @NicoAGrant &

2021-04-06 19:51:54 RT @L_badikho: The resignation of Samy Bengio is a big loss for Google. Samy co-founded one of the most fundamental research groups in indu…

2021-04-06 17:41:59 RT @RMac18: It took more than a year to report this story. We contacted more than 1,800 US taxpayer-funded organizations listed in data as…

2021-04-06 16:50:23 @ZeerakW @yoavgo lol or/and travel grants for everyone!

2021-04-06 16:38:09 @yoavgo

2021-04-05 21:40:42 RT @DataSciBae: If you read this thread and learned something, check out the full versions of each paper. I rounded them up here: https://t…

2021-04-01 12:58:30 @DavidVidalJD @MarkSendak @m_c_elish Interesting - reminds me of this attempt at Google to make Model cards "easier": https://t.co/2UyA4sUNiZ

2021-04-01 12:48:44 @DavidVidalJD @MarkSendak @m_c_elish Thank you!

2021-04-01 12:24:16 @DavidVidalJD @MarkSendak @m_c_elish Nice! Though the link in the tweet on regulatory science tools gives an error :/

2021-03-31 17:43:21 @gofango

2021-03-31 17:42:52 RT @CGAPeterson: Extremely well made Vox video just released featuring many of the big figures in AI Ethics. From Twitter cropping bots to…

2021-03-31 17:42:39 RT @voxdotcom: Ruha Benjamin (@ruha9), author of Race After Technology, explains why AI often fails people of color: "It’s a systemic iss…

2021-03-31 12:48:14 @agstrait @SandraWachter5 lol I'm scared to read this

2021-03-31 12:46:00 RT @black_in_ai: We are excited to announce the launch of the BlackAIR Summer Research Grant Program providing support for AI Research pro…

2021-03-31 12:41:53 @agstrait @SandraWachter5 wow - what's the context for this report??

2021-03-30 20:27:32 RT @Abebab: Wow! This is huge!!! Well done, @Foxglovelegal Also, reminder that @WFP still works with Palantir, where the fate of vulner…

2021-03-30 14:46:05 RT @shalinikantayya: I'm so incredibly thrilled to announce that @CodedBias will be available to stream globally on @netflix April 5th! It’…

2021-03-30 12:51:27 RT @hypervisible: Such essential journalism, again pointing to the fact that institutions often deploy computational tools against students…

2021-03-30 12:38:28 RT @Aaroth: I wrote a blog post two+ years ago last time folks were discussing bias in data vs bias in algorithms. https://t.co/LQlm9LXCUh…

2021-03-30 07:38:13 @BrianSJ3 Yeah, for sure, I'll keep this in mind for next time! Thanks for bringing this up.

2021-03-30 07:30:10 @nancy_iskander Human bias would be independent of the algo, that's not the case here - the algo makes bias worse, just via interactions rather than model output (https://t.co/6LTeXkadTV). To some, "algorithmic bias" = "algorithmically enabled discrimination", necessitating a systems-level view.
2021-03-30 07:19:43 @BrianSJ3 Yeah, I was wary of using that term when I wrote this - in this situation, I am talking about the same thing, but in a context when a group trusts the algorithm *more* (ie. demonstrates more "automation bias") in situations when making a call in favor of their natural prejudice.
2021-03-30 07:08:09 RT @black_in_ai: Congratulations to @red_abebe , @wsisaac , @shakir_za and @png_marie for contributing to this new book exploring work, de…
2021-03-28 17:20:11 @BetaBlueIS Oh, sorry if this point was unclear. Of course we should intervene! My point was that we shouldn't frame such interventions as "fixes", because it gives people the excuse they're looking for to abdicate responsibility
2021-03-28 17:01:22 @boazbaraktcs @kareem_carr @JacobBloom31 @KLdivergence @mrtz @AngeleChristin @mawnikr @zephoria @HodaHeidari @mkearnsupenn @Aaroth There's also the fact that regardless of the "bias" definition, certain assertions that feel natural in idealized scenarios (ie "fixes", "obvious gender &
2021-03-28 16:57:22 @boazbaraktcs @kareem_carr @JacobBloom31 @KLdivergence @mrtz @AngeleChristin @mawnikr @zephoria @HodaHeidari @mkearnsupenn @Aaroth I agree with a lot of what you say here - from my perspective, Kareem's definition of bias is only appropriate in a certain narrow context (ie data representation bias?) but an expanded definition is required to properly formalize these issues for other situations, and contexts.
2021-03-28 16:42:23 RT @harini824: read @rajiinio's thread! also: the updated version of this paper/fig includes "deployment bias," or harm arising from mistr…
2021-03-28 04:48:01 @AngeleChristin which we discovered is from @harini824's paper here: https://t.co/LTm0qlwhNj
2021-03-28 04:38:34 @AngeleChristin I pulled it from this blog! https://t.co/vLdvffmH8q
2021-03-28 04:37:43 RT @dmshanmugam: one of @harini824's many beautiful figures! https://t.co/yRBHtlvrbn
2021-03-28 03:24:52 @tdietterich Oh, thank you
2021-03-28 01:15:24 @athundt No worries! Someone else asked and we found it here: https://t.co/LTm0qlwhNj
2021-03-28 00:28:30 @JFPuget Well, not everyone can be correct about everything. These misconceptions aren't at all unique to Kareem - many statisticians make similar assertions when first learning about the algorithmic fairness space, and it even took the community quite a while to get to this point.
2021-03-28 00:14:08 @kareem_carr I hope this didn't feel like too pointed a call-out, that's not my intention (I really like your memes!) but wanted to flag that the framing in the original thread could easily lead to this confusion, even if that wasn't the intent.
2021-03-28 00:12:42 @kareem_carr Yeah, I mention in my thread that we're interpreting that term "algorithmic bias" differently - however, my point at the end is that reducing the conversation of algorithmic discrimination to statistical bias gives people the excuses they're looking for to escape responsibility.
2021-03-28 00:05:55 @timnitGebru lol there's a whole syllabus of content one could develop on "What Twitter keeps forgetting about algorithmic bias"
2021-03-28 00:00:32 @timnitGebru lol I was thinking of you when I wrote this. This is for reference for us to use later, when the conversation inevitably comes up again.
2021-03-27 23:55:38 @ryanbsteed Not sure if this is the original, but I grabbed it from Harini Suresh's blog post on "The Problem with Biased Data" https://t.co/WdPwPK4w0c
2021-03-27 23:46:02 Sadly, people use these misconceptions to escape responsibility (ie. "I fixed the data bias, there's nothing I can do.") so it's important to be careful. Algorithmic discrimination requires socio-technical systems level thinking - that should affect how we think about all of this.
2021-03-27 23:46:01 One culprit for these common misconceptions is the use of that term "bias". Some hear "algorithmic bias" and map this to "statistical bias" (as in the simple notion of bias vs. variance). Others hear it and think of algorithmically enabled discrimination and harm. Both are right.
2021-03-27 23:46:00 There's also a reality of various tradeoffs (ie. diversifying data may require privacy violations, measuring fairness could increase liability, etc.) &
2021-03-27 23:45:59 3. Race & Here's a recent @FAccTConference paper about this: https://t.co/yQ0Ju9A8L8 https://t.co/InS5FwP8rs
2021-03-27 23:45:58 2. Much of the de-biasing work makes it clear that algorithmic design choices can lead to *more fair* outcomes (w/ a "fixed" dataset) so it shouldn't be surprising that algorithmic design can also lead to *less fair* outcomes. @sarahookr explains it here: https://t.co/Wc9RUJzouJ
2021-03-27 23:45:57 For example, "automation bias" occurs when just the introduction of an algorithm results in increasing the bias of human discretion (ie. model predictions being perceived differently in one context vs. another, leading to biased outcomes for minorities). https://t.co/kL8FWKbojT https://t.co/fEPLfHImr7
2021-03-26 18:42:24 RT @emilymbender: This looks excellent -- and I'm particularly impressed with this innovation, meant to get people thinking about these que…
2021-03-26 17:20:07 RT @mmitchell_ai: "...authors should be rewarded, rather than punished, for being up front about the limitations and potential negative soc…
2021-03-26 16:22:38 @jackclarkSF @geomblog yup, and better edits of stunt double work! Alarmingly thin line between "this is disinformation" and "this is just part of the art"
2021-03-26 16:08:20 @geomblog As far as I know, main use cases are data augmentation (ie. add generated images to supplement a too-small dataset), marketing copy (ie. generate images with fake models, rather than real ones) and animation (ie. GANs can generate in-between frames to increase resolution, etc.)
2021-03-26 16:04:08 RT @NeurIPSConf: Introducing the NeurIPS 2021 Paper Checklist! https://t.co/SjMkX6JMVR
2021-03-26 13:43:57 RT @UMDscience: TODAY: Join us at 4pm ET for our @CodedBias panel discussion. Panelists: @rajiinio who's featured in the film, @drturnerlee…
2021-03-25 22:34:30 RT @timnitGebru: "A Google spokesperson said that, over the past 15 years, the company has furnished over 6,500 academic and research grant…
2021-03-25 02:16:54 I'm really proud of the author for articulating her experience so beautifully - can't even imagine what it took to be so open. Please keep in mind our Asian colleagues still processing & h/t @SanhEstPasMoi for posting!
2021-03-25 02:16:53 Love this essay by Renee Chang, responding vulnerably to the shooting in Georgia. I've been there before, it hurts so badly. Attacks on our communities are attacks on us as individuals - the fear, anger & https://t.co/kv7lb7khWp https://t.co/Fyc4vTO5XK
2021-03-24 19:04:54 RT @Jane24477: Facebook and Amazon are now corporate America's two biggest lobbying spenders. my new report @Public_Citizen finds that Bi…
2021-03-24 16:14:27 @marylgray @morganklauss @amironesei Thank you, Mary!
2021-03-24 13:48:42 I was so shocked to read this. Please tell me @ozm will continue in some other form off the platform, we need that kind of reporting! (+ I think @hackernoon successfully emigrated off a couple years ago?) https://t.co/FnIKAzKvJH
2021-03-24 08:20:46 @math_rachel @morganklauss @amironesei Thanks for reading, Rachel!
2021-03-24 01:19:50 @ZeerakW @Abebab
2021-03-24 01:12:21 @josephdviviano @sh_reya Don't mean to nag you, it's just that I see that stereotype a lot (ie. High GPA people are uncreative), and think it can become just as ridiculous as those saying a low GPA somehow means incompetence. It's a hack metric, it doesn't mean much of anything. People should ignore it.
2021-03-24 01:08:17 @josephdviviano @sh_reya Sure but some well rounded, disadvantaged, busy people also get high GPAs. The only thing a high GPA means is that a person was taught to care about that metric &
2021-03-24 00:46:43 @ZeerakW Let's start a GoFundMe so we can both attend @Abebab's next talk
2021-03-23 23:23:43 @josephdviviano @sh_reya Hm, I don't know if this is fair. It can take grit &
2021-03-23 23:04:52 RT @dustinvtran: I'm so appreciative that ML is at a state where open-source code, freely available conference videos/proceedings, and now…
2021-03-23 23:02:31 @_KarenHao @Abebab
2021-03-23 23:02:11 I'm very proud to have a famous friend like @Abebab https://t.co/j48qKK19As
2021-03-23 19:13:32 RT @HughLangley: Last week, @luke_stark turned down a $60k Google research scholar award because of the company's treatment of @timnitGebru…
2021-03-23 12:46:39 @schock
2021-03-22 22:51:25 RT @jovialjoy: TONIGHT on @PBS As we celebrate #WomensHistoryMonth, I am so honored to share the screen with these phenomenal women @timn…
2021-03-22 01:35:44 RT @JayAlammar: What are inductive biases? Can models make different predictions when trained on the same data? @RTomMcCoy distills the co…
2021-03-19 18:20:46 RT @mmitchell_ai: @rachelmetz Yeah, it means so much. @timnitGebru and I both made choices in line with AI ethics, and so lost our income.…
2021-03-19 18:18:15 @luke_stark wow - this is unbelievable leadership. Thanks for making what must have been a very difficult decision in order to advocate for this.
2021-03-19 18:17:35 RT @luke_stark: Last week I found out I'd been selected for a Google Research Scholar award. Today I declined it. https://t.co/7LDmGKlwy8
2021-03-19 13:41:41 Someone please help me with this too Gmail is a nightmare to work through at the moment. https://t.co/WBDhpYmScE
2021-03-19 13:40:59 RT @tiffani: Does anybody know how to scale back Gmail's spam filters? It is sending emails from people I've been corresponding with for…
2021-03-19 13:40:38 @KLdivergence @mer__edith @tiffani I literally got mad about this happening to me right now - a link for recording a talk ended up in spam
2021-03-18 16:52:22 RT @Abebab: Clearview AI is literally phrenology rehashed in a digital form https://t.co/JQDTNnrIJA
2021-03-18 12:15:21 RT @schock: Good morning! Please circulate this awesome opportunity! https://t.co/TeZooBkEgc
2021-03-18 09:16:35 @trelsco Hm I don't think their users are their customers. They do provide value to their customers (ie. Advertisers)
2021-03-18 09:03:46 RT @yisongyue: Growing up in Chicagoland in the 80's & I've been called chink and slanty-eye…
2021-03-17 18:40:47 Oh, this is happening tonight! https://t.co/NeHIvOmSME
2021-03-17 17:25:02 @psettel @RLerallut I do agree with your earlier point though, that Innovation and Regulation can have this two-way arrow influence, and mutually beneficial interaction. I also really find that framing of Awareness, Alternatives and Regulation to be helpful, so thanks for introducing that framing!
2021-03-17 17:23:19 @psettel @RLerallut Ahaha I'll peacefully disagree on that one. I've lost faith in meaningful self regulation some time ago, but I do think companies can be pressured into acting well without regulation through strategic litigation &
2021-03-17 17:09:51 @psettel @RLerallut And I think actual regulation will take time, but some level of legal liability should apply to the issues we care about, and at minimum advocates should be organizing with the understanding that most companies will need to have constraints enforced on them in order to improve.
2021-03-17 17:08:25 @psettel @RLerallut I agree awareness is essential first but I don't think alternatives need to come before regulation. There's already examples in privacy/security where policy restrictions (based on public needs, to address problems w/o an available solution) actually spurred innovation.
2021-03-17 14:57:34 @LeonYin
2021-03-17 14:57:20 RT @LeonYin: Today I learned national hate crime stats are undercounted and unreliable. https://t.co/e44gSbo9fJ
2021-03-17 12:57:43 RT @DrIbram: Locking arms with Asian Americans facing this lethal wave of anti-Asian terror. Their struggle is my struggle. Our struggle is…
2021-03-17 12:56:00 I found this to be devastatingly accurate https://t.co/IHLa3DghDS
2021-03-17 12:54:55 This is a really good point. And when the understanding of the technology is limited, policymakers will often rely on corporations to break down the functional details for them, making them even more vulnerable to the corporate PR narrative. https://t.co/VToxkJAkYa
2021-03-17 11:55:50 "Only the unloved hate" I remind myself of that whenever things like this happen. Sending love &
2021-03-17 04:08:44 @EricMeyersonSF And those consequences could be as simple as "lies don't get algorithmically amplified". Right now, there's nothing - there's no regulation even requiring any scrutiny and def nothing penalizing those that create &
2021-03-17 04:04:18 @EricMeyersonSF Yes, this is a really good point. Ironically, Ailes lobbied against legislation at the time (ironically called "The Fairness Doctrine") in order to escalate Fox to its current state. I'm partial to the take that MSM or not, there should be consequences for putting out misinfo.
2021-03-17 03:58:02 @go_kerem_go For sure!
2021-03-17 03:25:00 RT @_KarenHao: I write words for a living and yet cannot find the right ones to describe the mix of grief, fear, and loss of seeing the new…
2021-03-17 03:24:55 @_KarenHao My heart breaks for you - hang in there.
2021-03-17 03:01:35 @go_kerem_go I think there's a lot of reasons fairness is a palatable problem to work on for industry researchers but it doesn't make sense for AI policy practitioners to obsess about bias. I think policymakers can certainly do better to think about regulations needed for other harms as well.
2021-03-17 02:41:18 @geomblog I understand &
2021-03-17 02:37:06 @Miles_Brundage This is a fact. Never felt more strongly the need to expand our vocabulary on AI harms in pretty much every context they're discussed.
2021-03-17 02:16:04 And that leads me to the thing I don't understand: that panic Zuckerberg feels for "conservative bias" and upcoming algorithmic discrimination regulation - why doesn't he feel that for misinfo? If we care about that problem, why don't we have regulation to address those concerns?
2021-03-17 02:16:03 This happens with other companies too. Uber has its large Fairness Working group, but funds campaigns to strip drivers of their worker rights. Amazon funds an NSF fairness grant but refuses climate friendly innovation for data centers &
2021-03-16 23:49:54 RT @PBS: MIT Media Lab researcher @jovialjoy makes a startling discovery: Machine learning algorithms are only as unbiased as the humans an…
2021-03-16 23:06:51 RT @timnitGebru: I wrote exactly the same thing. The way they tried to discredit me was exactly how they tried to discredit you and Joy &
2021-03-16 22:52:32 @camillefrancois @FastCompany @Graphika_NYC @schock @jovialjoy Grateful to know you rockstars
2021-03-16 21:38:59 @camillefrancois @FastCompany @Graphika_NYC Amazing! Congrats
2021-03-16 21:29:44 Thanks to @CadeMetz &
2021-03-16 21:25:22 I finally read this NYT article, and was shocked to discover I'm heavily featured. But it makes sense? What's happening now with Google isn't divorced from what happened with Amazon, and whatever will happen to whoever needs to be challenged in the future. Same story, ongoing. https://t.co/66x0nasosu
2021-03-16 20:57:00 @nsaphra @chrisalbon I appreciate the alliteration there
2021-03-16 20:06:38 @geomblog @ruchowdh @Combsthepoet oh, interesting! Well Beethoven was certainly much more prolific aha
2021-03-16 19:58:05 @geomblog @ruchowdh @Combsthepoet Why do you call it Beethoven vs Mozart?
2021-03-16 19:49:28 @geomblog @chrisalbon hahaha this is the only logical response...
2021-03-16 19:23:37 RT @alexhanna: I am glad that @Abebab and @vinayprabhu have pointed out this pattern of citation erasure from ImageNet authors. There's a w…
2021-03-16 19:11:46 @chrisalbon In a clumsy attempt at anti-hype, I once referred to an ML model as a "data-defined model"
2021-03-16 18:26:35 How anyone could think that they could mess with @Abebab and get away with it is beyond me... https://t.co/DfOHHKBT2m
2021-03-16 17:22:34 RT @mozillafestival: Live tomorrow from #MozFest: a panel discussion of acclaimed documentary #CodedBias With @rajiinio, @CummingsRenee,…
2021-03-16 17:16:31 RT @tsimonite: Google is running an invite only robotics workshop this week. Two academics withdrew in protest of the company's treatment o…
2021-03-15 19:26:57 @mmitchell_ai @DiverseInAI @timnitGebru @AJLUnited @jovialjoy @ZoubinGhahrama1 Nonsense! More like "#1" to me
2021-03-15 18:53:34 So far, we've been learning so much about how practices from the security space can inform our approaches to identifying, prioritizing and documenting algorithmic harms. Beyond grateful to be working with @schock, @camillefrancois, @jovialjoy &
2021-03-15 18:48:42 Me & "If we had a harms discovery process that, like in security, was ...robust, structured & https://t.co/ADbdDgca0v
2021-03-15 18:37:30 RT @daphneleprince: I spoke to @rajiinio about the fascinating work she is doing with @AJLUnited to apply bug bounty models to the detectio…
2021-03-15 18:36:51 RT @mathbabedotorg: Amazing article today with Timnit, Meg, and Deb prominently featured! https://t.co/V2J9WzOYnx @rajiinio
2021-03-15 18:36:37 RT @mozilla: #MozillaFellow @rajiinio is researching ways to apply the models that underpin bug bounty programs to detect #AIBias. More o…
2021-03-15 18:36:22 RT @schock: Excited to see the hard work of @rajiinio featured in this @ZDNet article! Shout outs to the @AJLUnited CRASH project (Communit…
2021-03-15 12:50:23 @jachiam0 Spider man?
2021-03-15 12:38:27 RT @JordanBHarrod: First issue of my newsletter is live on Substack! Featuring: a quick intro to me and some life updates, interesting v…
2021-03-15 12:38:02 @JordanBHarrod Love this! I'm about to start one of these myself (eventually...), and yours is the prettiest/best formatted I've seen so far
2021-03-12 16:25:09 @ghoshd Aha, no worries at all - here is info on Signal (https://t.co/x1J4Ie43WF). It's an alternative messenger app company that really values privacy!
2021-03-12 16:21:51 @ghoshd Also, before anyone says anything - yes, I'm aware Whatsapp is part of Facebook, but they've had to fight hard internally to maintain encryption and a commitment to their values on privacy. Facebook has been ready to compromise on this from the beginning.
2021-03-12 16:19:53 @ghoshd Mozilla? Whatsapp? Signal? Subscription-based platforms, that opt for a different business model? I'd even say Twitter, on occasion. My point is that there are some companies that don't make the extent of the compromise that Facebook has made to keep up disinfo &
2021-03-12 16:16:03 I tend to disagree with this. Yes, the business model is flawed. Optimizing for advertising clicks &
2021-03-12 13:06:16 @widdr @_KarenHao @schrep @CaseyNewton They were only directed towards &
2021-03-12 13:01:43 @widdr @_KarenHao @schrep @CaseyNewton Fair. Facebook, as a company, like Google as a company, doesn't care about anything at all - it's a capitalist institution optimized for profit. That being said, there are individuals at Facebook that do care &
2021-03-12 01:36:19 @_KarenHao @schrep @CaseyNewton The article does a great job clarifying that although progress on these issues was meaningful, it missed the point in addressing the primary way Facebook is understood to cause harm (ie. misinfo). Harm reduction is not just about minimizing liability, and they misunderstood this.
2021-03-12 01:32:54 @_KarenHao @schrep @CaseyNewton I think it's a challenging framing for them to process because Facebook does indeed have dire fairness/bias issues (ie. they were sued by HUD/ACLU for biased ad delivery for housing, jobs). Facebook focused on those issues, as situations most directly tied to ongoing legal threat
2021-03-12 01:08:27 RT @Aaron_Horowitz: I've been feeling down about the potential of algorithmic audits as of late, but then this happened! It was more of a t…
2021-03-11 21:04:54 @mmitchell_ai @rachelmetz @timnitGebru I want to reassure you it was important work, and that will only become more clear over time. There's no way you could have known what was coming. Rooting for you all!
2021-03-11 21:01:36 RT @frecklesforgood: Fascinating discussion between @rajiinio and @camillefrancois asking what we can learn from #CyberSecurity to identify…
2021-03-11 20:23:06 RT @mozillafestival: Streaming in five minutes on #MozFest live: Hunting Biased Algorithms: A #DialoguesandDebates conversation with @r…
2021-03-11 19:28:18 RT @rachelmetz: new from me: an in-depth look at months of chaos in google's ethical ai group, and how it has reverberated throughout the A…
2021-03-11 17:17:02 RT @djleufer: Excellent thread from @AINowInstitute on the fantastic work on AI that's been done from outside the Euro-American context Fe…
2021-03-11 13:55:36 RT @JuliaAngwin: Facebook is a mass personalization tool: everyone's feed is different. Today we launch "Split Screen" — a tool that lets…
2021-03-11 00:06:40 RT @FAccTConference: Thanks organizers, volunteers, attendees, authors, participants &
2021-03-11 00:00:14 @Abebab every time Don't think we've ever actually successfully met for 15 mins!
2021-03-10 23:58:43 RT @carolinesinders: @rajiinio I dig citizen browser but also the work of @_vecna and @WhoTargetsMe is pretty grand in this area too!
2021-03-10 23:58:19 @carolinesinders @_vecna @WhoTargetsMe Oh nice - I wasn't aware, thanks for sharing!
2021-03-10 21:55:26 T'was much fun to be publicity co-chair with @Abebab for @FAccTConference this year! Thanks @m_c_elish, @wsisaac & Also, yes, sorry for the gifs, but this is now part of the official brand guide.
2021-03-10 20:34:23 RT @timnitGebru: It is surreal to watch the talk for our paper by @mcmillan_majora and @emilymbender at @FAccTConference. Never imagined wh…
2021-03-10 20:03:36 RT @FAccTConference: In Stream 2: 1) Censorship of Online Encyclopedias: Implications for NLP Models by Eddie Yang &
2021-03-10 19:40:13 RT @FAccTConference: "What does the future of this research community look like both epistemically &
2021-03-10 19:39:39 RT @AINowInstitute: Great article by on the vital (&
2021-03-10 19:11:35 RT @FAccTConference: STARTING SOON IS THE #facct21 TOWN HALL. (Live on Stream 1) This event will provide attendees with the opportunity…
2021-03-10 19:02:41 @AutoArtMachine @JuliaAngwin lol replace "police" with "tech company" and that's pretty much our situation here
2021-03-10 19:00:57 RT @__lucab: "All data is terrible" @JuliaAngwin Why is this so true @FAccTConference #FAccTConference
2021-03-10 18:54:44 @AutoArtMachine @JuliaAngwin Yes, and the example she gives is very telling of this - when WSJ had to collect the database on the racial breakdown of police killings since law enforcement was not incentivized to create such a dataset.
2021-03-10 18:50:45 Love how she doesn't sugarcoat any of the work involved in this. "We don't spend a lot of time 'asking for access' - corporate provided data is just like leaked data, it's politically motivated. We will put in requests but can be fairly adversarial in our data collection approach"
2021-03-10 18:14:00 "Data can help sway a debate" - mentions how their work is public service, to contribute "new facts &
2021-03-10 18:12:54 Wow @JuliaAngwin at #facct21 mentions "treating engineers as journalists, with a different skillset", mentions the goal of wanting to get to a 1:1 ratio at The Markup, where multiple engineers can work with multiple reporters to collaborate on great stories &
2021-03-10 18:04:06 RT @FAccTConference: Next, our final keynote speaker is Julia Angwin @JuliaAngwin (The Markup) speaking about "Algorithms, Accountability,…
2021-03-10 18:02:06 RT @yuvalmarton: Great example of how to do #DataScience and #AI right There's even an #NLProc candy here... New POS tagger for certain T…
2021-03-10 17:54:13 RT @SeeTedTalk: #FAcct2021 panel moderator @vinodkpg making the important observation that teaching #AIEthics via a series of invited speak…
2021-03-10 16:09:20 @WellsLucasSanto oh wow, thank you so much for live tweeting I'm always energized by your natural enthusiasm for this work!
2021-03-10 16:08:06 By the way, "Artificial Intelligence and Inclusion: Formerly Gang-Involved Youth as Domain Experts for Analyzing Unstructured Twitter Data" is a great paper from this lab that ML people need to read. Exemplifies participatory ML &
2021-03-10 15:53:58 I was referring to this: "What Do We Teach When We Teach Tech Ethics?: A Syllabi Analysis" https://t.co/gNNLQD1n5k + There's this older paper by Deborah Johnson - "Who should teach computer ethics and computers & that gets right to the point. https://t.co/5il1uPh7pu
2021-03-10 14:59:37 RT @WellsLucasSanto: .@rajiinio: "The expansion of the AI field is required to solve the AI ethics problem, but this exclusionary behavior n…
2021-03-10 14:58:19 RT @WellsLucasSanto: Through their survey of courses, they found that participants of the CS discipline tried to isolate themselves from other d…
2021-03-10 14:56:22 This work is lovely. "Prior work has looked into the perception of algorithmic decision-making from the *user's* point of view. In this work, we investigate how students in fields adjacent to algorithm development perceive algorithmic decision-making." https://t.co/nHbS2go1Lt https://t.co/DxEpW7Ge08
2021-03-10 14:54:49 RT @pinbarlas: "I agree with the decision but they didn't deserve this" - A paper in #facct2021 on how future developers (current students)…
2021-03-10 14:52:57 RT @pinbarlas: We're presenting this paper at #facct21 #facct2021 in about 45 mins on Track Two! Join us! https://t.co/ysOKxmTVsa
2021-03-10 14:27:58 RT @WellsLucasSanto: .@rajiinio starts the talk off by mentioning that there's an AI ethics crisis happening &
2021-03-10 13:22:17 RT @mdekstrand: Really happy to see attention to measurement at FAccT. Let's do a lot more of this please. I personally think that some o…
2021-03-10 12:28:13 RT @Andrew__Brown__: We recently held our "Thinking Through and Writing about Research Ethics Beyond 'Broader Impact'" tutorial at @FAccT…
2021-03-10 12:24:42 RT @j2bryson: @rajiinio @bgeurkink Exactly. Similarly, trust is the wrong metaphor for AI https://t.co/N0wiePRMR0 see also the transpare…
2021-03-10 12:23:54 RT @LauraC_rter: "Educate students on frameworks of interventions based on existing problems, not anchored to the existing skills of those…
2021-03-10 03:21:09 @kareem_carr how do you differentiate between inference and prediction?
2021-03-10 03:20:00 RT @AINowInstitute: A panel deep dive into #landlordtech by @erin_mc_elroy and their work w/@antievictionmap: "Currently, there are over 2,… 2021-03-10 03:19:07 RT @d_malinsky: @rajiinio @ziebrah @david_madras Hi @rajinio pardon the shameless self-promotion but I agree with you! (at least abt the se… 2021-03-10 02:27:03 @d_malinsky @ziebrah @david_madras @rajinio Nice! Thanks for sharing 2021-03-10 01:33:00 RT @berkustun: @rajiinio @geomblog @Aaron_Horowitz @ziebrah @david_madras Every prediction can be explained. This includes: - Predictions… 2021-03-10 01:27:44 @aselbst @ziebrah @david_madras @s010n @manish_raghavan Though I'm still kind of thinking that explainability people don't yet focus on causal inference the way fairness people do and that part doesn't make sense to me lol This is kind of a point your own work makes though (thinking of your last year's Facct paper on explainability) 2021-03-10 01:26:26 @aselbst @ziebrah @david_madras @s010n @manish_raghavan For sure - it makes sense for them not to talk. Explainability and fairness are framed as very different goals. What's interesting is that both communities converged on the need to re-define their problem wrt causal inference. It implies some less obvious connection between them. 2021-03-10 01:20:05 @andrewthesmart @ziebrah @aselbst @david_madras lol moral of the story is that we all need to read your paper 2021-03-10 01:18:05 @aselbst @ziebrah @david_madras @s010n @manish_raghavan Yeah, this is kind of wild. It's possible there's some wheel re-invention happening here? There's clearly some cultural divide (ie. 
non-overlapping communities) responsible for that + now I'm scratching my head about which of how this confusion/divide plays out in XAI policy 2021-03-10 01:11:31 @geomblog @Aaron_Horowitz @ziebrah @david_madras @berkustun its the good fight 2021-03-10 01:10:07 @ziebrah @aselbst @david_madras ahahhaha oh lord 2021-03-10 01:09:46 @aselbst @ziebrah @david_madras personally more familiar with the latter and not so much with the former though I guess I can imagine. Either way, what I meant earlier is that although explainability clearly has some causal expectation, the causal inference crowd I'm exposed to often frames goals wrt fairness 2021-03-10 01:07:26 @aselbst @ziebrah @david_madras oh interesting - what's the interplay between "counterfactual explanation" and "counterfactual fairness"? lol 2021-03-10 01:01:20 @mdekstrand @aselbst @ghadfield @BrownSarahM @ziebrah @david_madras @s010n Unless you're confirming that none of the possible explanations are awful? The idea of multiple acceptable causes makes me kind of uncomfortable, almost like something is underspecified and now there's room for a cop-out. 2021-03-10 01:00:07 @mdekstrand @aselbst @ghadfield @BrownSarahM @ziebrah @david_madras @s010n That's the thing for me - causal explanations and "causal inference for fair outcomes" are super murky, and look almost like the same problem when you think about it. Also, the idea of multiple explanations/possible causes is interesting, but I worry it's not necessarily useful 2021-03-10 00:57:23 @mdekstrand @aselbst @ghadfield @BrownSarahM @ziebrah @david_madras @s010n Thanks for sharing - I'll check it out! 2021-03-10 00:56:37 Someone please fight Aaron on this. 
(I think he's right tho) https://t.co/rhdC1GoJXn 2021-03-10 00:55:51 @Aaron_Horowitz @ziebrah @david_madras I love how meaningful Facct participation necessitates starting beef over the policy interpretations of technical terms lool 2021-03-10 00:54:53 @Aaron_Horowitz @ziebrah @david_madras 2021-03-10 00:53:45 @aselbst @mdekstrand @ghadfield @BrownSarahM @ziebrah @david_madras @s010n Next year's tutorial lol 2021-03-09 18:09:12 @salome_viljoen_ @RDBinns @lilianedwards @mikarv You should email the reporter? That's a frustrating oversight. Also, they got it so wrong aha 2021-03-09 18:05:47 @mmitchell_ai @KLdivergence samesamesame 2021-03-09 18:05:20 RT @FAccTConference: STARTING NOW Our keynote speaker for the day will be Katrina Ligett (Hewbrew University) speaking "In Praise of (Flaw… 2021-03-09 14:36:08 RT @AJLUnited: We are honored to be among so many incredible companies that made @FastCompany's annual list of the World’s Most Innovative… 2021-03-09 14:35:50 RT @TessaDarbyshire: Thank you for an excellent Q& 2021-03-09 14:35:10 RT @_pmkr: The #facct2021 talk I recorded on our work, "An Action-Oriented AI Policy Toolkit for Technology Audits by Community Advocates… 2021-03-09 11:59:38 RT @FAccTConference: Paper sessions starting now! We have 2 Streams in parallel, each with 4 papers. In Steam One: 1) Narratives and Count… 2021-03-09 01:38:57 Much thanks @AngeleChristin for facilitating such a great discussion - I learnt a lot! 2021-03-09 01:37:28 Also liked @autopoietic's closing words: “The creation of knowledge and the generalizability of it matters. As lived experiences, value systems and knowledge systems from diverse sources come together, hopefully we learn from each other and we see that in the next few years." 2021-03-09 01:36:36 “Find algorithms that benefit people on their own terms.” - @undersequoias 2021-03-09 01:25:49 RT @vmsoutherland: Some work to share. 
My soon-to-be-published article “The Intersection of Race and Algorithmic Tools in the Criminal Lega… 2021-03-09 01:13:22 Wow - this is an amazing discussion. So many practical examples being shared piecing together what accountability can mean in a variety of contexts! https://t.co/huBMwWp8XY 2021-03-09 00:43:03 RT @BrownSarahM: Our FAQ in the #facct21 conference guide is being built as we go. As attendee questions come in, we're taking care to arch… 2021-03-08 22:07:39 @KLdivergence Agreed - not testing for this is completely inexcusable at this point. I think I said "oops" only because I'm kind of embarrassed for those involved lol 2021-03-08 22:00:58 Ops - not again! https://t.co/qfPCmEV9AT 2021-03-08 20:53:24 RT @timnitGebru: My sisters. https://t.co/g4AS5uOY3D 2021-03-08 20:47:34 @sedyst @alixtrot @MarzyehGhassemi Yes, ditto - loved your facilitation! 2021-03-08 19:55:10 @timnitGebru 2021-03-08 19:53:27 Love these women https://t.co/5NIJkgtneH 2021-03-08 19:52:59 RT @AJLUnited: Happy #InternationalWomensDay! In celebration, meet the Black women who continue to inspire & 2021-03-08 18:36:41 RT @baxterkb: "We like to have conversations around accountability & 2021-03-08 18:21:06 RT @wsisaac: So excited to have @alixtrot moderating the panel today!! She is IMO the absolute *best* at facilitating and guiding complex… 2021-03-08 18:19:18 RT @KLdivergence: On deck to speak at #FAccT2021 is @YESHICAN. I've heard her speak before, and she is fantastic. Tune in now! 2021-03-08 18:03:30 RT @baxterkb: "Trying to replace public health with a machine learning model, it doesn't make any sense." - @rajiinio 2021-03-08 18:02:57 RT @baxterkb: "It becomes incredibly difficult to remove yourself from these systems because you need it to be healthy or to live. There a… 2021-03-08 18:01:13 RT @TessaDarbyshire: "In the FAccT space, we like to talk about fairness and accountability... 
When it comes to the impact on real lives, w… 2021-03-08 18:00:37 RT @gleemie: How do we get regulation to be independent of companies so there can be actual accountability? Companies want to have transpar… 2021-03-08 16:52:23 RT @kenandoesmath: "Data isn't just numbers. It's people and their stories." Social justice will come from the combination of quantitative… 2021-03-08 16:08:12 @candacerossio Yeah. And in addition to lives lost, there's businesses lost, community and relationships lost. It's a lot. I'm increasingly worried we'll never really be given enough time to process, even afterwards. 2021-03-08 16:01:39 RT @m_c_elish: Today's #FAccT2021 plenary sessions focus on health, data, and in/equity Join @YESHICAN on @Data4BlackLives in discuss… 2021-03-08 15:38:42 It's crazy to me how little anything has stopped during the pandemic and how little time has been dedicated to mourning and reflection. I wonder at what point we're actually going to acknowledge how much we've lost, and how inhumane we've been, just in order to "keep going". 2021-03-08 15:32:44 I wrote "The Discomfort of Death Counts" (https://t.co/uxbfJlTWBU) when the US COVID toll had just hit 100,000 & Now, we're past 500,000 dead. I almost didn't notice the headline. 2021-03-08 12:15:15 RT @FAccTConference: Paper sessions begin today!!! We also have an amazing lineup: a keynote on "Health, Technology and Race" from @YE… 2021-03-08 00:59:45 RT @le_roux_nicolas: @timnitGebru Replies I got when raising google issues: - "You see everything with a negative lens." - "Have you tried… 2021-03-07 23:05:37 By the way @kharijohnson this is the paper where we cite you (in our intro)! 2021-03-07 22:59:39 Wow. Just spotted this. I spend way too much time on Wikipedia so this is truly the highest honor. Thank you! https://t.co/vD902Mhvb7 2021-03-07 22:58:08 RT @catherinehyeo: 2/? 
Working alongside @hessiejones @volhalitvinets for the @Women_AI_Ethics project, it was an honor to create the page…
2021-03-06 19:36:20 RT @IsmaelKherGar: Such a good read! This paragraph’s references look like GOLD. (h/t @yoyehudi) You Can't Sit With Us: Exclusionary Pedag…
2021-03-06 00:08:35 @SerenaOduro @VinhcentLe @alicexiang Thank you!
2021-03-05 23:26:00 @SerenaOduro @VinhcentLe I couldn't get into any sessions Is there a recording of this / link to your work?
2021-03-05 23:20:40 @hannawallach @krob @STurkle @joannagoode13 @morganklauss @amironesei lol yeah, we cite it in this paper and talked through that work a lot while drafting.
2021-03-05 22:45:31 RT @hima_lakkaraju: Here is the link to our @FAccTConference tutorial on "Explainable ML in the Wild: When Not to Trust Your Explanations"…
2021-03-05 22:43:51 RT @quarantiain: Yes! Transparency without scrutiny and accountability is us being given a front row seat to our own funeral https://t.co/7…
2021-03-05 22:41:59 RT @tsimonite: Sprinkling a few ethics classes into CS courses isn't enough to keep AI on the rails, say @rajiinio @morganklauss @amironese…
2021-03-05 20:18:00 @DataSciBae Yes, I totally agree. And I'm so glad you said something - this honestly doesn't get pointed out often enough!
2021-03-05 18:59:11 RT @m_c_elish: I am simply blown away by all the incredible (50+) #FAccT2021 volunteers. You all are phenomenal! Thank you for making thi…
2021-03-05 18:44:10 @ambaonadventure Yes! lol, you already know it: Disclosure &
2021-03-05 18:34:01 This is one of many reasons I like @hima_lakkaraju's work (I suggest those registered check out her @FAccTConference tutorial "Explainable ML in the Wild: When Not to Trust Your Explanations"). Explanations can't be trusted at face value.
Actual accountability requires much more
2021-03-05 18:25:52 These days I think of transparency & The value of transparency is in its ability to enable system scrutiny &
2021-03-05 18:08:22 @mahimapushkarna @FAccTConference I wish I was there too - unfortunately, the event was already at capacity. Great work though, I really love this project! Thank you so much for working on this
2021-03-05 18:04:11 RT @FAccTConference: NEXT we have another incredible CRAFT session: "Narratives and Counternarratives on Data Practices in the Global Sou…
2021-03-05 17:51:51 RT @verge: Timnit Gebru was fired from Google — then the harassers arrived https://t.co/lGX04xnRLT https://t.co/pwUWvApvUQ
2021-03-05 17:33:58 RT @_KarenHao: You know of labor strikes. But what about data strikes? @nickmvincent, @hanlinliii, @nic_tilly, @snchancellor &
2021-03-05 12:03:56 @histoftech Thank you!
2021-03-05 12:02:55 There's this part of the paper I really like, where @amironesei presents an interesting theory of the value hierarchies that naturally fall out of disciplinary characterizations and categories - inherently giving "hard" fields like CS more power than "soft" HSS disciplines. https://t.co/8nm8yFlaof
2021-03-05 11:53:32 RT @wsisaac: Very excited for CRAFT this year! Remember to check the hub for links to the respective sessions. If you are not attending CRA…
2021-03-05 11:27:06 RT @JulianPosada0: How do traditional disciplinary divisions encourage exclusionary pedagogy in AI Ethics?
Disciplines “don't value each ot…
2021-03-05 11:24:56 RT @rachelcoldicutt: This is https://t.co/I8Q9c7JHfF https://t.co/S6uleW5Iua
2021-03-05 04:34:37 @JulianPosada0 @morganklauss @amironesei @cgpgrey Aha neither did we, honestly, but "Humans Need Not Apply" was referenced so many times we had to mention it lol
2021-03-04 23:55:27 Context on why we went out of our comfort zone, as people coming from VERY different academic backgrounds, to collaborate and write this: https://t.co/DuiQflPI30
2021-03-04 23:42:02 RT @WellsLucasSanto: I glanced through the slides for the paper talk (deep diving on all the paper talk videos this weekend), and this is d…
2021-03-04 23:41:06 RT @sherrying: Yes! I was just thinking about some of these ideas on the problem with the dominance of computational approaches in AI and t…
2021-03-04 23:21:30 @WellsLucasSanto Ah, thank you, thank you - so glad you're finding this work interesting &
2021-03-04 23:05:18 Finally, it's out! It's here! In this paper, me, @morganklauss & Check it out: https://t.co/f2ByPGx5q1 https://t.co/WCD9fecmUO
2021-03-04 22:37:27 RT @BrownSarahM: As always, I *loved* @red_abebe 's thinking in her tutorial on human centered mechanism design. That was quite the readin…
2021-03-04 21:20:00 These tutorials were fantastic! Those that didn't have the opportunity during the day should seriously take a moment to check these out. Quite a few were like opinionated surveys of past work. It brought the insight of interpretation to the discussion - I enjoyed that so much!
https://t.co/bCT6ZQr6JF
2021-03-04 20:57:33 RT @geomblog: What @BrownSarahM is not saying (and should) is that she's been responsible for much of the very nice social environment that…
2021-03-04 15:13:50 RT @GretchenMarina: @shalmali_joshi_ on how explanations can be used to manipulate or miscalibrate trust, or to change fairness perceptions…
2021-03-04 15:11:06 @mdekstrand @emilymbender @FAccTConference We wanted it to be #facct21 but people started using #facct2021 organically lol
2021-03-04 14:13:13 @KLdivergence Thanks for your service So glad to have met you last year!
2021-03-04 14:11:47 @Abebab also lol is this what your photoshoot was for?
2021-03-04 14:00:49 RT @FAccTConference: Our next set of tutorials are going live now! We have 3 parallel sessions. In Stream One ->
2021-03-04 13:29:39 RT @zacharylipton: The 2021 @FAccTConference has begun and a full day of tutorials with amazing speakers is under way! Register and join us…
2021-03-03 23:07:17 RT @benzevgreen: Just had an incredible time kicking off this year's @FAccTConference with the Doctoral Consortium! Thank you to all of the…
2021-03-03 23:03:01 RT @_KarenHao: I love when AI talks are really just thinly veiled history &
2021-03-03 17:49:21 RT @FAccTConference: ACM FAccT has started with the Doctoral Colloquium today! We have Tutorials lined up tomorrow (March 4) and CRAFT s…
2021-03-03 14:49:10 RT @emilymbender: More excellent reporting by @kharijohnson on the continuing fallout of Google firing @timnitGebru and @mmitchell_ai http…
2021-03-02 21:46:30 RT @timnitGebru: https://t.co/uKj4ZHzNXa from @financialtimes "The US Congress has been considering an Algorithmic Accountability Act, whi…
2021-03-02 15:50:19 RT @ruchowdh: IDK what event organizer needs to hear this, but if you are asking someone to give a talk, can you PLEASE send out calendar i…
2021-03-02 15:11:04 RT @JuliaAngwin: Most risk scores in the criminal justice system do not use race as an input variable.
But major universities are using r…
2021-03-02 01:05:06 RT @BrownSarahM: If you're also excited about @FAccTConference, remember the conference site is already open if you're registered! You can…
2021-03-01 12:51:23 RT @fborgesius: Coooool! Keynote by @JuliaAngwin at @FAccTConference! https://t.co/Q5mdP3k36T
2021-03-01 01:22:19 RT @mmitchell_ai: I abruptly lost my job 9 days ago. Tomorrow I lose my healthcare. This is the "view from the top" for a successful female…
2021-02-28 00:46:18 RT @joonlee: With all of the Anti-Asian sentiment around the world over the last week, things feel like they’re starting to reach a boiling…
2021-02-27 20:46:37 I just want to say that I know a Black employee at Amazon and this is 100% true. We need to remember that in situations like this, real people get very hurt. They deserved better. https://t.co/rVh30Rk6aD via @voxdotcom
2021-02-27 20:18:53 @undersequoias @anotherday____ I've been finding a lot of value in the literature on recalls and how that connects to engineering responsibility. If a product fails to fit certain standards for functionality and due diligence, then it's not fit for the market. Unsure how well this applies to AI but I hope so.
2021-02-27 18:08:15 @ndiakopoulos Yeah, I agree that eng responsibility is part of broader ethical conduct. Strategically, though, I wonder if they should be approached separately - IMO ethics requires a different type of discussion &
2021-02-27 17:41:33 @ndiakopoulos @databoydg @undersequoias @bendotolsen I'm really open to changing my mind here though. I've been introduced to these concepts through engineering design / engineering responsibility, so have a pretty biased understanding.
2021-02-27 17:39:31 @ndiakopoulos @databoydg @undersequoias @bendotolsen Oh thank you for sharing!
+ hm I guess I was speaking more to the fact that the focus tends to be more on what the engineer has to do and the qualities of the artifact or its design &
2021-02-27 16:55:07 @ndiakopoulos I see engineering responsibility as motivation for framing eng design problems with specific values in mind & https://t.co/8znoZvT8SU
2021-02-27 16:52:34 RT @undersequoias: @rajiinio Anyway there’s also this https://t.co/Rpx73wxw2m
2021-02-27 16:47:51 @undersequoias oohh, thanks for sharing this!
2021-02-27 16:47:16 @undersequoias Yep - we both see the current situation the same way. I guess I disagree strategically - I think things will be even more of a mess if we make the tent bigger. I want AI/ML people to do "ethics" properly and to do "responsibility" properly. Right now, they do both poorly.
2021-02-27 16:42:44 @bendotolsen @databoydg @undersequoias @mira_lane @yolandagil Me &
2021-02-27 16:42:09 @bendotolsen @databoydg @undersequoias @mira_lane @yolandagil Aha, I love that! Yeah, I think AI is a strange space, where the need for responsibility is greater than even traditional engineering fields (because - data) but for some reason people are more careless than in any other field...
2021-02-27 16:37:10 @undersequoias Looking at Google, the notion of "ethics" in a corporation is a way more contentious thing. Also, practically, especially in AI, where "engineering best practices" is nebulously defined, there's no notion of "engineering responsibility" that resembles the original concept at all.
2021-02-27 16:34:05 @undersequoias Yep - and Google did have both a "Responsible Innovation" &
2021-02-27 16:24:47 @undersequoias There's certainly overlap in concerns, but accountability is framed very differently. Also, I will admit the AI/ML space is very weird, and people use buzzwords in all kinds of ways. It's not uncommon for someone to use these terms incorrectly, especially when they don't apply.
2021-02-27 16:23:01 @undersequoias Ethics takes on more controversial &
2021-02-27 16:20:58 @undersequoias Respectfully, I disagree. Engineering responsibility traditionally means creating a product that's functional &
2021-02-27 16:08:39 @bendotolsen @databoydg @undersequoias @mira_lane Yep - I also think the AI/ML space is weird. The way these concepts are interpreted and handled in this space is inconsistent in certain important ways from the original ideas.
2021-02-26 18:16:43 For reference, this is the children's book the title of our paper is based on - unbelievable parallels between the wrong-headed attempt to fit "everything in the whole wide world" into a museum and the silliness of trying to do the same with our benchmarks. https://t.co/dqSacgWCw4
2021-02-26 18:16:42 This was an incredibly low-key release (still very much work in progress, please provide feedback!) but it led to me making the slides I am still the most proud of. https://t.co/r5b4VeL8er https://t.co/ZT9lWzFAqT
2021-02-26 16:25:58 RT @kschwabable: I've been working on this story about Google pushing out @timnitGebru for the last 3 months. I spoke to 25 people involved…
2021-02-26 14:32:00 RT @FastCompany: In the field of AI, a handful of giant companies are able to direct the conversation, determine which ideas get funding, a…
2021-02-26 14:14:46 RT @kchonyc: https://t.co/eslcM1IO0v
2021-02-26 13:01:13 @mutalenkonde Actually speechless this time lol if anything they should be surveilling all the white collar criminals walking around Wall Street.
2021-02-26 12:58:29 OF COURSE THEY LAUNCH THIS IN THE BRONX ‼ https://t.co/qhhDGXS2Eo
2021-02-26 12:33:25 RT @FAccTConference: FAccT begins next week! If you haven't registered already, make sure you do so https://t.co/BOoUJgdDdr We are looking…
2021-02-26 08:44:03 RT @william_fitz: why hasn't a tech reporter written an article about timnit's advocacy for herself? it's incredible.
she's fierce &
2021-02-26 03:49:27 @Combsthepoet @BigDataMargins @WesternU lol there was a grimace on my face the whole time you were sharing those horrific stories.
2021-02-25 23:47:56 RT @JesseDodge: This is a big reproduction of results from a wide variety of papers. Their conclusion is that most modifications to transfo…
2021-02-25 17:18:43 Still can't believe what happened. It's unreal. @mmitchell_ai fought for me. She would remind me to speak up & @timnitGebru hypes me up every time. I wanted to quit, she's the reason I didn't. https://t.co/eJ3Y8Wn2cH
2021-02-25 13:34:00 @math_rachel @jeremyphoward Good luck on your next adventure, Rachel!
2021-02-25 12:50:01 @DavidVidalJD Thank you!
2021-02-25 12:48:33 @EstateMatt Lol what if it attacks a person in self defense? Like Asimov's three laws lol
2021-02-24 21:51:11 RT @nikitaljohnson: “AI doesn’t work unless it works for everyone.” Hear more from @rajiinio in this interview. https://t.co/GQs50IPtsN #…
2021-02-24 18:44:04 @hypervisible Students surveilled at school while their parents are being surveilled at work
2021-02-24 18:41:28 RT @FortuneMagazine: Your data is a weapon that can help change corporate behavior https://t.co/c3KBS2rbFc
2021-02-24 12:59:33 RT @Combsthepoet: Tomorrow. Please join us. https://t.co/rouT3LkeaS
2021-02-23 17:15:36 RT @ruchowdh: In my last article as Parity founder - a very salient and much-needed discussion on the risks of growing the "algorithmic aud…
2021-02-23 13:33:26 It speaks to the acumen of BLM organizers I think, and how they understood the importance of prioritizing stories over stats, when it comes to explaining the impact of unjust death.
2021-02-23 13:30:11 There's a moment in this interview I keep thinking about. Where Natasha talks about George Floyd &
2021-02-23 12:55:30 @dhh @dafacto It would be great if this supported Markdown.
If anything just to allow for users to add equations &
2021-02-23 11:49:33 RT @JaniceWyattRoss: Daughter 1 was taking an exam today being proctored by some type of software that apparently was not tested on dark sk…
2021-02-22 22:07:17 When you measure, include the measurer! https://t.co/rFuVPEe9Hu
2021-02-22 20:20:19 RT @BigDataMargins: Coming up this week! Big Data at the Margins is pleased to present: "Digital Policing: Facial Recognition Software &
2021-02-22 15:33:52 @mmitchell_ai Sending so much love Hope you are ok!
2021-02-22 15:29:28 lol English is such a disappointing language https://t.co/KG385DLBK1
2021-02-22 06:42:23 RT @mozilla: Technology has never been colorblind, but calling out racial inequities of data and algorithms means facing denials and backla…
2021-02-21 15:33:36 Closed questions with negative clauses are impossible to answer. "Is it never going to happen?" "Yes, it's never going to happen." = "No, it's never going to happen." "Are you not going?" "Yes, I'm not going" = "No, I'm not going."
2021-02-21 15:20:04 RT @techreview: Algorithms now decide your credit score, which patients receive medical care, which families get access to stable housing,…
2021-02-20 22:34:39 @red_abebe oh yeah @databoydg &
2021-02-20 22:32:10 @mutalenkonde Man, so sorry to hear that
2021-02-20 21:59:25 The fact is that there's currently almost nowhere that is 100% safe for Black people to do their work right now. That's incredibly upsetting - and a tragedy for the field. With Meg &
2021-02-20 21:59:24 I haven't been working for long, but have already witnessed a Black woman at a nominally civil society org being devalued (ie. literally paid $30k less than peers with less experience), then tossed out in an equally hostile &
2021-02-20 21:59:23 People need to understand that what's happening to @timnitGebru &
2021-02-20 21:21:22 Unbelievable how much of a difference one Black faculty makes - in her first year! Amazing.
https://t.co/9jNdhQTEes
2021-02-20 00:37:24 RT @dmetaxak: Google recently added a feature allowing users to search for "Black-owned businesses near you". They're advertising it all ov…
2021-02-20 00:28:38 @alexhanna
2021-02-19 22:17:48 Seriously. @mmitchell_ai & Completely Google's loss. https://t.co/JEymKCLDcu
2021-02-19 22:14:55 @alexhanna Sending strength to you, @cephaloponderer &
2021-02-19 22:12:57 @mmitchell_ai So sorry
2021-02-19 22:12:36 https://t.co/gutoyvSsXK
2021-02-19 18:45:06 @BrownSarahM Yeah, I also wonder how this compares to the audit of medical devices actually (groups like BSI kind of act as government certified corporate watchdogs, still hired by companies to meet some externally &
2021-02-19 16:20:28 Whoo! This is unbelievably inspiring, given this lab's earlier commitment to do tech + society work without taking *any* money from corporations. https://t.co/2J4GpeTRO3
2021-02-19 14:40:02 "He tried not to panic. He wanted to make an appeal ... But nobody, not his teachers, not Ofqual, not government ministers, would say what counted as evidence for him to mount a protest. He had been told what he was worth and given no means to disagree." https://t.co/TH4FoE9wPe
2021-02-19 14:24:00 @Mehtabkn Sure, feel free to send over an email! I've also written about internal audit pros/cons here (https://t.co/UcQTUanEvZ) &
2021-02-19 13:43:47 RT @ndiakopoulos: Once you start digging the algorithms are *everywhere* in government ... here's how the Treasury Department calculates ta…
2021-02-19 01:36:49 And by new perspective I mean perspectives prioritizing other stakeholders, and not just the company's interests. Product risks literally look different when you value different people. Audits from an outside lens thus include questions internal audits may not even think to ask.
2021-02-19 01:33:49 Though a completely neutral independent audit isn't really the goal.
Even regulators have interests that compromise &
2021-02-19 01:29:23 This is an important point when it comes to auditing. I actually consider paid consultants (ie third party auditors paid by a company to complete an audit) as interchangeable in certain ways with internal audit teams. They are valuable in their own way but certainly not neutral. https://t.co/bhH2gRnWwI
2021-02-19 01:25:13 RT @Mehtabkn: Third party ≠ neutral ...
2021-02-19 00:01:22 RT @shalinikantayya: So incredibly thrilled to announce that #CodedBias has won three SIMA Awards for Best Director, Best Sound Design, and…
2021-02-19 00:00:00 RT @erichorvitz: Knowing when &
2021-02-18 23:35:53 @mmitchell_ai
2021-02-18 20:06:32 RT @mmitchell_ai: ...And this is how I find out. I'm so glad for all the trust they've rebuilt. It seems I've been completely erased and my…
2021-02-18 17:26:22 RT @alvarombedoya: Dear world, Dr. @TimnitGebru is a luminary. Her work has changed critical policy debates and will do so for decades to c…
2021-02-18 13:38:38 RT @FAccTConference: REMINDER - today is the LAST DAY for conference registration at a reduced price!! Don't miss this opportunity to sa…
2021-02-18 04:53:49 RT @hatr: Colleagues of mine analyzed A.I.-based job interviews. The software promises to be able to detect personality traits and be "fast…
2021-02-18 01:03:42 @anoushnajarian @SilverJacket @alexhanna @IasonGabriel @mmitchell_ai @timnitGebru @KatieShilton @black_in_ai @NewYorker @ruha9 @safiyanoble @jovialjoy @robotsmarts @tiffani @uwcse Thank you for bringing up this issue, though! I do think even contacting Pedro was a mistake, and a decision all journalists should continue to be seriously challenged on.
2021-02-18 01:02:50 @anoushnajarian @SilverJacket @alexhanna @IasonGabriel @mmitchell_ai @timnitGebru @KatieShilton @black_in_ai @NewYorker @ruha9 @safiyanoble @jovialjoy @robotsmarts @tiffani @uwcse Ah, I see.
It seems @SilverJacket clarifies above that, in that tweet, he was really just tagging everyone he had contacted for the article, Pedro being amongst them. I also disagree with the decision to contact Pedro here, but don't think tagging him was intentional promotion.
2021-02-18 00:54:12 @timnitGebru @SilverJacket @anoushnajarian @alexhanna @IasonGabriel @mmitchell_ai @KatieShilton @black_in_ai @NewYorker @ruha9 @safiyanoble @jovialjoy @robotsmarts @tiffani @uwcse Yeah, I understand completely. Given all the active harassment he's engaged in and condoned, he should no longer be even approached as a valid source on this topic (or any for that matter). Sad to see him being continuously engaged with, despite his bad faith engagement.
2021-02-18 00:46:18 @SilverJacket @anoushnajarian @alexhanna @IasonGabriel @mmitchell_ai @timnitGebru @KatieShilton @black_in_ai @NewYorker @ruha9 @safiyanoble @jovialjoy @robotsmarts @tiffani @uwcse @anoushnajarian @timnitGebru fwiw I've read the article and Pedro is not mentioned by name, nor is he necessarily painted favorably. I totally understand the valid concerns here but just want to reassure you both that he wasn't legitimized or promoted in this particular piece.
2021-02-17 19:02:42 RT @FAccTConference: This year's ACM FAccT 2021 will be held virtually from March 3 to March 10, 2021 and registration is OPEN. Our early b…
2021-02-16 17:49:31 RT @garnettachieng: the tech + society resource list i was compiling is here it's not done yet, but so far it has: -open access books -co…
2021-02-16 16:52:17 RT @FAccTConference: ACM FAccT 2021 was slated to take place in Toronto, Canada. However, due to the ongoing COVID pandemic, it will be hel…
2021-02-15 22:10:37 RT @sarahookr: Yesterday, I ended up in a debate where the position was "algorithmic bias is a data problem". I thought this had already b…
2021-02-15 22:03:30 @mmitchell_ai @timnitGebru So upsetting.
Sending
2021-02-15 22:02:49 I assure you, Meg is the absolute last person on earth to ever deserve any of this. She is still one of very few I would describe as a true ally, constantly and consistently sharing power to make opportunity for people of color and advocate for them. So incredibly upsetting. https://t.co/nfGXRxOAMM
2021-02-15 19:18:58 The role of the affected population in defining &
2021-02-15 19:18:57 One notable oversight I noticed is this - @alexhanna is not just more "woke" or "informed" than the voice-to-face researcher. As a trans woman, she's also more at risk of the harm being discussed. This is exactly why she sees that harm more clearly, and exactly why she speaks up.
2021-02-15 19:18:56 This @NewYorker article heavily features our #NBIAIR workshop @NeurIPSConf & The conversation about research ethics is an important one that the AI field certainly needs to have. https://t.co/oO6U4al88v
2021-02-15 16:06:32 RT @hannawallach: Finally got a chance to post a (near) transcript of the talk I gave at the "Navigating the Broader Impacts of AI Research…
2021-02-15 04:11:35 @JulianPosada0 I genuinely wonder how many industry researchers pay attention to GDPR when deciding on how to leverage internal data for their projects. I think many interpret this to default to articulating commercial use &
2021-02-15 04:05:42 @MarkSendak oh curious - care to elaborate here?
2021-02-15 04:05:10 And in case people are wondering, no, the DeepFace paper makes no comment on consent. At all. Granted it was a diff time with a diff level of awareness, but the sense of entitlement to the data uploaded to people's profiles on the platform... is a lot. https://t.co/Ez4VipYqXx
2021-02-15 04:00:23 Why don't we have clear norms around internal corporate data &
2021-02-15 04:00:22 This secret Google dataset reminds me of the internal data used by Facebook in 2014 to train DeepFace - the first deep learning model for facial recognition, that held SOTA on LFW for a while.
That dataset was composed of 4.4 million labeled faces from actual Facebook profiles. https://t.co/DyjWXoG6pI
2021-02-14 13:00:59 RT @tdietterich: Every once in a while, something amazing comes out on @arxiv. Check out this cool new ML book by @mrtz and @beenwrekt . ht…
2021-02-14 03:57:09 @alexhanna Awful. Sending you all the love and strength you need right now.
2021-02-12 21:32:18 RT @sherrying: I’m proud to share https://t.co/RjHBOLsHzl, featuring political and economic solutions and ideas generated by CASBS’s networ…
2021-02-12 20:28:59 RT @TCIAMN: We did it, Minneapolis. The City Council just voted to ban facial recognition in our city — a huge victory for privacy and for…
2021-02-12 20:26:49 RT @suryamattu: Good example of why we invest the time in building tools that allow for persistent monitoring https://t.co/9QHn9fVxnG
2021-02-12 13:30:55 RT @Jeopardy: Can you do the math? @AJLUnited founder Joy Buolamwini (@jovialjoy) presents the MATH IN THE WORLD category, in partnership w…
2021-02-12 13:16:58 RT @Patterns_CP: For @WomeninScienceDay, we asked Advisory Board member Inioluwa Deborah Raji how to get girls interested in data science.…
2021-02-11 19:29:58 Yeah, what I had said was, "Recruiting already vulnerable people into a hostile environment is abuse." Why bring minorities into conditions where you know they will be disrespected or otherwise mistreated? Without offering protection, that serves as its own form of violence. https://t.co/AMrPwFNoTH
2021-02-11 18:46:02 RT @lawrennd: This is quite different to my experience. My breakthrough progress always occurred through inspiration. I needed time and…
2021-02-11 18:28:14 RT @lucy3_li: a little reprioritization https://t.co/GydAEqtc6m
2021-02-11 18:17:41 @mutalenkonde
2021-02-11 16:53:14 @lopalasi @mutalenkonde @daphnehk @Abebab The primary concern is how I can protect myself given the fact that I'm Black and that everyone always knows it.
2021-02-11 16:51:28 @lopalasi @mutalenkonde @daphnehk @Abebab As in the Nazis will always know we are Black, because we often look very different and can't easily hide this. And they already know which local businesses are black owned, already know where to find us. So identification/exposure as a Black person isn't often a primary concern.
2021-02-11 16:50:11 @lopalasi @mutalenkonde @daphnehk @Abebab Hm, this is something I hear a lot from Europeans, likely because the traits for which people experience discrimination there can be hidden (ie. immigration status, sexuality, religion, or ambiguous ethnic groupings like being Jewish or Hispanic). IMO Blackness is always visible.
2021-02-11 16:38:53 @lopalasi @daphnehk @Abebab @mutalenkonde I don't think so, in this case. Community-led lists have the same highlighter effect &
2021-02-11 16:33:12 @mutalenkonde @daphnehk @lopalasi @Abebab lol I'm sure @daphnehk meant well. Though yeah, race issues are always much simpler than we'd like to pretend, I'm getting into the habit of pointing out when the solution is there and the thing to do is just acknowledge the situation, however uncomfortable &
2021-02-11 16:26:32 @daphnehk @lopalasi @Abebab @mutalenkonde + an inappropriate process can yield a list that doesn't embody the original cause of the effort or does so without transparency. If not done well, this could be an oversaturated list of any Black business vs. a mode to highlight undervalued gems in true need of further promotion
2021-02-11 16:16:04 @daphnehk @lopalasi @Abebab @mutalenkonde hm, I'm not sure if it's that complicated. It most definitely matters who made the list &
2021-02-11 15:19:27 RT @beenwrekt: I’m excited to share a new textbook @mrtz and I wrote: "Patterns, Predictions, and Actions: A Story about Machine Learning."…
2021-02-11 00:47:14 With all this talk of 90 hr "intense work" weeks, I'm feeling especially grateful for the positive role models I've miraculously surrounded myself with.
People that prioritize rest, can say "No" &
2021-02-10 22:50:02 @jackclarkSF
2021-02-10 19:47:27 RT @mariadearteaga: This year's @FAccTConference offers: 1⃣Discounted registration for countries listed by ACM as economically developing (…
2021-02-10 19:46:20 "Discovering multi-billion dollar spending deltas across different info sources from same government" https://t.co/jCugLCmrGs
2021-02-10 19:44:42 RT @Patterns_CP: Patterns is proud to celebrate Black women in STEM, including Advisory Board member @rajiinio. Inioluwa Deborah Raji belie…
2021-02-10 16:52:33 RT @jovialjoy: #ad As part of @Olay’s #FaceTheSTEMGap mission, I’m making an appearance on @Jeopardy to raise awareness for the gender gap…
2021-02-10 13:58:01 RT @FAccTConference: Registration is now **OPEN** for ACM FAccT 2021! The conference will be held March 3-10. Early bird pricing is availa…
2021-02-10 04:19:09 RT @red_abebe: The video is up: https://t.co/tSattxKKcH It is well worth your time and I especially appreciate the discussion on time. T…
2021-02-09 16:19:50 RT @bobehayes: Webinar February 9, 2021
2021-02-08 18:19:10 RT @easears: From the wrongful arrest of a Black man based on a faulty facial recognition tool to the firing of @timnitGebru from Google, 2…
2021-02-08 18:18:20 "Beneath the veneer of new & Ironic when tech meant to push us forwards ends up entrenching past mistakes. https://t.co/Ptor0ZKW2i
2021-02-07 23:00:44 RT @mercola: “Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't…
2021-02-07 19:00:08 RT @RaceNYU: Incredibly important new study from @rajiinio &
2021-02-07 00:26:25 @genmaicha____ (Retweet)
2021-02-06 16:49:50 Yes! And they should really learn about engineering responsibility too, including something we call... documentation.
https://t.co/LrcSoRvfg8
2021-02-06 13:43:20 I still remember when my little sister once asked me if there was an African Disney princess and I had to say "Nala" (from Lion King), as if we don't have this rich history of actual human princesses with incredible, interesting stories.
2021-02-06 13:35:56 RT @UpFromTheCracks: I just sent my kid to school IRL for the first time this week and discovered they’re using facial recognition temperat…
2021-02-06 13:15:07 I applaud the push for diversity in #Bridgerton but can't help thinking - so many actual Black characters were actually around back then, there are dozens of great stories of Black royalty in Africa. Wish someone would finally write a series about them.
2021-02-06 12:19:32 RT @_KarenHao: As @rajiinio says, “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you hav…
2021-02-05 23:37:32 RT @mmitchell_ai: I am concerned about @timnitGebru 's firing from Google and its relationship to sexism and discrimination. I wanted to sh…
2021-02-05 20:37:59 RT @jennifer_e_lee: Y'all, SB 5116, a bill that would ban discrimination via automated decision-making systems has just PASSED out of the…
2021-02-05 20:35:47 RT @dsango: “We were shocked by the outcomes of the research/project/etc.” The dataset: https://t.co/JkpLuGzkGs
2021-02-05 14:22:46 RT @robotsmarts: Facial recognition - how did we ever get here? When most of us researchers started in this domain - it was an interesting…
2021-02-05 14:13:12 Also, something that doesn't quite make it into the article is how facial rec was motivated by a certain type of surveillance problem from the beginning. It's unclear how much, in that context, consent was ever to be prioritized in applications anyways.
2021-02-05 14:07:33 @neilturkewitz @hypervisible @_KarenHao @techreview Thanks for sharing this! Will check it out.
2021-02-05 14:06:35 The fact that we fall into this trend of data hoarding *even when handling the most sensitive biometric info* reveals something about our priorities as a field and how it's shifted away from recognizing the humanity of the people present in our accumulating datasets. https://t.co/HHxoPwZedS
2021-02-05 14:06:34 In this article, @_KarenHao summarizes our paper so concisely. Facial recognition is just the latest (& https://t.co/w2qTe2WWyy
2021-02-05 13:46:30 RT @ambaonadventure: https://t.co/BRK58Sl6OK Read @_KarenHao's excellent summary of @rajiinio and @genmaicha____ important new work. (1/3)
2021-02-05 13:46:15 @nunuska @kelarini Wow, that was lovely. Thanks for sharing!
2021-02-05 05:29:44 @kelarini This is honestly what hurts me the most
2021-02-05 01:19:35 @alexhanna lol clearly all the judges watch the same TV
2021-02-05 01:17:43 @alexhanna I know I shouldn't care but man, it hurts.
2021-02-05 01:16:21 ...how is there not a *single* woman of color on this list? #GoldenGlobes https://t.co/slvlfkXE5O
2021-02-04 23:41:22 RT @melindagates: Pass this along to a student in your life who should hear from @serenawilliams, @YaraShahidi, @rajiinio and more that com…
2021-02-04 20:35:51 RT @lauramoy: Hi friends! Last week on my birthday (!), Illinois Law Review published my first law review article, "A Taxonomy of Police Te…
2021-02-04 13:10:03 @hildeweerts @roeldobbe @Lauren_wbg oh my goodness, thank you!
2021-02-04 13:07:58 RT @ForbesTech: https://t.co/rA41F83y76
2021-02-04 12:29:20 @roeldobbe @hildeweerts @Lauren_wbg oh man, we really need to put this up on arXiv!
2021-02-04 05:44:47 This is an interesting full circle moment - when working on outreach via @hashtag_include, we heavily relied on @codeorg resources to bring coding curriculum to local students in low income & Incredible to actually be able to contribute something back!
2021-02-04 05:34:50 Happy to see this video out in the world!
It's never been a more important time to celebrate #BlackVoicesForCS https://t.co/mpK79HF9Bn 2021-02-04 00:17:08 RT @vineshgkannan: Yesterday was my last day at Google. I left because Google's mistreatment of @timnitGebru and @RealAbril crossed a perso… 2021-02-03 23:55:08 RT @PrivacyPrivee: News release: Clearview AI’s unlawful practices represented mass surveillance of Canadians, commissioners say https://t.… 2021-02-03 22:47:41 @mmitchell_ai LOL I love this. 2021-02-03 21:03:17 Ahaha, I love this thread! Pure joy Congrats! https://t.co/c49a4aMymE 2021-02-03 16:52:46 @databoydg Don't even get me started on that one 2021-02-03 16:38:25 @BrownSarahM 2021-02-03 14:12:44 Knives don't require the hoarding of my biometric information. Also, knives work. Can't stand this dual use argument when used to justify AI deployments that aren't even functional. Also annoyed with comparisons that ignore the tech's unique, data-specific ethical implications. https://t.co/Uq6Y7jBhrX 2021-02-03 01:18:04 RT @LeonYin: Want EZ data dictionaries/directories for README files? Here's a short script to turn a dictionary like, {"column": "descripti… 2021-02-02 23:01:35 @mdekstrand I don't even know how to start thinking about this lol 2021-02-02 22:35:57 @undersequoias Honestly? The scare quotes around "suspicious persons" - I don't think NYT would have stated that ironically 2021-02-02 22:28:43 @undersequoias lol at first I thought that was a quote from the article 2021-02-02 22:24:06 RT @samuelmcurtis: Great preprint: the segmentation of FRT development into periods is very informative. https://t.co/UhJM8kF1zj https://t.… 2021-02-02 22:23:33 I really hope people working on facial rec development & https://t.co/LjbXrVxKtU 2021-02-02 22:17:24 @jackclarkSF Thanks! We're hoping it can be a resource for those trying to make decisions about the technology and it's use, especially those thinking about audit design. 
2021-02-02 22:14:57 Facial recognition was literally conceived with the task of matching mugshot photos in a book of "suspects" &
2021-02-02 21:26:45 The main lesson we learn is this: from the *very* beginning, facial recognition was designed to identify suspects with the goal of apprehension, whether in the context of law enforcement, intelligence or immigration. Current "benign" commercial uses are relatively new inventions.
2021-02-02 21:26:44 A long overdue pre-print is finally out today! Me &
2021-02-02 19:57:18 RT @JuliaAngwin: So true. Tech coverage has become a civil rights beat. https://t.co/zNqeaKubdp
2021-02-02 12:07:32 @Abebab There's a good critique paper from our ICML workshop (https://t.co/FLeJhnI2ag) - one of my fave contributions, "Participation is Not a Design Fix for Machine Learning". They also wrote an op-ed on it: https://t.co/AS035TJgLo
2021-02-01 22:53:39 @databoydg @wgrathwohl lol this is blackmail...
2021-02-01 21:01:45 @alexhanna @megfenway LOL the only thing I miss
2021-02-01 19:09:43 Wow - thanks for the shoutout! Happy #BlackHistoryMonth ! https://t.co/7jh4IrkPKG
2021-02-01 18:17:32 @mer__edith @mmitchell_ai
2021-02-01 18:11:05 Now, I've found out he's also gone after @mmitchell_ai as well. People need to understand how absolutely exhausting it must be for these women to be facing this harassment, for weeks now. https://t.co/prLOjpu0eU
2021-02-01 01:03:34 @zehavoc @yoavgo @krismicinski You too!
2021-02-01 01:02:49 @zehavoc @yoavgo @krismicinski If it helps, I'm on Chrome and the "I'm feeling lucky" button is still there lol. @yoavgo I think you need to literally go to https://t.co/2myEhOAyX6 to see it!
2021-02-01 00:59:40 @zehavoc @yoavgo @krismicinski You can check out the livestream though, I think the panels and talks covered a lot of the main directions captured in the papers.
2021-02-01 00:59:05 @zehavoc @yoavgo @krismicinski Gah, unfortunately not yet. We were thinking of writing something but felt like ideas are still maturing, so we decided to hold off until we've run this a couple more times to figure out how people were actually defining "participatory ML".
2021-02-01 00:41:21 @yoavgo @krismicinski We actually organized a workshop at ICML last year trying to get at how to technically design ML systems in such a way, to give individuals the power to shape the perspective informing their predictions and the objectives of the model. You might enjoy this: https://t.co/FLeJhnI2ag
2021-02-01 00:39:26 @yoavgo @krismicinski ok think I'm starting to get what you're worried about and it's reasonable. I think my take is that a model explicitly encoding certain inclusive values is safest for me so I prefer that being the default. But ideally, I think we'd want something like https://t.co/Emy7Ehtdah
2021-02-01 00:32:33 @yoavgo @krismicinski But they are already pushing a uniform agenda into the world via their products? It's just not actively or consciously acknowledged. And they're allowed to do that I guess, as an org with certain values. If they chose to push something different &
2021-02-01 00:25:58 @yoavgo @krismicinski What's the concern / feared outcome here? Sorry, I don't get the risk of acting with intention about the perspective/biases one chooses to represent with their data or model.
2021-02-01 00:24:43 @yoavgo @krismicinski I don't think one can "remove" biases. There's no such thing as an "unbiased", "neutral", "objective" value-free dataset here. The default still encodes societal default biases (which tend to oppress certain groups); every perspective embedded in data includes some kind of bias.
2021-02-01 00:21:11 @yoavgo @krismicinski I think it's as simple as that. This isn't about whose view wins, but just recognizing the reality of bias in every perspective (including the nominally "default", "neutral" perspective) and understanding that we have the option to approach things with intention instead.
2021-02-01 00:19:44 @yoavgo @krismicinski No, I'm not really proposing anything. I'm saying the current biases are imposed perspectives that aren't really neutral. They represent a certain worldview that's actually not objective. Those oppressed under that worldview actually have the option of doing things differently.
2021-02-01 00:17:20 @yoavgo @yuvalmarton @krismicinski Yeah, that's allowed. Curious though - what's the advantage of keeping that definition unsaid, and keeping these biases implicit under the pretense of "neutrality"?
2021-02-01 00:15:32 @yoavgo @krismicinski Yeah, I think the AI ethics stance here is that "neutral" doesn't exist, so we might as well be explicit in what decisions we're making and what's going on. "Neutral" is really anchored to a bunch of social biases and other things, which means it's not really objective at all.
2021-02-01 00:13:13 @yoavgo @yuvalmarton @krismicinski I think being explicit in deciding what's good is better than implicitly deciding what's good?
2021-02-01 00:11:37 @yoavgo @krismicinski The proposal of alternative biases is there to remind people that the current, default biases aren't the only ones we have to accept. We can choose differently, especially if we claim to have values that prioritize inclusion, the protection of people, etc.
2021-02-01 00:09:34 @yoavgo @krismicinski It's hard for me to assess that proposal without reading the paper. "Positive" could translate to "non-violent" or "less extreme", which would skew recs away from content that will radicalize ppl in harmful ways. Either way, this isn't the argument a lot of AI ethics is making.
2021-01-31 23:33:12 RT @conitzer: The AI, Ethics, and Society Conference's (@AIESConf) submission deadline is this Sunday (as long as it's still Sunday anywhe…
2021-01-31 23:31:03 @michaellavelle @yoavgo @krismicinski I agree with this completely - my perspective is actually that much of the harmful "biased" output coming out of LMs is also just factually incorrect (i.e. "All Muslims are ..." claims). Internet data is corrupt data and there are a lot of reasons to be skeptical.
2021-01-31 22:56:03 This guy has been targeting and harassing women on Twitter for weeks, including @timnitGebru & Exactly how many more times do we need to report him to get his account suspended? Ugh. https://t.co/PPzFUvCd7V
2021-01-31 22:54:12 @TaliaRinger This is disgusting. I'm so sorry!
2021-01-31 22:44:53 @yoavgo @krismicinski It's more of a comment of "look at how society treats this marginalized group &
2021-01-31 22:44:03 @yoavgo @krismicinski That's a reasonable concern. My understanding, though, is that this is a question of intention more than the installation of one view over another. Right now, the biases we observe are being presented as some objective fact or truth. A lot of that work points out this is false.
2021-01-31 22:33:32 @yoavgo @krismicinski Either way, just because atm it's not very actionable doesn't make the critique any less valid - that AI is a technology entwined with society itself (to a degree more intense than other technologies that don't involve data), and that AI's issues are tied to society's issues.
2021-01-31 22:29:44 @yoavgo @krismicinski You mentioned it was coming from "non-tech" people. Sorry if I misunderstood!
2021-01-31 22:28:50 @yoavgo @krismicinski I'll admit, though, that this kind of critique is often hard for technical researchers to grapple with, because it's an imprecise critique, and it can be really frustrating due to that impracticality. There are technical responses for what is being said, but that's still evolving
2021-01-31 22:25:10 @yoavgo @krismicinski I'll also challenge the idea that the critique is only coming from non-technical perspectives (it's mostly women &
2021-01-31 22:22:12 @yoavgo @krismicinski I don't know of a lot of AI ethics work that's framed this way. I'd say their point is that society's issues (including racism) shape AI systems - i.e. AI doesn't originate from "nowhere" - and so it's challenging to make AI less harmful without properly considering that origin
2021-01-31 02:32:01 @kchonyc lol I feel like they just listed the team's alma maters
2021-01-31 02:23:38 @JennaJrdn @hedgielib @KristinBriney @LibSkrat @elliewix @paigecmorgan @thecorkboard @jon_petters @ragamouf @Hao_and_Y @HannahGunderman @melissaekline @Dorris_Scott @PhDToothFAIRy Thanks for putting this together - very helpful :)
2021-01-30 19:27:52 @LibSkrat Thanks for sharing - glad to check it out! I admit I'm really new to this idea of data librarians but quite curious. Any recommendations for where to start reading to learn more would be really helpful!
2021-01-30 18:41:08 @JennaJrdn @hedgielib @KristinBriney @LibSkrat @elliewix @paigecmorgan @thecorkboard @jon_petters @ragamouf @Hao_and_Y Wow, thanks so much for an incredible explanation!
2021-01-30 18:16:08 @JennaJrdn Hm was looking this up &
2021-01-30 18:11:01 Yes! Good reminder - @nycgov does a lot of things right with their Open Data initiative. I appreciate! https://t.co/flJQZOs9rd
2021-01-30 17:43:02 Wow - proof @themarkup is actually superior. https://t.co/kEoKHbL4Af
2021-01-30 17:29:23 @jackclarkSF @OECDinnovation Makes no sense - data is just as portable as functions, context is just as necessary. Also, commercial context is likely where this has a chance of getting addressed - public sector dataset variable names are rife with niche assumed knowledge that never gets articulated anywhere
2021-01-30 17:25:58 @jackclarkSF @OECDinnovation Hm, I honestly can't think of anything since it's not a widespread practice at all. Some GitHub repos will have a data index in the README but that's it. Even eng teams that are super meticulous about defining function variable names get sloppy when it comes to data. Don't know why
2021-01-30 17:10:49 I call it a "data directory", some call it a "data dictionary". With every data release, someone will inevitably ask for one. It's almost never provided...because for the vast majority of datasets it doesn't exist. https://t.co/HdeJuAGHx9
2021-01-30 11:48:06 RT @jachiam0: Does anyone else feel a little spooked by the similarity between r/WSB culture and the 4chan far right radicalization pipelin…
2021-01-29 21:38:33 + congrats to @Aaroth & It's honestly really cool to see researchers working on these problems being recognized more broadly. An encouraging moment that indicates that we're on the right path! https://t.co/T5uEqZ5l6z
2021-01-29 21:38:32 Today I learnt @schock WON an Association of American Publishers (AAP) PROSE Award for Excellence in Physical Sciences and Mathematics for the book "Design Justice" So happy to see this work recognized!! @AJLUnited for the win (literally)! https://t.co/mvpqSxjcBl https://t.co/IBGYxnZM03
2021-01-29 20:38:03 @geomblog @KLdivergence @Aaron_Horowitz @mmitchell_ai @SwitchedOnPop what, how is there so much discussion about this random teen pop tune??
2021-01-29 20:37:15 @KLdivergence @Aaron_Horowitz @mmitchell_ai lol you should! It's a compliment
2021-01-29 18:26:30 RT @_KarenHao: The latest image-generation algorithms are trained using unsupervised learning (unlabeled images), removing any possible bia…
2021-01-29 18:25:42 @mmitchell_ai lol I needed this! Yeah, trying to learn from @KLdivergence on how to seamlessly increase my shitposting &
2021-01-29 18:21:19 @mmitchell_ai
2021-01-29 18:00:25 RT @BigDataMargins: Big Data at the Margins is pleased to present: "Digital Policing: Facial Recognition Software &
2021-01-29 18:00:00 @mikarv @BigDataMargins lol I wish! Still looking for a non-ugly template :(
2021-01-29 17:53:45 @mikarv @BigDataMargins Perfectly captures the absolute drama of surveillance
2021-01-28 22:37:44 Wow. "As I watched a mob batter down the Capitol...I thought of my work in machine learning, thought of that drone strike, and wondered ‘did I unwittingly help create this?’" from "My friend radicalized. This made me rethink how I build AI" by @thejaan https://t.co/dEHdvtOyfu
2021-01-28 18:00:54 RT @_KarenHao: Ok AI Twitter hive mind! It's that time of year when @techreview starts researching our annual list of Innovators Under 35.…
2021-01-28 09:35:25 @kharijohnson @_KarenHao lol it's coming soon Will tag you once it's out!
2021-01-28 02:11:59 @nsthorat lol yeah, I actually agree here. So depressing though - the actual fate of people depends on whether those that do care can make interventions easy enough to implement by those that don't. ugh.
2021-01-28 02:10:23 @nsthorat But why are you building a car without understanding the importance of adding brakes? If you need someone else to build the brakes for you, then you really can't be trusted to build a vehicle that's safe.
2021-01-28 02:07:00 @nsthorat I agree that for ML in particular, eng responsibility is not always clear &
2021-01-28 02:04:07 @nsthorat Intent doesn't matter. If I built and sold a car with no brakes because I didn't feel like adding them, and thought I would get away with it, I'm still responsible for the harm that comes from that.
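The "data dictionary" tweets above describe turning a `{"column": "description"}` mapping into a README table, as in the retweeted @LeonYin script. That script isn't reproduced in the feed, so the following is only a minimal sketch of the idea; the function name and demo columns are hypothetical:

```python
# Minimal sketch (hypothetical, not the script referenced in the tweet):
# render a {"column": "description"} mapping as a Markdown table
# suitable for pasting into a README data dictionary.

def data_dictionary_markdown(columns: dict) -> str:
    """Render column-name -> description pairs as a Markdown table."""
    lines = ["| Column | Description |", "| --- | --- |"]
    for name, description in columns.items():
        lines.append(f"| {name} | {description} |")
    return "\n".join(lines)

if __name__ == "__main__":
    demo = {
        "age": "Age of respondent in years",
        "zip": "5-digit ZIP code of residence",
    }
    print(data_dictionary_markdown(demo))
```

Keeping the generator next to the dataset means the dictionary can be regenerated whenever columns change, rather than drifting out of date in the README.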
2021-01-28 01:50:09 The most tragic lesson I've learnt doing any kind of algorithmic auditing is that companies will not bother to make their product work if they know the group it doesn't work for is powerless to stop them.
2021-01-27 23:45:44 About to talk to my laptop screen for an hour - if you're registered to watch me do so live, I'll see you soon! https://t.co/zmlg9h6j9G
2021-01-27 22:26:42 lol seriously! I even recently quoted his words in a paper. Him &
2021-01-27 14:03:47 RT @ndiakopoulos: This is an important critique of the limitations of for-hire algorithm auditing. Accountability only happens when there a…
2021-01-27 11:44:50 @alixtrot
2021-01-26 23:03:50 @aylin_cim @KendraSerra @ryanbsteed @FAccTConference This is my understanding of current norms as well. These norms are worth revisiting tho. It's already led to the slippery slope of IBM's Flickr face haul (https://t.co/Yvn4UNScHg) & https://t.co/7SjQsS5wI4
2021-01-26 22:15:15 @Charles_Butlerk @AOC This is gross and disrespectful. Please delete this or I'll report.
2021-01-26 21:59:38 @davedarko Good questions! It's definitely creepy - but it's a norm in computer vision to use any image that's public domain, especially the faces of public figures, with the assumption that well photographed politicians &
2021-01-26 21:21:37 Interesting thread. Fortunately, @timnitGebru & I hope ML researchers read it! https://t.co/5n4y59wmRr
2021-01-26 20:29:14 I'm also reminded of this - the infamous PULSE face depixelizer is another solid demo of bias in unsupervised image models. https://t.co/2B8ZlFNHsv
2021-01-26 19:55:53 Well, maybe I shouldn't be so surprised. @Abebab & https://t.co/9eXjtRGmEd
2021-01-26 19:25:37 Wow, the results here are genuinely alarming. Just like with auto-generated text, auto-generated images will encode harmful biases - here we have a cropped image of @AOC being completed with generated pixels depicting overtly sexual attire/nudity! https://t.co/OJHB01zU6m https://t.co/LSqu2SExzZ
2021-01-26 16:42:35 RT @echo_pbreyer: @simonilse In the US, the police bases operative decisions on AI algorithms, explains @rajiinio: "The police use of the…
2021-01-26 11:15:00 @Tarakiyee Oddly enough, I just saw this before seeing your tweet. GPT-2's anti-Muslim bias is definitely a thing! https://t.co/BepdydpVhg
2021-01-25 23:23:40 "Perhaps training language models on data from the open Web might be a fundamentally flawed approach." Yeah - I have never agreed with a conclusion more!
2021-01-25 23:23:00 Please read this amazing blog post by the authors that lays the risk out even more clearly - some pretty insane examples here, including GPT-2 sharing leaked/doxxed info, and copyrighted info in addition to personal data. link: https://t.co/gg1xHyriX7 https://t.co/l59RvKnkUS https://t.co/YlZV0hmdLk
2021-01-25 23:14:27 @adversariel @IreneSolaiman @Eric_Wallace_ @florian_tramer @mcjagielski That was an amazing read - thanks for sharing &
2021-01-25 22:57:04 @jabuppartyon Yeah, was very surprised the press did not pick it up lol
2021-01-25 22:56:25 @IreneSolaiman @adversariel Very spooky. Not sure where to file this mentally - in addition to bias, data corruption, environmental impact, consent, etc. there's clearly an infinite number of things to think about!
2021-01-25 22:28:49 An often ignored aspect of large language models is the security threat of input data leakage. It's no joke. It's not just Gmail auto-completing my home address - even models like GPT-2 trained on "public" Internet data will memorize sensitive info (e.g. https://t.co/9bK1P1klnV) https://t.co/1YU1NJzDsf
2021-01-25 22:05:55 @morganklauss Well deserved!
2021-01-25 20:32:45 RT @ZoeSchiffer: More than 150 computer science educators have signed an open letter in support of Dr. Timnit Gebru. "We believe the comput…
2021-01-25 19:40:05 RT @benzevgreen: Reminder: two days left to apply for the @FAccTConference Doctoral Consortium! We welcome doctoral students in all discipl…
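The data-leakage tweets above describe how language models trained on scraped text can memorize and regurgitate sensitive strings. As a toy illustration only (assumptions: this is not the extraction attack from the paper the tweets link to, and a character-level Markov chain is a stand-in for a real language model), a tiny model trained on text containing a fake "secret" leaks it verbatim when prompted with its prefix:

```python
# Toy illustration of training-data memorization/leakage.
# A character-level Markov "language model": for each 4-character
# context, count which character follows; generation greedily picks
# the most common continuation. Prompting with the prefix of a
# "private" span in the training text reproduces the rest of it.

from collections import Counter, defaultdict

def train(text: str, order: int = 4) -> dict:
    """Count next-character frequencies for each `order`-char context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        model[text[i:i + order]][text[i + order]] += 1
    return model

def generate(model: dict, prompt: str, length: int = 40, order: int = 4) -> str:
    """Greedily extend the prompt with the most likely next character."""
    out = prompt
    for _ in range(length):
        context = out[-order:]
        if context not in model:
            break  # unseen context: the toy model cannot continue
        out += model[context].most_common(1)[0][0]
    return out

if __name__ == "__main__":
    # The fake secret "12 Fake Street" is embedded in the training text.
    corpus = "public text ... my address is 12 Fake Street ... more public text"
    lm = train(corpus)
    # Prompting with the start of the private span leaks the rest of it.
    print(generate(lm, "my address is "))
```

Real LMs are vastly larger and the extraction attacks are more sophisticated, but the failure mode is the same shape: rare, unique training strings can be reproduced exactly given the right prompt.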
2021-01-22 22:13:19 RT @RaceNYU: Thanks to the work of people like @jovialjoy, @timnitGebru, @rajiinio, @GeorgetownCPT, we know the technology does not work fo…
2021-01-22 20:13:30 @vc @sethbannon @zebulgar @winnerstakeall I get that some will think the protection and consideration for those that may be harmed is not worth the dip in efficiency, but I think that caution in investment is more than justified, especially if these people are unlikely to experience the benefit anyways.
2021-01-22 20:09:52 @vc @sethbannon @zebulgar @winnerstakeall Her only point is that we can't run ahead and innovate without being careful. And we definitely can't justify any harm done based on just the promise of some aspirational and imagined benefit. The response to this should be an acknowledgment of slowing down, being more cautious
2021-01-22 20:07:41 @vc @sethbannon @zebulgar @winnerstakeall But also this isn't even really about who does it - the slowness of govt funding processes is a feature, not a bug, to push for the additional consideration of some often overlooked impact of doing certain kinds of research. We can import that caution into industry research too.
2021-01-22 20:06:09 @vc @sethbannon @zebulgar @winnerstakeall Yeah - why can't the government do it? We can tax corporations or the rich to fund it. The same researchers will be involved anyways, just with diff affiliations.
2021-01-22 19:53:47 @sethbannon @vc @zebulgar @winnerstakeall That interpretation is reasonable but really not what she's saying in the thread. She clarifies this in subsequent tweets: https://t.co/2FkLYppSrT
2021-01-22 19:26:22 @sethbannon @vc @zebulgar @winnerstakeall With my own comments, I guess I'm trying to clarify that the kind of equity she's calling for is more of a participatory &
2021-01-22 19:19:21 @sethbannon @vc @zebulgar @winnerstakeall The context is that Big Tech cos are training large models & https://t.co/lJ7Xr36svA

Discover the AI Experts

Nando de Freitas Researcher at DeepMind
Nige Willson Speaker
Ria Pratyusha Kalluri Researcher, MIT
Ifeoma Ozoma Director, Earthseed
Will Knight Journalist, Wired