Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Recognized by 61 AI Experts
The Expert's latest posts:
2024-11-28 18:08:16 RT @mayameme: @Abebab has an AI accountability lab now and is Hiring | AI Accountability Lab https://t.co/Qi2PEkTceQ
2024-11-28 16:02:36 So happy for Abeba! A huge milestone for her AI accountability work! https://t.co/z4NgWEFfyp
2024-11-28 00:12:28 @HellinaNigatu @alsuhr @sarahchasins @monojitchou Congrats!! Lol dying at the iconic African presentation template
2024-11-26 23:51:55 RT @Knibbs: Exclusive: A new analysis found that more than 50% of LinkedIn blogs are written with AI. For anyone who spends time on LinkedI…
2024-11-26 20:44:59 Excited to see this - a solid hire for US AISI! https://t.co/tAZjMxq9z3
2024-11-24 01:26:19 RT @sebkrier: Do you think external third party model testing is important? Do you have experience working on frontier safety (e.g. CBRN),…
2024-11-24 01:14:31 Work with Marissa! Genuinely one of the best people to learn from on how to translate data science work into legitimate impact! https://t.co/SipdKX62nq
2024-11-24 01:13:21 RT @mkgerchick: ACLU is hiring an Algorithmic Justice Fellow to work on cutting edge projects focused on digital rights — come work with us…
2024-11-23 15:51:44 @nrmarda Hm yeah I get that but why can't "AGI" be a network of less capable models? Or a more accessible, more usable lower capability model, etc? Like imo even by their own definitions it's worth monitoring model &
2024-11-23 15:41:46 @YJernite Wow, what? Adoption could be as simple as number of consumer and enterprise users
2024-11-23 01:47:12 .. why can't AI product risk categories operate the same way? Clearly the risk of ChatGPT and the like is linked to the scale of its adoption, which domain it gets deployed into, etc. Genuinely curious about why this happened - wonder if this is one of those arbitrary anchors.
2024-11-23 01:37:54 Don't get why AI Safety Frameworks only focus on risk being correlated to increases in "capability" (ie how much an individual model can do) vs other things (eg. the scale of adoption/impact, domain of use, etc)? For eg., the DSA classifies risk on platforms by number of users
2024-11-22 19:24:50 RT @KLdivergence: Hi! I'm hiring a Research Engineer to join my team at Google DeepMind for the year. You'd be working with a great, interd…
2024-11-22 19:24:11 @thegautamkamath Fwiw this is really not what ethics review is for
2024-11-22 18:51:55 @jessicadai_ Oh ok one sec one sec
2024-11-20 11:57:05 This is .. alarming to say the least. The bureaucratic over-scrutiny of medical insurance claims (via ~50 algorithms ?!) in order to systematically deny mental health care. https://t.co/NJFtT12zgO
2024-11-20 05:24:46 RT @Manderljung: The EU Commission is looking for a Lead Scientific Adviser for AI. Would strongly encourage technical folks apply. Giv…
2024-11-19 01:00:51 @iajunwa @emory @EmoryLaw Congrats!
2024-11-18 17:30:30 Great to see our paper w @HellinaNigatu (https://t.co/xyPqPF5s6r) mentioned in this @WIRED article: https://t.co/DSYEl8UxX2 https://t.co/DzFtLfXRtF
2024-11-15 00:50:31 Anyways, I am also in the other place (bsky!) - same username @rajiinio :)
2024-11-15 00:47:56 Whoa - the examples in this thread are kind of concerning. Is X trying to encourage actual ad purchases by making it seem like more accounts are advertising on this platform than there actually are? That would be so strange &
2024-11-15 00:41:08 @adash0193 Yeah, np - thanks for flagging!
2024-11-15 00:28:48 @adash0193 Whoa, that's super weird... Yeah I most definitely didn't buy an ad
2024-11-15 00:15:39 Ah, so excited to see that this paper won an Outstanding Paper Award at EMNLP! I've learnt so much from @HellinaNigatu about how to think about the complex politics of "low resourced" languages
2024-11-13 02:16:10 @MFGensheimer @zakkohane @AMIAinformatics @CALonghurst @UCSDHealth @doc_b_right @CedarsSinai @UCBerkeley @NEJM_AI I disagreed with that too actually. I think AI products are much more similar to medical devices than drugs, &
2024-11-12 16:27:31 Excited for this! It's too easy to see "values" in ML design, development &
2024-11-11 03:15:09 RT @zakkohane: How To Put The Missing Human Values Back Into AI: Looking forward to our panel @AMIAinformatics #AMIA2024 Tuesday https://t.…
2024-11-08 01:04:38 RT @_ahmedmalaa: Please retweet: We're recruiting PhD students at UC Berkeley and UCSF! Please apply if you are interested in machine lea…
2024-11-06 22:00:37 @Ket_Cherie At the time it made sense but ultimately a new crop of rules will need to come from legislative interventions in order to be harder to reverse long term. We can't rely on executive interpretation as the main mechanism for defining new AI guardrails. + hope you're well, as well!
2024-11-06 21:56:58 @Ket_Cherie Yes, all regulation is dependent on the executive branch for enforcement but there was a lot of rule-setting happening at that level for AI policy in particular. Different agencies and the WH were re-interpreting or updating existing rules to cover the needs of dealing with AI -
2024-11-06 19:27:58 AI policy is, at present, way too dependent on a cooperating executive branch. Part of this trend was pragmatic (ie agencies hold the tech expertise, legislation is slow &
2024-11-01 01:36:51 RT @sarameghanbeery: FAT BEAR WEEK!!!! Happy Halloween from the BEAR-y Lab https://t.co/UbLcYTT2CT
2024-10-29 18:24:34 @QVeraLiao @UMichCSE Congrats, Vera!! Excited to see what you do in this new role!
2024-10-29 18:18:37 @zephoria Congrats, danah!! Your students will be so lucky to learn from you
2024-10-29 16:11:00 @mathver Lol the way I held my breath
2024-10-29 16:01:03 RT @mathver: Today the European Commission proposed how Art. 40 of the Digital Services Act (#DSA) could work in practice. In a worldwide f…
2024-10-29 15:46:20 RT @FAccTConference: We've released the CFP for #FAccT2025, which will be held in Athens, Greece! Abstracts are due on January 15th, pap…
2024-10-28 17:25:40 RT @ShayneRedford: Webinar on The Future of Third-Party AI Evaluation starting soon! At 8 am PT / 11 am ET join the zoom link here: ht…
2024-10-28 15:00:04 RT @kevin_klyman: Starting in half an hour - check out our workshop on the future of AI evaluation! Co-organized with @ShayneRedford, @saya…
2024-10-28 14:12:53 RT @2plus2make5: Please retweet: I am recruiting PhD students at Berkeley! Please apply to @Berkeley_EECS or @UCJointCPH if you are intere…
2024-10-28 14:10:32 @_JacobRosenthal @LiamGMcCoy I agree, actually, and, if you haven't yet, I encourage you to check out the linked article! So many of these issues can be avoided with greater caution in deployment &
2024-10-27 18:30:34 RT @AP: Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said https://t.co/mRjYfdxWgR
2024-10-24 15:01:31 @IanArawjo @jeffbigham Lol yeah I used to have convos where ml researchers were clearly collecting human subject data, sometimes even doing *interviews*, and still insisting they did not require an IRB Also explaining that "I've never done it before" is not a reason to keep not doing it
2024-10-24 14:54:03 @jeffbigham depends on venue or the type of paper, but iirc the checkbox items are of the sort "did you do an irb and mention this in the paper"? Esp for data related work, I think it shifted norms to be as explicit as possible (ie. No one wants to deal w it coming up in ethics review lol)
2024-10-24 14:17:52 @jeffbigham fwiw in the ml context (eg. Neurips, ICML, ACL), before ethics reviews / checklists were implemented, no one mentioned IRBs because no one was doing them lol so perhaps a good problem to have? Aha
2024-10-24 14:14:05 RT @kanarinka: I’m so thrilled and honored that Counting Feminicide won the @amerbookfest award for best book in the Women’s Issues/Women’s…
2024-10-24 14:11:29 @kanarinka @AmerBookFest Congrats!!
2024-10-22 16:23:15 RT @canfer_akbulut: I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf ! We survey the state…
2024-10-22 14:33:08 RT @CamilleAHarris: I’m here at @AIESConf presenting on my thesis work for the 6pm poster session, if you’re here come say hi! #AIES2024 ht…
2024-10-21 23:16:47 RT @leahanelson: No one: Me: I have done a Science! My first-ever conference paper is now live at the AAAI/ACM Conference on Artificial In…
2024-10-21 22:45:40 @RishiBommasani Oh, nice! Yeah I recall that there were a few things we couldn't add to Model Cards bc of the legal context of Google at the time. @mmitchell_ai probably has more to say on this but it's been great to see efforts evolve to focus on other priorities beyond responsible innovation.
2024-10-21 22:28:35 This is super great to see - Model Cards was published by a team at Google, Datasheets published by Microsoft, Factsheets came from IBM, etc. While undoubtedly useful as AI transparency mechanisms, it's useful to reflect on these origins as they evolve into policy doc templates! https://t.co/y76WcYr6qE
2024-10-17 03:35:36 RT @amifieldsmeyer: Let's work together: I'm researching a new U.S. tech policy agenda that closes the gap between a few big companies and…
2024-10-16 01:01:28 @Wenbinters @ConsumerFed Whoa, congrats, Ben excited to see what you do there
2024-10-15 15:34:22 @deepfates Lol @sebkrier this is the 30% of adults / users you are defending
2024-10-15 14:32:15 @thegautamkamath @NYU_Courant And don't worry, I hear the pizza is not too bad in NYC of all places
2024-10-15 14:28:05 @thegautamkamath @NYU_Courant Congrats, Gautam!
2024-10-15 06:18:59 @Miles_Brundage Omg, congrats Miles!!
2024-10-13 21:22:55 @boazbaraktcs @MelMitchell1 Yeah I agree the actual wording of claims in the paper is not great
2024-10-13 16:30:12 @boazbaraktcs Imo what this paper is saying is not that there don't exist any tricks to get your model to solve the math problems at hand
2024-10-13 16:27:33 @boazbaraktcs I don't doubt that model perf can improve w prompting but when anyone says "we do this well on this benchmark &
2024-10-13 16:24:14 @boazbaraktcs Ops I missed this earlier - but I don't quite agree. I think user testing over a representative distribution of prompts is one thing
2024-10-13 16:02:29 @sebkrier @sleepinyourhat I just don't think those achievements are what people expect
2024-10-12 17:57:29 How has this been 2-5 years away for like 5-7 years now? Even longer if you forget the large language model thing and anchor on previous definitions of "AGI" (cnns &
2024-10-12 15:42:51 "A.I. cut the number of students deemed at risk.." Not the nyt finally using the active voice... for A.I. https://t.co/jZ5a6fClWL
2024-10-11 16:34:46 RT @sayashk: How can we enable independent safety and security research on AI? Join our October 28 virtual workshop to learn how technica…
2024-10-11 16:29:52 Having a static text question-answer pair for LLM evaluation increasingly makes no sense - what matters is what models do when important features (i.e. key inputs) &
2024-10-11 16:18:33 @boazbaraktcs Like, sure, we could do some prompt hacking to get to the right answer eventually but it's a bit unsettling that the baseline performance is kind of fundamentally misreported/ unpredictable, and certainly fails in ways we'd never expect a human to fail
2024-10-09 18:19:16 @PeterHndrsn Thanks, very helpful context!
2024-10-09 16:26:54 @PeterHndrsn What do you think of these remedies? I feel like providing external access to AI products is a very small fraction of anti-trust concerns, and was really surprised not to see more on the exploitation of their disproportionate control/ self promotion in advertising &
2024-10-04 14:27:29 RT @mona_sloane: Yesterday was a big day for #AI procurement, one of the most important ways in which accountable tech can be enforced (in…
2024-10-01 18:41:19 @mmitchell_ai Oh no, I'm so sorry to hear this what a loss for the community, I remember how much energy he had at every gathering
2024-10-01 16:12:02 Yay, @ruha9!! Very well deserved https://t.co/9HcwCdbCZb
2024-09-30 14:51:15 @KellerScholl Aha, no worries -- and I hope your family is alright!!
2024-09-30 14:23:03 @ShakeelHashim @LocBibliophilia @GavinNewsom Yeah I think you can be skeptical of the letter but also if his goal was corporate signaling, he has no reason to not frame his letter to appeal to that crowd. The fact that it isn't framed that way says that wasn't necessarily his only or even main audience.
2024-09-30 14:21:04 @ShakeelHashim @LocBibliophilia @GavinNewsom There was a diverse coalition (inclu open source folks, academics) that did not support this bill. The outcome wasn't necessarily a capitulation to industry - many of those non-corporate opponents had legitimate reasons to object, which are named as part of Newsom's rationale.
2024-09-25 19:44:00 I'm so glad to see the FTC leaning into this as a strategy since their stern "warning" shot last year (https://t.co/OYLLTcvXol). It'll be interesting to see how these particular investigations play out over the next few years...
2024-09-25 19:40:15 False advertising is such a powerful argument for removing harmful AI products from the market. In 2015 (!), at peak computer vision hype, this strategy led to the removal of skin cancer detection apps plagued w robustness, accuracy &
2024-09-25 16:26:17 I keep seeing AI policy takes from folks that have clearly not read the bill text. Which, honestly, I can understand - bill drafts are boring! And long! But the core of policy debates are anchored to specific details...which you're likely to overlook if you don't just read it.
2024-09-25 16:14:47 This is incredible - each of these cases are AI scams that have been alarmingly normalized in the past couple years (including DoNotPay, a "robo-lawyer"
2024-09-24 19:10:44 RT @SenMarkey: I’m live from the Capitol to introduce the Artificial Intelligence Civil Rights Act. It’s time to ensure that the AI Age doe…
2024-09-24 19:08:10 RT @mlittmancs: I got to help shape this document, providing guidance about how AI researchers collaborate globally. It was unveiled at the…
2024-09-24 19:07:38 @mlittmancs Wow, this looks incredible - thanks for your work on this!
2024-09-24 19:05:25 RT @geomblog: It's great news that the AI and Civil Rights Act has been introduced. Kudos to @SenMarkey and all the cosponsors. This has pe…
2024-09-24 15:03:52 @aylin_cim Congrats, Aylin!! Well deserved
2024-09-23 05:18:47 RT @geomblog: Great piece by @SerenaOduro from @datasociety on the importance of an expansive notion of AI safety that includes pressing co…
2024-09-19 14:20:48 RT @karen_ec_levy: Returning from perpetual Twitter hiatus to spread the word: @CornellInfoSci is hiring! Tenure-track hires at all levels…
2024-09-18 14:53:16 I really love this - it captures what most frustrated me when I took this class. Some problems are easier to formally model - these are the scenarios in which optimization methods "work". But there's so many other types of problems where we're pretty much just fooling ourselves. https://t.co/AYyN9LvuGI
2024-09-18 04:22:36 RT @mmitchell_ai: Can you imagine working in a company that not only supports you, but celebrates you? Feeling all kinds of gratitude for…
2024-09-18 00:56:54 @mmitchell_ai @huggingface Yay, Meg! Excited to see this
2024-09-13 22:14:08 RT @thegautamkamath: Have a nice paper on secure and trustworthy ML? Consider sending it to SaTML! Note that the new deadline is one day a…
2024-09-13 22:08:23 RT @megyoung0: Mike led our work in Seattle with community-based organizations like @ACLU_WA @DenshoProject @CAIRWashington. To honor Mik…
2024-09-13 22:08:08 @megyoung0 @MikeKatell Oh no, so sorry to hear this, Meg
2024-09-13 04:42:35 RT @charlesxjyang: And its live! Our Request for Info on DOE's Frontiers in AI for Science, Security, and Technology (FASST) initiative, wh…
2024-09-12 18:10:57 RT @mmitchell_ai: Honored to participate in Senators Blumenthal &
2024-09-11 23:31:52 RT @nmervegurel: Several new dataset and benchmark papers have been accepted to the DMLR Journal recently! Follow @DMLRJournal for updates
2024-09-09 23:37:32 RT @verityharding: Very cool press fellowship opportunity from @techpolicypress who do fantastic AI journalism—check it out: https://t.co/…
2024-09-09 19:52:03 RT @alokpathy: Hi all prospective grad students! Our Equal Access to Application Assistance (EAAA) program for @Berkeley_EECS is now accept…
2024-09-09 17:55:20 This is such a unique opportunity for anyone working at the intersection of CS, policy &
2024-09-09 15:38:38 RT @esme_harrington: So wonderful to attend this Data Fluencies workshop in NYC, exploring the data politics at the heart of AI! A wonderfu…
2024-09-06 15:35:37 RT @mozilla: While we couldn't save @CrowdTangle, we're happy to see that @Meta has now eased its Content Library API access requirements,…
2024-09-05 18:22:50 RT @minilek: https://t.co/lzX6PGyYiI Sep 16th application deadline. UC Berkeley "seeks applicants for four tenure-track (Assistant Profes…
2024-08-15 05:32:40 This reveals so much about how little we meaningfully discuss data choices in computer science education. Data are at the locus of pretty much every tech policy issue - labor, bias, environmental, copyright, privacy, security, toxicity, safety, etc. It is literally politics! https://t.co/EnbfBqKoVy
2024-08-14 20:02:01 + of course, I learnt so much working with @judyhshen &
2024-08-14 19:58:30 Anyways, it was a joy to get to finally dig into a topic like this that I've been curious about for a while now! Practically, I feel like data scaling is so much more complicated a phenom than "more data = better" &
2024-08-14 19:56:53 @Aaron_Horowitz Blame Reviewer number 2 you gotta give the gatekeepers what they want lol
2024-08-14 19:51:31 In those settings, there's a trade-off btw a perf dip due to increasing distribution shift &
2024-08-14 19:47:10 Or at least, it isn't *always* true.. there exist situations where adding more data can lead to *worse* model outcomes! We called this the "data addition dilemma" &
2024-08-01 20:25:32 RT @HellinaNigatu: Excited to be featured by CDSS!
2024-07-31 13:51:34 RT @weidingerlaura: Had an exciting day seeing the @WhiteHouse from the inside to talk about sociotechnical AI safety research! A star-stud…
2024-07-31 13:51:27 @weidingerlaura @WhiteHouse Incredible, Laura! Lol you're wearing a collar and blazer aha very proud
2024-07-26 14:35:25 RT @ChrisCoons: Yesterday @SenBillCassidy and I, along with 15 of our colleagues from both chambers of Congress, sent a bipartisan letter t…
2024-07-25 19:52:22 RT @BerkeleyISchool: HIRING: The University of California, Berkeley seeks applicants for four tenure-track (Assistant Professor) positions…
2024-07-19 05:02:46 RT @DrMetaxa: #FAccT25 will be happening in Athens, Greece! The GCs (myself included) are looking for PhD students interested in paid…
2024-07-11 11:42:32 Such a nice and comprehensive resource for policy-makers trying to make sense of LLM limitations in multi-lingual contexts. This impacts not just international user experiences, but also diaspora and immigrant experiences within the US (eg. https://t.co/o4aE2WIxuo). Important! https://t.co/zGmmqeKbjd
2024-07-11 00:55:15 RT @sarahookr: Does more compute equate with greater risk? What is our track record at predicting what risks emerge with scale? I don't…
2024-07-10 10:00:15 @KellerScholl The nurses were striking (amongst other things) over concerns for patient safety - I think that's a serious disconnect if one crowd thinks this will save us all and the workers involved are saying it's causing more harm.
2024-07-09 20:07:13 @Aaron_Horowitz Yeah, I could write a whole separate thread on the specific thing they're advocating for &
2024-07-08 17:33:11 RT @dfreelon: If you study TikTok, have a look at my newly updated Python package Pyktok--I just added a few features you might find useful…
2024-07-08 15:46:55 RT @GabeNicholas: New op-ed from me in @ForeignPolicy! The premise: to regulate AI effectively, we need information about how people ac…
2024-07-03 12:24:54 RT @charlesxjyang: For anyone interested in critical and emerging tech policy, my DOE office is hiring a fellow! Can't say enough good thi…
2024-07-01 17:53:35 RT @PeterHndrsn: Super important! And to be clear it's not just Loper Bright (the Chevron decision). Several other cases in the last week,…
2024-07-01 14:42:17 RT @tribelaw: The 6-3 Corner Post opinion by Justice Barrett multiplied the harm done by Chevron’s overruling by effectively holding that t…
2024-06-30 14:43:12 RT @pulitzercenter: Apply to be part of the third cohort of the AI Accountability Fellowships. Don’t miss this opportunity to report in-de…
2024-06-30 01:39:21 RT @reshmagar: Chevron has been overruled by #SCOTUS. This is a dark day for public health &
2024-06-24 17:35:19 RT @CohereForAI: Tomorrow check out @HellinaNigatu and her presentation with our community-led Geo Africa Group! Learn more: https://t.co/…
2024-06-24 16:37:55 RT @kevindeliban: Overdue focus on how low-income folks lose Medicaid and SNAP—with all the attendant devastation to their health—because a…
2024-06-24 14:35:03 Interesting to see an in-the-wild study on the use and impact of model cards! Even though there's clearly still a lot to do, it's great to see how far AI documentation has come. Very grateful for the leadership of @timnitGebru @mmitchell_ai in leading these efforts at the time https://t.co/4pgJ3egh8R
2024-06-23 16:27:24 RT @NeurIPSConf: NeurIPS 2024 is looking for AI Ethics Reviewers for submissions regarding risks and harms of the work. If you are inter…
2024-06-22 00:43:39 What I learnt from this (&
2023-05-22 19:43:56 @yonashav Yeah, happened in front of me in person at least twice .. I'm afraid that kind of behavior is fairly normalized within a certain kind of tech crowd
2023-05-22 17:50:10 Can't believe we live in a world where some would rather see an AI system as human before acknowledging the humanity of the marginalized actual people around them.
2023-05-19 22:08:33 RT @russellwald: There were multiple Senate AI hearings today. But only one focused on federal use of the tech. Congrats to my @StanfordHAI…
2023-04-18 07:07:57 RT @suryamattu: I am thrilled to finally announce this new partnership between Digital Witness Lab and @pulitzercenter. https://t.co/UN…
2023-04-16 14:35:30 RT @AlexCEngler: Good and interesting new letter from @AINowInstitute, @DAIRInstitute, and others on general purpose AI (GPAI) in the EU AI…
2023-04-14 20:05:36 @dcalacci Could not have said this better myself!
2023-04-14 17:10:23 +this reminds me of a hilarious recent interaction. Someone came in with the familiar argument: "This system is too big, too complex to audit" The rebuttal was gold - "wait, so why is it being deployed in the first place?" If a system can't be reliably evaluated, why allow it?
2023-04-14 17:10:22 I highly doubt anyone is advocating for companies to do nothing &
2023-04-14 17:10:21 It seems like there's some mainstream confusion on what algorithm audit policy actually involves - many "audit mandates" are really independent review mandates, meaning that the org produces an internal audit report that's shared with a hired third party or regulator to confirm
2023-04-14 13:26:16 I kinda see this as a false choice. Of course the onus should be on companies to provide data details &
2023-04-13 19:21:09 RT @AJLUnited: The government is still using IDme to access tax accounts after promising to stop after many complaints. Read @jovialjoy 's…
2023-04-13 18:58:10 RT @ambaonadventure: "GPAI models carry inherent risks.... (which) can be carried over to a wide range of downstream actors and applicatio…
2023-04-13 18:57:19 RT @ghadfield: OpenAI will pay you to join its ‘bug bounty program’ and hundreds have signed up—already finding 14 flaws within 24 hours ht…
2023-04-12 22:32:06 RT @b_schwanke: Still buzzing from yesterday’s @PittCyber’s convos w/ @NTIA and the really rich panel with @ellgood, @Wenbinters, @rajiinio…
2023-04-11 20:27:12 RT @PittCyber: Following comments from @DavidsonNTIA, a panel of experts including Ellen P. Goodman, @Wenbinters, @rajiinio, and Nat Beuse,…
2023-04-11 15:30:19 Excited to be on this panel today! Should be a great discussion about practical paths forward for auditing in AI regulation https://t.co/U1xhbC53rB
2023-04-11 15:29:27 RT @Wenbinters: Request for Comment on 'AI Assurance' (audits, impact assessments+++) from @NTIAgov is out!! https://t.co/lvued3yuXz 60…
2023-04-11 15:07:46 @emmharv @CornellInfoSci @allisonkoe @whynotyet Congrats, Emma! So excited for you
2023-04-11 03:36:30 RT @dcalacci: Friends! I'll be defending my dissertation tomorrow at noon. The talk is open to the public on zoom or in-person at the M…
2023-04-10 03:24:51 Also sometimes the best thing that can happen to a paper is to not get accepted! I've personally experienced this process of maturity via critique. IMO the quality of reviews were higher this year &
2023-04-10 03:17:49 RT @evijitghosh: In the wake of FAccT decisions, I’ve seen a few tweets similar to “If <
2023-04-09 17:01:45 RT @conitzer: I've been using a US version of this example (bar exam) but now this is being pursued in court in my native Netherlands! Als…
2023-04-09 17:01:39 @conitzer Thanks, Vincent!
2023-04-07 17:56:25 RT @mozilla: Securing a property can be a daunting task for renters, and many tenants face discrimination, keeping them from landing their…
2023-04-06 15:20:00 Interesting paper on an important topic! I first learnt about digital copyright by reading "The End of Digital Ownership" by @Lawgeek &
2023-04-04 20:23:03 RT @rachelmetz: A really smart, nuanced piece by @SashaMTL. As she notes, @timnitGebru, @ruha9, @rajiinio (and many more!) have pus…
2023-04-04 20:20:55 RT @dlberes: Love this story by @Saahil_Desai, which examines the unique human value of political polling and the limits of AI + big data.…
2023-04-02 15:09:44 @pcastr I know this isn't what you asked for but you can actually convert your Chromebook into a Linux machine pretty easily - I did this in uni &
2023-04-02 13:51:21 @scottniekum @pcastr @sethlazar Anyways, at minimum, you're right that people shouldn't be using "AI safety" as a pejorative - other people will also use "wokeness"/"AI ethics" as an insult in the same way, and I've never seen that kind of discussion yield anything productive. Truly sorry that happened to you!
2023-04-02 13:45:14 @scottniekum @pcastr Though tbh, practical collaboration between the two groups will be hard &
2023-04-02 13:41:37 @scottniekum @pcastr Though you're right that the reason they do this (ie. in order to unilaterally prioritize problems of system control) is something generally neglected by the AI ethics folks &
2023-04-02 13:39:23 @scottniekum @pcastr Yeah, I agree with this! I also don't like that "AI safety" now ~ AGI folks, &
2023-03-31 19:29:45 @Miles_Brundage LOL
2023-03-31 16:46:53 RT @RMac18: In Nov, a GA man was arrested for a crime in LA, a state he said he'd never been to. He spent 6 days in jail. We found his arr…
2023-03-30 19:57:30 @Aaron_Horowitz @mmitchell_ai "I'm sure they will be fine"
2023-03-30 18:15:00 @ImpossibleScott @mmitchell_ai Yeah, I say cynical because it's not the most generous characterization of those folks. It's not just billionaires that believe in the AGI doomsday scenario but also anyone that they've successfully convinced
2023-03-30 17:39:32 I've said this before but I really do hope for their sake, that there can emerge some safe forms of internal accountability within that group. Sometimes it can feel like watching the pied piper - it is fundamentally risky to not be able to question the decision-making of leaders.
2023-03-30 17:28:23 + as usual, even stranger than the offense is the community's complicated justifications &
2023-03-30 17:17:09 If you are casually advocating for air strikes in response to *anything*, then you clearly underestimate the tragedy of war. In the midst of all that's been happening in Ukraine/Russia, Israel/Palestine, it was truly horrible to read something like this: https://t.co/lxa38lvYPE https://t.co/oES494XsdT
2023-03-30 17:04:23 @cristina_elisav @mmitchell_ai Yep - and the same also applies to "AI safety", which can be broken down into even smaller sub-groups &
2023-03-30 15:45:33 @mmitchell_ai Again, this is my most cynical take lol, I get that it doesn't apply broadly. But I've had convos w/ participants in the AGI crowd where I've had to gently remind them that Black ppl, disabled ppl, etc exist, &
2023-03-30 15:41:00 @mmitchell_ai In my more cynical moments, I think there's another dimension of this as well - AI ethics folks typically talk about minority/marginalized populations that some technologists don't even want to acknowledge exists, while the AGI doomsayers are only really talking about themselves.
2023-03-30 15:36:05 @mmitchell_ai it's because of the functionality fallacy imo - AI safety fears are anchored to the myth shared by companies that the technology works and will only get better
2023-03-23 16:37:07 RT @PittCyber: Looking forward to a compelling conversation with @NTIAgov's @DavidsonNTIA and experts like Nat Beuse, Ellen Goodman, @rajii…
2023-03-22 22:28:55 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference (3) empowering civil society to scrutinize &
2023-03-22 22:26:32 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference For completeness, a summary of what I had said - there are potentially several ways for increasing participation in audit work: (1) involving broader perspectives in defining standards &
2023-03-22 22:12:35 @mer__edith lol truly wild
2023-03-22 22:11:40 RT @iamdaricia: The @mozillafestival has assembled a mix of truly engaging sessions this week but I want to highlight this panel on the alg…
2023-03-22 21:50:14 @mer__edith wow this is so interesting... how do you see the role of tech cos change over time from your view? Were they ever considered good guys? Or was the shift more from "under-estimated" to problematic?
2023-03-22 21:45:05 @mer__edith fwiw my reference is mid-2000s Global Network Initiative (GNI) type papers, where the main narrative was on government authoritarian abuse of the Internet. It's not that these issues don't exist, but even today, those movements have a lot of faith in the tech companies as allies
2023-03-22 21:41:30 @mer__edith That's interesting..some of my reading was that there was also a lot of concern for governments taking control of the internet in ways we don't really discuss today, ie. "China" vs. "US" narratives on internet ownership - some people seemed to see tech cos as allies in that fight
2023-03-22 21:38:29 @baricks @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference @bgeurkink
2023-03-22 21:37:58 @baricks @MarcFaddoul @dcalacci @seanmcgregor @Abebab @DJEmeritus Thank you! How am I not already following everyone? lol
2023-03-22 21:37:13 RT @Borhane_B_H: Fantastic end to the @mozillafestival OAT session: “The pain points for algorithmic audit tools to address are far from pu…
2023-03-22 21:36:44 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference also LOL @sherrying we will go to an art museum soon
2023-03-22 21:35:56 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference ops, got caught up in the discussion but a full recording of the discussion can be found here: https://t.co/TL4ZEoJN8L
2023-03-22 21:27:31 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference "There was a storytelling campaign about what people were experiencing on the platform", Becca adds. "I'm interested in - 'How do we develop audit ecosystems that are more of a feedback loop? How do we make things more participatory and bi-directional?"
2023-03-22 21:20:43 @MarcFaddoul @dcalacci @seanmcgregor @Abebab @FAccTConference Brandi talks abt how advocacy vs. academia reaches different audiences due to different "communication strategies" and how that impacts how things are being used. She mentions that the Mozilla youtube study was one of the few cited in legal work, despite academic work being avail
2023-03-22 21:19:06 @MarcFaddoul @dcalacci @seanmcgregor @Abebab He discusses how presenting the community keynote @FAccTConference allowed his team to interact and connect with not just CS folks but also lawyers, community leaders, etc.
2023-03-22 21:18:07 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Victor asks, "how do we translate audit outcomes into actual accountability?" Dan responds "Make it embarrassing" lol
2023-03-22 21:17:14 @MarcFaddoul @dcalacci @seanmcgregor @Abebab "The ideal thing to do for online platform audits would be to have something integrated into the browser for incident reporting... this would be something that scales completely."
2023-03-22 21:14:39 @MarcFaddoul @dcalacci @seanmcgregor @Abebab "Open source *audit* tooling is something we looked at in particular since it's something that has been relatively underserved to date. It's an important area but often overlooked." He mentions how many of the projects are still works in progress &
2023-03-22 21:12:56 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Now Mehan is discussing MTF and how "open source auditing and open source tools being used for auditing" is one way to address these serious issues of trust. "The challenges are not necessarily unique to audit tools, but reveal problems with open source development as a whole,"
2023-03-22 21:10:58 @MarcFaddoul @dcalacci @seanmcgregor @Abebab Becca Ricks adds, "As these researcher APIs and public APIs are being crafted in response to the DSA, we need to understand what effect this can have on the quality and type of audit work that comes out.." She discusses using the youtube API, but "not in the way they intended"
2023-03-22 20:39:23 HAPPENING NOW! https://t.co/wogfMnplSY
2023-03-22 18:05:43 This is actually so important. I was talking about this just the other day - the net neutrality movement was so worried that certain governments would attempt to take over the Internet, but all along.. it was the companies. A small set of cloud providers already own the Internet. https://t.co/OtpQXxZ6g5
2023-03-22 09:59:46 RT @schock: Freaked out by #GPT4? Wondering how to rein in powerful new #AI technologies? Proud to share I'm a coauthor, w/ @jovialjoy &
2023-03-21 20:48:43 OAT team members @OjewaleV &
2023-03-21 19:52:06 RT @OjewaleV: Looking forward to the panel session on Navigating the open-source algorithm audit tooling landscape at #MozFest tomorrow!…
2023-03-18 02:51:01 @DanielTrielli Congrats, Daniel!!
2023-03-17 15:50:21 Working with Abeba for OAT has been amazing - she has the kind of insights that make you pause everything and start over lol. Could not recommend anyone more! https://t.co/ZRIIR1YesE
2023-03-17 13:58:35 RT @augodinot: @rajiinio @mozilla @trackingexposed @ErwanLeMerrer Thanks ! Then you might also like https://t.co/DbdhR0Tgh7
2023-03-16 13:03:34 @RemmeltE @NathanpmYoung @xriskology @ruha9 @timnitGebru @emilymbender @safiyanoble hm fwiw they do cite "Algorithms of Oppression" &
2023-03-16 13:00:00 @npparikh Sure - but just so you know, FRVT (running since at least 2013) is much older than Gender Shades (published 2018). And FRVT only started measuring demographic effects in 2019, citing the followup work to Gender Shades as direct inspiration for that. Age != influence &
2023-03-15 18:35:27 Simple risk assessments are still some of the most catastrophic AI deployments in the country today. Yes, most of these are nothing more than simple linear regression ++, but their influence on decision-making drastically alters the lived experience of millions of Americans. https://t.co/2yfFLrZxX1
2023-03-15 15:02:51 RT @Aaron_Horowitz: New AP news story out based, in part, on our work. It's a good reminder of why we spent so long auditing AFST- real fam…
2023-03-15 14:55:18 RT @merbroussard: I haven't talked about it much until now, but I had #breastcancer recently. I'm fine now thanks to excellent, hi-tech med…
2023-03-14 22:45:23 RT @_anthonychen: Under-rated is how hard it is to create datasets that stand the test of time. And DROP from my labmate @ddua17 has done j…
2023-03-14 22:30:58 @nsthorat ohhh - oh yes, this is a great point. I used to think those using these models downstream would naturally be doing this kind of local testing but in esp any low tech / low resource setting that doesn't seem to be the case. It's hard to design and build a meaningful benchmark :(
2023-03-14 22:13:40 @npparikh Not claiming GS was state of the art (it wasn't designed to be)! But FRVT evolved post-GS to include demographic analyses + a lot has happened since then on evolving testing procedures to reveal previously ignored weaknesses in the tech. An illustration of how imp benchmarks are.
2023-03-14 22:11:14 @alexhanna my nightmare!!
2023-03-14 22:10:54 @nsthorat how to..? cliffhanger lol
2023-03-14 21:28:34 I feel the same way about these large language models. Let's be serious - we won't be using these models to pass the bar, and that's not even what they're pitching them for. The actual applications are much more complicated and completely untested for with the current benchmarks.
2023-03-14 21:26:34 For a while, facial recognition was pretty much considered a solved problem because the benchmarks at the time made it seem like a solved problem. Then, challenges like Gender Shades and such came along, revealing the problem was actually a lot more complicated &
2023-03-14 21:13:43 LOOL https://t.co/wNarXpTi5m
2023-03-14 14:28:08 @merbroussard Congrats!!
2023-03-12 19:56:02 @augodinot @mozilla @trackingexposed @ErwanLeMerrer Nice resource! Thanks for sharing
2023-03-12 19:55:30 RT @augodinot: @rajiinio @mozilla Nice to see @trackingexposed in the list ! Might want to add some of these in https://t.co/K1GzD3vHyj @Er…
2023-03-11 23:43:19 ICYMI @mozilla has announced the amazing cohort of grantees for the Mozilla Technology Fund! Such a diverse set of audit tooling projects being supported through this program. https://t.co/8F4Azr0Mse
2023-03-11 03:45:01 @russellwald Yeah I agree there's a practicality to the approach but also wondering if we can get more ambitious about how to get to meaningful oversight! Lots of other technical industries (eg automobile, medical devices) are less reliant on industry cooperation. Though some (aerospace) are.
2023-03-10 15:29:48 @BlackHC Dang, sorry to hear - this is all kinds of disappointing.
2023-03-10 00:29:00 @BlackHC Wait, might be a silly question but why not work together on the idea?
2023-03-09 23:42:03 RT @dinabass: "“We’re talking about ChatGPT and we know nothing about it,” said @huggingface's Sasha Luccioni, who has tried to guesstimat…
2023-03-09 07:21:07 @andrewthesmart Ohh hm this looks interesting - thanks for sharing, will check it out!
2023-03-09 03:53:45 @ziebrah I wonder if we are reviewing the same paper rn lol
2023-03-09 03:53:08 @__lucab @UCBerkeley @GoldmanSchool @CITRISPolicyLab Wahoo! Let's get a coffee whenever you're around &
2023-03-08 18:09:51 @ecrws Yeah, this is an interesting point - I think there was the same critique levied at the use of ethical licenses for open source AI projects. A step in the right direction but a limited intervention, for sure.
2023-03-08 18:07:02 @jdp23 lol exactly. Lots of great work happening internally ofc, but also lots of corporations that can't be trusted at face value to provide reliable info on these things.
2023-03-08 18:04:47 @ziebrah why are we talking about regulation at all if incentives are so aligned??
2023-03-08 18:04:26 @ziebrah lol this is it!!
2023-03-08 18:03:52 @BlancheMinerva Also, you won't get any resistance from me on the "we need external evaluation" front, but afaik, this isn't what OP is proposing. Here, as w/ most industry consortium attempts, they talk about "consulting" independent researchers to set standards, not allowing them to get access
2023-03-08 17:45:31 @jdp23 Yeah it's not like Facebook hasn't already misrepresented data presented to external stakeholders in the past or anything... lol https://t.co/S43y4wkFoQ
2023-03-08 17:33:25 Tech policy proposals that depend heavily on the voluntary cooperation of the tech companies being regulated are so frustrating to me. I get that there are many cases where "incentives align" but without meaningful external oversight, I'm immediately suspicious.
2023-03-08 17:21:09 @ecrws Curious what you mean here? As in "of course - this is the bare minimum" type thinking or something else?
2023-03-08 17:18:42 @BlancheMinerva lol it's not a take - it's just that people kind of tried the "self-regulatory consortium" thing before but when it happened no one really paid attention to what they had to say. My take is that regulators and civil society should be involved in setting actual legal guardrails.
2023-03-08 15:47:13 Most people forget that this already kind of happened last summer - Cohere, OpenAI, and AI21 Labs released a joint statement on guidance for large language models but it mostly slid under the radar: https://t.co/FFzR6Lblvm https://t.co/8gvmywYt27
2023-03-08 01:07:15 RT @glichfield: Today @WIRED runs the final two instalments in "Suspicion Machine," our joint investigation with @LHreports into how algori…
2023-03-08 00:35:20 @OjewaleV @brianavecchione @mozilla
2023-03-07 01:46:01 @acidflask @NeurIPSConf Yay! Congrats Jiahao
2023-03-06 23:56:19 RT @gabriels_geiger: In June of 2021, I sent a public records request to the city of Rotterdam. I wanted the code for an algorithm the city…
2023-03-06 14:33:46 I was more excited about Victor's acceptances than my own. So excited to see that our research assistant for the @mozilla OAT project, @OjewaleV is headed from Nigeria to do a CS PhD in the US! One to watch!! https://t.co/qLsG6V9oUA
2023-03-06 02:57:48 Big fan of Rishi's work on this! If your data happens to be mislabeled/misunderstood by one foundation model that's used widely, then you're kind of screwed. https://t.co/5SE16OzvlN
2023-03-06 02:13:28 RT @irenetrampoline: One paper to recommend on societal bias in ML, health, and science? - @judywawira: "Reading race" Banerjee et al - @…
2023-02-28 03:04:42 @hlntnr
2023-02-28 02:59:07 RT @random_walker: The FTC says it will ask a few q's about AI-related claims: –Are you exaggerating what your AI product can do? –Are yo…
2023-02-28 02:56:56 Ah, great news! https://t.co/6RCaiFcElg
2023-02-20 15:53:51 @kdpsinghlab @RoxanaDaneshjou Curious - what do you consider reasonable use cases?
2023-02-19 15:07:56 Some version of "build an FTC office to focus specifically on tech issues" has been pitched in many of the current bills on algorithmic oversight - Algorithmic Accountability Act, PATA, Digital Services Oversight &
2023-02-19 14:14:11 RT @stephtngu: We are proud to announce the creation of the Office of Technology at @FTC, a team that will provide technical expertise acro…
2023-02-17 17:15:27 RT @geomblog: So much tech/policy news: the latest is @FTC setting up a new office of technology to help with FTC actions. This is amazing.…
2023-02-17 15:30:52 RT @harlanyu: This is a big deal: today's new @WhiteHouse EO on racial equity instructs federal agencies to affirmatively address emerging…
2023-02-15 20:16:03 @akanazawa Congrats, Angjoo!
2023-02-15 16:30:45 RT @conitzer: Hurray, the call for papers for the AI, Ethics, and Society conference @AIESConf is out! Deadline March 15, conference August…
2023-02-15 15:31:15 @chrmanning @npparikh Also...didn't you &
2023-02-15 15:28:23 @chrmanning @npparikh GPT-3 was a *controlled* release, mediated via the OpenAI API, and I do believe that oversight prevented many inappropriate uses. IMO the concerns about GPT-3 weren't overblown - I think they informed caution and concrete measures that protected us from those potential harms.
2023-02-15 00:34:48 @zacharylipton @andrewgwils @STS_News has actually written about this. He called it "criti-hype": https://t.co/0rWwo5zzHp
2023-02-14 14:34:10 Been thinking recently about expectations of evidence for hype vs critique. AGI people are literally operating in a realm of pure speculation yet they are so easily believed. Others will spend months on the ground, only for the concerns they surface to be dismissed as anecdotes. https://t.co/ZqArPDRg1z
2023-02-14 13:38:38 @RWerpachowski @mikarv The subsequent legal documents are all directly or indirectly derivative from that early work in 2020 - more importantly though, I shared the article bc he discusses an unchanged culture in EU policymaking of relying on a certain set of partial &
2023-02-14 13:25:49 @RWerpachowski Lol, please don't take my word for it. @mikarv looked into the "expert group" that shaped that directive - far from a qualified &
2023-02-14 13:16:51 @RWerpachowski Hm, not true from my experience. I attend many policy roundtables &
2023-02-14 13:13:36 Now, this is not to say methodology and rigor do not need to improve - I will be the first in line to challenge the quality of evidence we typically tolerate - but it's clear to me that this isn't a community to be dismissed - their perspective is an essential counter-balance.
2023-02-14 13:05:44 Something many often don't consider when discussing "Ethical AI" is the power differential - there is a multi-billion dollar apparatus marketing this technology as flawless and only recently has a critical mass of scholars &
2023-02-14 12:54:10 A false dichotomy. "Generative AI" can be fun &
2023-02-11 04:54:29 @NicolasPapernot @satml_conf @carmelatroncoso Congrats on putting this together!
2023-02-10 16:38:54 RT @MichelleCalabro: Thinking about shared responsibility between people and the systems we create. “It’s so much easier to point to an al…
2023-02-09 17:45:54 RT @dfreelon: As announced last week, Twitter will eliminate free access to its APIs this Thu (Feb 9). This thread collates alternative sou…
2023-02-08 19:56:40 RT @random_walker: Fascinating audit of social media "raciness" classifiers that don't understand context and are massively biased toward l…
2023-02-08 17:47:30 AI fairness for social bad https://t.co/meCkTTh7mK
2023-02-08 17:46:17 RT @b_mittelstadt: New piece in @WIRED on the harms of algorithmic fairness in #AI &
2023-02-07 21:35:36 RT @togelius: Nice article in @TheAtlantic about AI game playing and what it's for, including quotes from @polynoamial, @rajinio, @yannakak…
2023-02-05 22:28:19 Lol from my experience, this isn't only happening with students... https://t.co/aluYmJeBgx
2023-02-05 15:11:45 @yoavgo "Human-level" intelligence as a goal is strange though - there's many useful things humans are horrible at &
2023-02-05 14:51:11 Most incredible thing about having Alondra Nelson at the helm of OSTP was the impact of her socio-technical expertise. Here was someone that took time to deeply understand the science *and* the people - even as an outsider, I could see how much that benefitted her policymaking. https://t.co/OOr2fxsdVT
2023-02-05 14:37:14 @AlondraNelson46 @WHOSTP @POTUS @VP Thank you so much for your service!! An inspiration for years to come
2023-02-04 16:40:32 RT @jimtankersley: NEW: Black taxpayers are 3-5x more likely than everyone else to be audited by the IRS, a product of algorithmic discrimi…
2023-02-04 16:32:55 A pioneer! Thank you so much for your contribution to bringing methodological rigor, and a relentless perseverance to the tech accountability space! https://t.co/96A5c9AlcN
2023-02-04 09:43:22 RT @LauraEdelson2: The deadline to apply for TechCongress has been extended to Feb. 16! This program is doing so much to bring technical ex…
2023-02-04 09:10:43 RT @BelferSTPP: Thanks to @rajiinio for joining our AI Cyber Lunch on Wed. Her talk highlighted the urgent need for oversight of widespre…
2023-02-02 23:49:54 @ziebrah Lol didn't you write a thoughtful blog post on exactly this topic?
2023-02-02 23:46:57 RT @yaleisp: Thank you so much @rajiinio for sharing your wonderful work on audits and accountability for automated decision systems with u…
2023-02-02 11:09:30 One of the tools in the current Mozilla Technology Fund cohort. Very cool! https://t.co/K5yb3GIsUx
2023-02-01 22:39:05 @nsaphra @vonekels @ryanbsteed @emilymbender @mmitchell_ai @SashaMTL @enfleisig Lol thoughtful twitter takes are an unappreciated art these days
2023-02-01 22:31:58 @vonekels @ryanbsteed @emilymbender @mmitchell_ai @SashaMTL @enfleisig @nsaphra Also almost forgot the incredibly thoughtful @ria_kalluri has also done some recent work on this as well- clearly lots of great folks to highlight in this space! https://t.co/Cb5cZx9v4c
2023-02-01 22:27:36 @vonekels @ryanbsteed And @emilymbender @mmitchell_ai @SashaMTL have been warning about LLMs for a very long time. @enfleisig &
2023-02-01 22:20:33 @vonekels And @ryanbsteed wrote about the over sexualization of generative AI models long before Lensa was even a thing: https://t.co/BFKYxAsZDE
2023-02-01 22:15:23 By the way - if you are a journalist looking to make sense of the bias issues with generative AI, I highly recommend speaking to those that have been thinking about this much longer than I have: @vonekels for example has an excellent paper on bias in face generation models. https://t.co/joQ1z7x3CH
2023-02-01 22:06:11 Giving a (hopefully shorter) version of the talk at Yale tomorrow as well for those that happen to be around! https://t.co/nxIPiypgo5
2023-01-31 02:32:53 RT @FAccTConference: To all the PhD students and researchers working on fairness, accountability and transparency (or related topics) in re…
2023-01-13 23:05:12 RT @alesherasimenka: New Research Just out in Journal of Communication: One of the first academic studies uncovering the economy of d…
2023-01-13 23:05:01 RT @CatalinaGoanta: Fascinating research on the monetization of misinformation, which zooms into public health misinfo to unveil economic i…
2023-01-12 23:39:17 RT @jachiam0: Somehow it doesn't seem to occur to them that these beliefs are offensive because they're not only wrong but also immensely d…
2023-01-12 18:35:08 RT @jlkoepke: the EEOC's Draft Strategic Enforcement Plan squarely focuses on the use of algorithmic systems throughout the hiring process…
2023-01-12 18:32:23 RT @NicolasPapernot: Only a few seats left for SaTML 2023! Join us to listen to our keynote speakers @timnitGebru &
2023-01-12 18:17:35 @Akumunokokoro @sshwartz @chrmanning Hm do you have any insight into why they are so uncooperative with regulators? That behavior is so unusual and aggressively defensive, and is what raised my suspicions about them years ago
2023-01-12 17:47:02 @Akumunokokoro @sshwartz @chrmanning Interesting - though I'm not sure all drivers are aware of their liability to the extent they'd need to be to properly supervise. Also even a non-automated vehicle manufacturer still has requirements. I don't know if this completely excuses the more outrageous Tesla car failures.
2023-01-12 07:20:37 Final word: the fact that pretty much everyone agrees that this is an incomplete, partial apology, but the divide is between "yes, that is unacceptable" and "let me try to convince you that your race is intellectually inferior" is really throwing me for a loop right now.
2023-01-12 07:05:00 @TheKoopaKing1 I'm not sure what you're expecting, but I won't be debating with someone about the supposed intellectual inferiority of my race. Bostrom is not talking about education access, you know that. Feel free to agree with Bostrom, but for many these beliefs are prejudiced &
2023-01-12 06:54:12 @TheKoopaKing1 @jordan_uggla Jordan did not come into this thread to argue with anyone but was helping to translate the text for those with screen readers. He chose not to type out a slur and that's a completely reasonable thing for him to do.
2023-01-12 06:42:25 @RockstarRaccoon I posted the crop not just to point to the fact that the email is horrible, but to highlight that this is not just about language. I'm quote tweeting the original post, it won't be hard for folks to find his comments as well?
2023-01-12 06:36:33 @TheKoopaKing1 @jordan_uggla Please ignore this - and thank you @jordan_uggla for writing this alt text to make the conversation accessible.
2023-01-12 05:01:01 @MichaelD1729 Thank you for saying this.
2023-01-12 04:48:14 @flotsam70272377 @nsaphra @thebirdmaniac Also, saying this before I log off - you can be racist and donate money to Black people or pity them or even be nice to them. The only criteria for racism is seeing a fundamental difference and choosing to imagine one group as superior to another because of their supposed race.
2023-01-12 04:45:36 @flotsam70272377 @nsaphra @thebirdmaniac I understand that word is socially charged and this may be upsetting to you, but if you share those beliefs, you need to understand that those are by definition prejudiced beliefs. And for you + others in an EA community, that's something you need to either denounce or admit to.
2023-01-12 04:42:17 @flotsam70272377 @nsaphra @thebirdmaniac Racism is about believing that one race is superior to another. Bostrom's stated beliefs, which he still does not explicitly denounce, are about racial differences in intellect, implying that he believes some races are more intelligent than others. That is racism, quite literally
2023-01-12 04:32:16 Adding this to clarify that my goal is not some unjust character assassination of Bostrom. It's upsetting that someone would write this at all but what is *most* upsetting is how he currently remains equivocal about beliefs that are harmful &
2023-01-12 04:26:51 @flotsam70272377 @thebirdmaniac @nsaphra This is what I find very unsettling. If EAs are also equivocal about the statement "blacks are stupider.." then that is good for all of us to know. If they do not believe this, they need to denounce this and understand his apology is incomplete.
2023-01-12 04:24:30 @flotsam70272377 @thebirdmaniac @nsaphra This is not about his past comments but his present ones. In his present day apology, he does not denounce the first statement of the original email and remains equivocal about something that is understood to be prejudiced and harmful.
2023-01-12 04:07:53 @flotsam70272377 This is understood to be a harmful and prejudiced belief. I won't say more than that, but if this does not represent what most EAs believe then you need to denounce this. If it does, then that is good for all of us to know.
2023-01-12 04:06:16 @flotsam70272377 If this community won't hold him accountable for that, I'm not sure if there's anything left to say here. If that first statement is something members of the EA community actually believe then I am here to inform you that it's understood to be a prejudiced and harmful belief.
2023-01-12 04:04:32 @flotsam70272377 Now that he does apparently know better, he still currently does not apologize for the initial statement, and is in fact quite equivocal about it in his statement.
2023-01-12 00:53:56 Anyways, this is my cue to log off for a while. I literally can't stomach seeing something like this, and I have no interest in engaging with whatever excuses him and his followers come up with. That first statement is *racist* - not to mention deeply hurtful and dehumanizing.
2023-01-12 00:53:04 This is the old email that Nick Bostrom, a leader in Effective Altruism, is now apologizing for. Horrifying, yes, but I assure you his "apology" is worse - he walks back on his "invocation of a racial slur" without addressing the initial statement of a false &
2023-01-11 17:49:33 @RoxanaDaneshjou lol love this
2023-01-11 16:41:33 I could go on and on honestly. Most recently, NTSB is still fighting them to address the safety recommendations from over *five years ago*: https://t.co/uJFUFts5Fy
2023-01-11 16:41:32 One of Tesla's big arguments at the time was that "no one could prove autopilot was on at the time of collision", and ofc a few years later we find out this: https://t.co/76DhBenYW8
2023-01-11 16:41:31 But articles giving users tips on how to "work around" Autopilot's clearly dangerous failure modes is starting to sound like those advocating for ad-hoc car adjustments to fix the '60s Corvair steering issues. At some point, it's clear that the problem is the car, not the driver.
2023-01-11 16:41:30 Like, yes, human users do hold some serious responsibilities, esp in the context of AI use. If you're going to turn an automated feature on, you typically need to monitor it and should not be negligent. @aselbst has actually written about this here: https://t.co/HWyAFX9BdZ
2023-01-11 16:41:29 The talk about these crashes is frustrating. Tesla is not a neutral actor &
2023-01-07 09:27:54 @KLdivergence Congrats, Kristian!
2023-01-07 09:09:04 @kashhill Congrats - I can't imagine how difficult it must have been to work on this story!
2023-01-05 21:46:09 @AmandaAskell @wsisaac @iamtrask Fair enough!
2023-01-05 21:45:17 @wsisaac @athundt @AmandaAskell @iamtrask Hm, I see what you mean - I'm not sure I agree but I also don't have a complete picture either. I guess I'm leaning towards being more cautious without the evidence, but understand those that see things differently.
2023-01-05 20:24:41 @athundt @AmandaAskell @wsisaac @iamtrask Also, I'll add that in my experience from an academic context, text does not need to be copied verbatim for it to count as plagiarism - in fact, in many failed attempts to cover their tracks, plagiarists will try to weakly re-phrase the text, though the content is the same.
2023-01-05 20:21:11 @wsisaac @iamtrask @AmandaAskell If they hadn't done anything, there would be a lot of cases of plausible deniability, a lot of "I didn't realize this was plagiarized or counted as plagiarism" and "I don't see anything about this technically being against the rules", so I understand their move to draw red lines.
2023-01-05 20:18:51 @wsisaac @iamtrask @AmandaAskell Also, by saying something about it, people now know which uses of LLMs are not endorsed - ie. that generating original text for papers is not something they should consider lightly, and that there are serious risks/consequences associated with the use of these tools in particular
2023-01-05 20:16:22 @wsisaac @iamtrask @AmandaAskell I feel like it isn't immediately clear to those using the LLMs that they are on the hook if their tools lead them to plagiarism (eg. using an LLM, they may not know of or recognize the source). This policy clarifies that this is a risk &
2023-01-05 19:07:29 RT @johnfsymons: It has happened. Just rejected a paper where format of large chunks of text indicated sloppy use of #LLM by the authors. C…
2023-01-05 19:06:06 @wsisaac @iamtrask @AmandaAskell +I'd argue that it was wise to get ahead of things &
2023-01-05 19:00:08 @wsisaac @iamtrask @AmandaAskell Hm, I'd argue it is an ethics matter - there's a research integrity issue at play here if people are generating content in papers from a large language model and potentially plagiarizing, compromising on correctness, etc.
2023-01-05 18:37:41 @KordingLab Hm what do you mean by "cross-cutting" thinking?
2023-01-05 18:09:47 This was such an interesting conversation and it's great to see it organized this way - ultimately, articulating clear community expectations around the ethical use of these LLM tools is important, and I'm glad to see ICML starting that discussion: https://t.co/vwTYrRgeGp
2023-01-04 16:44:25 @leonieclaude @RepublikMagazin @syllabus_tweets @AnnaNosthoff oh, congrats lol glad I could play a small part in your success here
2023-01-04 13:03:24 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran yeah, I've learned a lot from this thread on how people are using ChatGPT - I hadn't previously realized how much non-native speakers were already finding it helpful as an enhanced Grammarly
2023-01-04 13:00:17 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran curious - how are you thinking they should amend the policy? (in the context of this year, with weeks to the final deadline)
2023-01-04 12:58:19 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran We have proof that ChatGPT generates spam, though we're unsure how likely that spam is to fool reviewers, etc. I understand why you may not agree, but I do think ICML organizers giving themselves more time to prepare for and discuss how to handle LLM-enhanced papers is reasonable
2023-01-04 12:55:23 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran But I think my position is still the same. LLMs can be used for a variety of things outside improving text, adding that context is helpful but it's unclear what the reviewer/ACs, etc are supposed to do with that info &
2023-01-04 12:54:06 @yoavgo @RWerpachowski @boazbaraktcs @_onionesque @PreetumNakkiran Oops, yeah you're right, I think I misunderstood his tweet.
2023-01-04 12:46:27 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran There's only a couple weeks before the submission deadline! And there's a lot of work still left to do to recruit reviewers, set up bidding, etc. I agree something more democratic would have been the right approach, but they didn't have time and had to make a call very quickly.
2023-01-04 12:43:33 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, I see what you mean, but I admit I still worry about the chaos that would unleash, possibly giving an implicit green light to applications beyond the good use cases... Thanks for sharing thoughts on this though, it gave me a lot to think about!
2023-01-04 12:37:31 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Because since it wasn't designed for X, it does many other things, some of which are actively harmful. I don't have a problem with anything - I'm not denying that this can be a useful tool, I just understand the perspective of those that choose to be cautious.
2023-01-04 12:33:22 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran ok I definitely did not say this - I said improving communication skills in English will be helpful for getting more comfortable in English-speaking research communities. Even ChatGPT doesn't change this unfortunately, and this is why Grammarly is designed as an education tool.
2023-01-04 12:22:52 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran I like @boazbaraktcs 's proposal, but that would require months of setup &
2023-01-04 12:20:13 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, I understand that. But I'm curious what you're thinking would have been a better position for them to take this year, under the short notice. Deadline is in just a couple weeks - would setting no rules not have led to chaos? Is there another approach that would fare better?
2023-01-04 12:15:01 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Yeah, this is my suspicion of what I think might be best for language learning, but I definitely didn't mean this as advice. Anyone can do what they please! My point was that Grammarly is designed explicitly as a learning tool
2023-01-04 12:10:37 @boazbaraktcs @ducha_aiki @_onionesque @PreetumNakkiran Though I'll say I have no idea how anyone would restrict the use case for the current version of ChatGPT - ie. enforce using it for x but not y. That uncertainty &
2023-01-04 12:05:43 @boazbaraktcs @ducha_aiki @_onionesque @PreetumNakkiran Ok I see what you mean. I agree with that! Main point is that there should be some level of consequences for spammers once caught - though you're right that current policy does not differentiate adequately from other, more benevolent LLM use
2023-01-04 11:58:00 @thegautamkamath @_onionesque @boazbaraktcs @PreetumNakkiran ok, yeah, this is a great point!
2023-01-04 11:57:01 @RWerpachowski @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran Oh, I didn't realize this! That's disappointing to hear. I don't agree with that.
2023-01-04 11:56:05 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran I'm not giving advice, just trying to explain the differences that people see in the two tools. Others made an analogy to Grammarly, and I'm pointing out that this does not always hold. There are differences in these tools and reasons people are more worried about ChatGPT.
2023-01-04 11:52:49 @ducha_aiki @boazbaraktcs @_onionesque @PreetumNakkiran Amazing to hear! But is something like Grammarly not providing the same support? Is this just a quality difference? My concerns are of ChatGPT's ability to generate content from thin air - if we could ensure it could be restricted to use as a Grammarly 2.0, that's less worrying.
2023-01-04 11:48:54 @yoavgo @boazbaraktcs @_onionesque @PreetumNakkiran I get that, which is why I like tools like grammarly but this isn't really the main way I see chatgpt being used: https://t.co/goRvCcYWRA
2023-01-04 10:54:22 RT @FAccTConference: Reminder: deadline approaching for #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan…
2023-01-02 02:21:52 @ameliovr @tdietterich Ah actually seeing the other replies, I think others have addressed this - interesting discussion!
2023-01-02 02:19:26 @ameliovr @tdietterich Yeah this is my understanding as well - curious if you're seeing things differently @tdietterich ?
2022-12-29 18:30:07 @Abebab I've been thinking a bit about this lately - content moderation avoidance is very much a thing for malicious actors hoping to spread misinformation: https://t.co/In0mMqaM7y
2022-12-28 14:15:01 @yoavgo @evanmiltenburg sorry, not sure what this is referring to?
2022-12-28 09:53:47 @RWerpachowski @yoavgo @evanmiltenburg I agree with this, but not everyone is on the same page here. I've already heard of startups trying to use ChatGPT for mental health counseling and medical advice, both very high stakes applications. When released publicly without guardrails, that kind of thing will just happen.
2022-12-28 09:51:11 @RWerpachowski @yoavgo @evanmiltenburg Depends on your moral philosophy! Personally, I'm opposed to a strictly utilitarian view because consequences for that one person could be quite severe or unjust (eg. medical misinfo leading to death, etc.) and those experiencing harms are already the most vulnerable out there.
2022-12-28 09:42:21 @RWerpachowski @yoavgo @evanmiltenburg And the problems we see now are valid justification to delay widespread release. I personally don't think that's an unreasonable ask and don't quite understand the strong resistance to that position.
2022-12-28 09:40:45 @RWerpachowski @yoavgo @evanmiltenburg Fair! I'm perhaps conflating your position with others I've encountered recently - but imo the difference between some of the recent releases and what we'll see in deployment is not that great. What we see as issues now will only get worse given wide public releases of the tools.
2022-12-28 09:36:51 @RWerpachowski @yoavgo @evanmiltenburg Yeah, we're on the same page there - this is why I work on audits, to collect empirical evidence of real harms. I also know some warnings are being heeded &
2022-12-28 09:32:18 @RWerpachowski @yoavgo @evanmiltenburg My understanding is that the named products are beta releases - they release the products as such so that they can effectively stress test the product before wider release. I think you're anticipating huge changes for an "actual product" but historically that's not been the case.
2022-12-28 09:28:33 @RWerpachowski @yoavgo @evanmiltenburg Oh, I really didn't mean things that way! My point is that you don't seem like you're going to change your mind with the information we have, so best to end the conversation here, since it doesn't seem productive. You &
2022-12-28 09:24:48 @RWerpachowski @yoavgo @evanmiltenburg Trust me, Google does know - it is just not public.
2022-12-28 09:22:59 @RWerpachowski @yoavgo @evanmiltenburg "We" don't know anything? This is not how technology works - one can't make assumptions about the expertise of every user, esp when a tool is broadly available. I can counter your anecdotes w/ those of novice programmers unable to identify serious silent bugs generated via codex.
2022-12-28 09:19:43 @RWerpachowski @yoavgo @evanmiltenburg Either way, you seem committed to your position in spite of the available evidence, so I'm going to tap out of the conversation and wish you the best!
2022-12-28 09:18:10 @RWerpachowski @yoavgo @evanmiltenburg Not sure what you're expecting to see as a difference in the deployment of this product as a "finished product" vs a "research preview" lol - the same user interactions occur in both cases &
2022-12-28 09:13:00 @evanmiltenburg @RWerpachowski @yoavgo lol thanks. Though it's mostly the work of others as well - the cautiousness of the field isn't a given, and is the result of advocacy &
2022-12-28 09:07:21 @RWerpachowski @yoavgo @evanmiltenburg Codex is deployed via Github, ChatGPT is deployed and we know these are problems clients actually experience. I'm not sure what you're expecting will magically be different in deployed products but it's not a remarkable change in circumstances - the harms are clearly still there
2022-12-28 09:04:46 @RWerpachowski @yoavgo @evanmiltenburg Only the BERT seizure went viral on Twitter - issues with negation continue to happen today in search, but are not as publicly reported. Also "not of deployed products" is false - GPT-x is deployed, Galactica was deployed and both have been found to obviously have these serious issues.
2022-12-28 08:54:29 @RWerpachowski @yoavgo @evanmiltenburg + we don't need to deploy something to anticipate realistic harms that can arrive as a result - that's safety 101. Those building these systems know of misinfo, bias, etc (see: https://t.co/PQFob7seSQ). Pretending this won't have disastrous consequences upon deployment is naive.
2022-12-28 08:50:19 @RWerpachowski @yoavgo @evanmiltenburg There's quite a lot of evidence of the harms they've already caused - especially given the actual use of BERT to some degree in Google search. We mention quite a few in here: https://t.co/FYlGlWilLg
2022-12-28 08:39:11 @yoavgo @evanmiltenburg Though personally, my take has always been "this should be built differently" or "this should not be deployed without being evaluated for x or y or z" - people are just worried about the harms that come from careless deployment, I doubt many take the stance of "never build this".
2022-12-28 08:35:52 @yoavgo @evanmiltenburg Yeah, I think there's arguments of the kind "perhaps our energy is better spent elsewhere / on different types of problem, since it doesn't seem like this is a good idea to build" and I wonder if impossibility proofs are necessary to make such arguments persuasive (probably not).
2022-12-27 22:06:57 RT @rmichaelalvarez: Next month we will launch a new initiative at @Caltech, the Center for Science, Society, and Public Policy. I'm excit…
2022-12-26 12:14:21 @yoavgo @CriticalAI @emilymbender @EmilyBender I find this line of reasoning v strange - at minimum, the paper at the core of the article quite clearly outlines the involved argument, ie. there are known modes of engagement in user interactions for information retrieval &
2022-12-23 13:25:42 @mchardcastle @bayesianboy +1, the product liability lens is present in the current EU AI Act draft but missing in a lot of US policy discussions which disproportionately focus on bias. That being said, there's definitely some room to consider functionality under disparate impact: https://t.co/PSNtGap1S5
2022-12-23 13:20:10 @Miles_Brundage @zhansheng @tshevl @AllanDafoe @Abebab LOL
2022-12-21 23:57:35 @littlebitofawk Completely entitled to your perspective - I was very careful in that tweet not to tell people how to vote! We can acknowledge wins regardless
2022-12-21 18:26:23 @realCamelCase lol is this a joke
2022-12-21 18:24:16 Kind of unreal how much the union has won for UC student workers through this strike - if the current contract is ratified, in a couple years, it will result in an over 50% wage increase! Very grateful to those that have been tirelessly organizing &
2022-12-21 13:17:15 RT @mozilla: Today’s social media status quo isn’t cutting it, so Mozilla is exploring an alternative. In early 2023, Mozilla will be testi…
2022-12-20 03:07:55 RT @msbernst: "Let's think step by step" increases the bias of large language models. Avoid if your task involves social inferences! Work…
2022-12-17 19:20:35 @NerdyAndQuirky @pcastr also, for ranting's sake: my issue isn't that RL benchmarks are *simple*, it's that they seem completely *disconnected* - they don't even pretend to be abstractions of real world problems So yeah I'm critical of eg. Meta's Habitat - nice graphics don't fix task design issues!
2022-12-17 19:15:52 @NerdyAndQuirky Not sure what a good reference to this problem is, because no one likes talking about this in machine learning. I wrote a position paper about the issue once: https://t.co/hMqXsydjar Wonder if there's anything RL specific? @pcastr probably has a clue of where that convo is at!
2022-12-17 19:13:08 @NerdyAndQuirky But I think the bigger issue they have is in (2) task design. Like, the benchmarks the community obsesses about making improvements on are completely arbitrary, typically just any random game with a clean set of rules, rewards and fixed actions (eg. Chess, Go / Atari, Dota, etc.)
2022-12-17 19:10:14 @NerdyAndQuirky Sure. (1) Most of RL papers are not reproducible research, and I believe that's what's concretely holding them back the most: https://t.co/J2Xw4jRJM4 There's been some recent progress on getting things to a better state, but long road ahead - see: https://t.co/G9F8JGtRzC
2022-12-17 19:03:54 @beenwrekt lol but meaningful, low stakes applications do not make for nice demos, Ben!
2022-12-17 18:35:08 Unpopularish opinion but I don't think it's mainly the sim2real problem that stunted RL's impact - that community tends to focus on the wrong problems. And I can see a similar issue blocking LLM's future impact. https://t.co/13YQhFw5i8
2022-12-17 02:58:45 @colin_fraser Totally agree and also a pattern that's evident with YouTube - a lot of why there's so much misinfo on there is because content creators who intend to deceive face no repercussions and in fact game the platform's features (inclu the algorithm) the most: https://t.co/2W57qA2pa5
2022-12-17 01:39:11 @JubaZiani @Aaroth @Adam235711 I'd agree with this! + FAccT tends to include a lot more empirical work (eg. audits, data releases, experiments, etc.) + AIES includes more participants from a philosophical/policy/law perspective. Though there's lots of cross-pollination, so may not matter too much actually
2022-12-17 00:59:46 There are other aspects of these platforms though - user interfaces, actual content format, nature of user interactions, etc - that *does* have a huge impact on these downstream behaviors &
2022-12-17 00:57:30 Research points to this: https://t.co/PUhrPeRGbH It's pretty much known at this point that targeted ads/recs don't actually work as well as we assume they do in influencing downstream behavior. If the algorithm can't even get me to buy a sofa, how can we say that it sways votes?
2022-12-17 00:50:59 Appreciate this. In fact, there's something I've been calling the "algorithmic irrelevance" theory, where I suspect that most of what is problematic about online platforms (ie. addiction, misinfo, radicalization) is actually mostly due to design elements outside of the algorithm. https://t.co/XlevqHLphe
2022-12-17 00:43:37 @followlori @brianavecchione Hope you have a great holiday as well!
2022-12-17 00:35:26 Glad to see OAT team member @brianavecchione receive some recognition for her role in this project! It's been a pleasure to work with her on this so far! Details here: https://t.co/d505vH7uUH https://t.co/5Kwo5J3hrs
2022-12-15 21:03:07 @overlordayn @neuralreckoning @pfau Intuitively, for this reason things should be opt in but things are pretty complicated...even Creative Commons has been really confused about what guidance to provide: https://t.co/Z7BMK5iUiy
2022-12-15 21:01:21 @overlordayn @neuralreckoning @pfau Fwiw this same issue came up with IBM's "diversity in faces" dataset &
2022-12-15 20:53:10 RT @NicolasPapernot: The list of papers accepted @satml_conf: https://t.co/23PLF2bqIh I'd like to extend a big thank you to all the PC me…
2022-12-13 16:18:55 @KordingLab @beenwrekt There's an interesting point here about accessibility though - ie. OpenAI has an API anyone can use, meaning the model's impact increases &
2022-12-13 16:16:22 @KordingLab @beenwrekt Sure, but I mean they drive research at least - and the way Deepmind rolls them out, they still make the news and break into mainstream consciousness.
2022-12-13 16:14:00 @beenwrekt @KordingLab Both of them have ridiculously flashy demos and that's been hugely influential - don't you remember? there was a whole *movie* on AlphaGo! And I think people severely underestimate how excited people used to be about BERT - it was everywhere! That set the blueprint for OpenAI imo
2022-12-13 16:08:35 @beenwrekt @KordingLab Yeah, I don't disagree - but in terms of "who started this madness" I still feel like it's Google/Deepmind? Even the original PR machine for AGI etc was coming from Deepmind before Open AI was even founded.
2022-12-13 15:31:00 @beenwrekt @KordingLab Hm - Deepmind/Google remain a pretty consistent source of flashy demos to this day, powered by GCP credits, TPUs and... nature pub hype. BERT wasn't the best ofc but it was the pioneer imo. I'm not arguing that gpt-x isn't an improvement, but it wouldn't exist without BERT.
2022-12-13 15:06:42 @beenwrekt @KordingLab A fun synopsis on that era here: https://t.co/jECntrLcqE
2022-12-13 15:04:53 @beenwrekt @KordingLab fwiw I think Google is really the first mover here, with BERT and the subsequent sesame street models. That is really the origin of this madness lol
2022-12-13 14:27:23 RT @sharongoldman: ***BREAKING UPDATE***: Enforcement of NYC's AI employment law is being delayed until April 15, 2023. It was supposed to…
2022-12-13 04:59:59 This over-sexualization of female subjects in generated images is something we've known since at least 2020 (see @aylin_cim &
2022-12-13 04:23:39 RT @Melissahei: I tried the viral Lensa AI portrait app, and got lots and lots of nudes. I know AI image generation models are full of sexi…
2022-12-12 21:18:38 This paper keeps coming up over &
2022-12-12 18:23:39 Excited to join this panel today! David's book was such a lovely and informative read - highly recommend. https://t.co/Qh6QU7mjOP
2022-12-12 13:16:53 @deingaraus @Abebab Thanks - will flag this for the copyeditor!
2022-12-10 02:21:50 @JoannaBlackhart @DocDre @Abebab Hm not sure - I can't even read it without signing in, sorry
2022-12-09 18:50:46 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai In response to this, we cite several instances where Meta leadership blame the *users* for what happened with Galactica - even though the scenario that played out was to be fully expected, given what we know about harms. This is where we came from - ofc, you don't have to agree
2022-12-09 18:48:10 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai I'm not talking about the quality of the technological output - I'm talking about the nature of the handling of involved harms
2022-12-09 18:34:21 @andywalters @WIRED @Abebab @huggingface @SashaMTL @mmitchell_ai But I don't think it's unreasonable to point out that we're not as far forward as we think - and that critics acknowledging these limits are still being dangerously dismissed. Years after Tay, and we're still choosing to blame users for what happened with Galactica? Frustrating.
2022-12-09 18:31:32 @andywalters @WIRED @Abebab Hm - I don't agree with this, though I see where you're coming from. We could definitely have done more to acknowledge some of the progress that's been led by eg. @huggingface folks like @SashaMTL, @mmitchell_ai etc.
2022-12-09 18:28:03 RT @struthious: 'it seems to be the job of the marginalized to “fix” them... The weight falls on them, not only to provide this feedback, b…
2022-12-09 15:51:36 RT @Abebab: "We critique because we care. If these companies can't release products meeting expectations of those most likely to be harmed…
2022-12-09 15:51:11 Me &
2022-12-09 08:32:55 @itsHabeeb_AB @OpenAI Nope, I did not!
2022-12-08 19:27:14 @jjvincent Agreed with everyone else - well deserved!
2022-12-08 19:26:55 @jjvincent omg, congrats!!
2022-11-15 18:00:11 RT @lkirchner: Great to see my work with @MattGoldstein26 at @nytimes and @themarkup cited in this new @CFPB report out today on errors in…
2022-11-15 17:49:44 @thatMikeBishop I notice a particular dogmatism common in that crowd but also, I can see this isn't something I'll be successful in convincing you of, which is fine. Wish you and your peers the best as you process your emotions!
2022-11-15 17:48:14 @thatMikeBishop I don't agree with this - I'm in a CS PhD program, I interact with non-EA "specialists" all the time, and have no problem having meaningful, respectful disagreements with them.
2022-11-15 16:52:47 RT @transparenttech: It's launch day for the Coalition for Independent Technology Research! Society needs trustworthy, independent resear…
2022-11-15 15:52:55 @thatMikeBishop Many accepted norms by EA folks (ie. malaria nets, AI x-risk, etc.) were just seen as given priorities that were difficult (at least for me) to meaningfully push back on - the pitch was to donate to supposed trustworthy actors, of which only the natural outsiders seemed to doubt.
2022-11-15 10:50:05 @AdtRaghunathan @_christinabaek @jacspringer Hey!!
2022-11-14 19:22:43 @ellis2013nz yeah, tbh I don't fully understand their logic here either - but for me, it's another example (like their bet on crypto) that they've looked at at least one other situation and followed the "ends justify the means" argument before! It's been part of their playbook long before SBF.
2022-11-14 19:11:41 @ellis2013nz Oh, I believe there is a pretty direct link - EAs see advanced AI as an existential risk
2022-11-14 19:01:20 @ellis2013nz Think of GPT-3, DALL-E, etc at OpenAI as examples. Many have warned about the dangers of developing such under-specified &
2022-11-14 18:57:01 @ellis2013nz model = large machine learning models being presented as "AGI" by people in the effective altruism community
2022-11-14 18:56:10 I really hope, for the sake of the wellbeing of those still involved in EA, that their leaders take responsibility, rather than attempt to circumvent it. It's clear that changes need to be made in many ways, rather than just ejecting this one person as some anomaly when he's not.
2022-11-14 18:42:31 Example: We've been pointing out for years that the blind development of large "general" models poses a threat to real people. The EA-funded efforts to continue building such models despite known harms have been justified by the exact kind of reasoning that led to SBF's downfall.
2022-11-14 18:42:30 This is a textbook example of the "No True Scotsman" fallacy: https://t.co/n0NqOwlnuZ I get that MacAskill &
2022-11-14 18:17:18 RT @FAccTConference: We're so excited about next year's #FAccT23 Conference! Taking place in Chicago, in mid June, the General Chairs are A…
2022-11-14 16:15:25 @yonashav All bystanders involved - esp those w/ institutional power - contribute to that environment. You can't point to a bad actor &
2022-11-14 16:10:25 @yonashav I'm quite familiar with this "bad apples" argument. What I've learnt from other contexts - eg. violent cops, abusive academic advisers, etc. - is that bad apples can only cause harm once enabled by an environment void of accountability.
2022-11-14 15:01:01 This is a catastrophic collapse of a community that clearly meant a lot to some, and I feel for them. But I will say this: the whole premise of EA, from the beginning, has been "trust us" - they need to acknowledge the value of the poc &
2022-11-14 13:44:14 @BetsyDupuis @avt_im @BlackHC @timnitGebru Also DAIR operates under a completely different incentive structure from academia. I've never had an encounter with her or anyone else there where they cared in the slightest bit about citation numbers or who is quoting them - they're certainly not scared to critique OpenAI lol
2022-11-14 13:39:05 @BetsyDupuis @avt_im @BlackHC @timnitGebru This isn't true? Timnit's regularly commented on the copyright issues involved with all the generative models developed by OpenAI, including Co-pilot (esp their use of open source code). Her lack of response to you specifically is likely just due to basic capacity constraints.
2022-11-14 13:17:46 RT @conitzer: New tenure-track position in Ethics &
2022-11-11 18:38:29 RT @agstrait: ALERTALERT@AdaLovelaceInst are hiring a Visiting Senior Researcher in Algorithmic Auditing - if you're interested in spe…
2022-11-10 16:27:42 RT @ellgood: Thanks for shout out @StanfordHAI: My paper with @juliatrehu on AI Audit Washing and Accountability. "This is an important pie…
2022-11-10 14:10:16 RT @schock: Okay @UCSD! "The Designing Just Futures Cluster Hire seeks to recruit diverse faculty engaging in innovative and interdisciplin…
2022-11-10 09:56:22 RT @benzevgreen: Deadlines coming up soon for two faculty jobs at Michigan focused on the intersection of technology and policy:1. Pr…
2022-11-10 09:54:11 RT @wihbey: Apply! @Northeastern Faculty position in AI &
2022-11-09 16:27:14 RT @natematias: How can software systems support citizen scientists to do causal audits of algorithm decision-makers?Excited to join CSCW…
2022-11-09 05:03:18 RT @sayashk: Our paper on the privacy practices of labor organizers won an Impact Recognition award at #CSCW2022! Much like the current m…
2022-11-08 20:14:08 RT @federicobianchy: Text-to-image generation models (like Stable Diffusion and DALLE) are being used to generate millions of images a day.…
2022-11-08 14:14:55 @voxbec Oh, amazing! Appreciate this so much
2022-11-08 09:41:55 @athundt Hey - we accepted everyone that sent us a request? Are you still waiting for a slack invite? If so, we must have missed you, please shoot us another email!
2022-11-07 23:36:52 @emilymbender Don't think anyone on our team sees ethical considerations as secondary to technical merit - in fact, @SashaMTL in particular fought hard for ethics reviews to factor meaningfully into the author/reviewer discussion period because of her belief in it as a primary consideration!
2022-11-07 23:33:22 @emilymbender I realize this wasn't the best wording for that though but the general idea was to avoid setting "red lines" via the ethics review process and to do that via norm-setting practices instead (such as community deliberation on the Code of Conduct, etc.).
2022-11-07 23:32:02 @emilymbender - it was meant to comment on the fact that legal &
2022-11-07 23:30:55 @emilymbender I understand your perspective here, and I realize how it could be read otherwise but AFAIK this paragraph was not meant to present a false dichotomy between technical merit &
2022-11-07 19:26:03 @justinhendrix Yeah, been feeling the same way lately. Wasn't built for this purpose but some of us are here if this overlaps with your interests: https://t.co/2Tr2BQFClg
2022-11-07 16:36:21 @suryamattu So excited to hear about this, Surya!+ you might be interested to join our slack community for those doing algorithmic audit work, as another way to stay in touch: https://t.co/2Tr2BQFClg
2022-11-07 15:35:04 RT @suryamattu: I am excited to officially announce the launch of the Digital Witness Lab, a new research lab I am starting @PrincetonCITP…
2022-11-07 14:17:03 Our thinking about the diversity of ethical challenges in ML research has also matured a lot over the years. There's an increasing awareness of how ethical oversight is meant to be integrated into the research process &
2022-11-07 14:17:02 To me, it's remarkable just how much the conversation has evolved in just a few short years - feels like just yesterday that @IasonGabriel pioneered the effort with broader impact statements for NeurIPS 2020 &
2022-11-07 14:17:01 Hard to believe but the @NeurIPSConf Ethics Review process is over - and has completed its third year! In a blog post, with co-chairs @SashaMTL, @wsisaac &
2022-11-06 13:39:51 Already amazed at who has joined this So incredible to see the diversity &
2022-11-06 13:30:35 @CatalinaGoanta Also feel free to link me to your papers
2022-11-06 13:29:32 @CatalinaGoanta Do you have some examples of this? It seems to lure folks into Youtube Red for example, they provide professionally produced content (ie. Youtube Red TV shows &
2022-11-06 13:20:29 @CatalinaGoanta Like, no one would pay a subscription for user generated content, right? (at least I can't think of a situation where this is the case...) Which is why those platforms tend to brand as social media platforms &
2022-11-06 13:16:11 @CatalinaGoanta Yeah, for sure! Though I think perhaps my intuition of a difference is more tied to diffs in content creation practices - in netflix/spotify, they operate as distribution platforms for professionally produced content vs. youtube etc where it's user generated content? Not sure tho
2022-11-06 13:12:29 @agstrait @carlykind_ @K_singh_P Thank you! Looking forward to checking that out!
2022-11-06 04:29:06 @yoavgo @K_singh_P ... possibly higher tolerance in the latter scenario!
2022-11-06 04:28:30 @yoavgo @K_singh_P yeah, exactly - since the friction to just hop off the platform is much lower than what's required to unsubscribe
2022-11-06 04:13:36 @K_singh_P Interesting. How is quality typically measured here?
2022-11-06 04:11:01 @841io Do you have a sense on how this impacts content creation, though? Clear differences of quality/flexibility in content under the sub model &
2022-11-06 04:07:52 @841io Oh nice - that's really interesting, thanks for sharing! Yeah, someone else also suggested that comparing the freemium / paid models on the same platform would be the kind of investigation you'd want to do on this (ie. Youtube vs. Youtube Red).
2022-11-06 04:04:49 @natematias Oh nice! Excited to check out that article once it's out!
2022-11-06 04:04:09 @K_singh_P Aha, I have all these possible intuitions but I'm genuinely not sure, which is why I asked aha
2022-11-06 04:03:45 @K_singh_P Also each has a very different mechanism for content creation (ie. more professional / less dynamic &
2022-11-06 03:58:49 @K_singh_P I'm not sure - both are trying to keep users on the platform, but for different reasons. One is about minimizing cancellation rate of subscribers &
2022-11-06 03:47:21 A random question but has anyone done research on the differences between the recommendation ecosystems for subscription-based media platforms (ie. Spotify, Netflix, etc.) vs. ad-revenue based user content platforms (ie. YouTube, etc.)? Often conflated but feels very different.
2022-11-04 17:52:08 @mmitchell_ai @KLdivergence +1! You all have our full support, Kristian. Let us know whatever you need!
2022-11-04 17:43:55 RT @WIRED: Breaking: As part of an aggressive plan to trim costs that involves firing thousands of Twitter employees, Musk’s management tea…
2022-11-04 17:24:39 RT @KLdivergence: All of twitter’s ML Ethics, transparency, and accountability team (except one). was laid off today. So much for that resp…
2022-11-04 17:20:01 RT @jackbandy: A sample of the team's contributions to platform transparency and responsible machine learning:"Candidate Set Imbalance an…
2022-11-04 17:16:35 RT @SashaMTL: Interested by the @NeurIPSConf ethics review process? Take a look at the blog post below and, more importantly, come to our…
2022-11-04 15:43:14 @KLdivergence I'm so sorry, this is awful Hope you're doing ok
2022-11-04 13:59:18 Man, I'm gutted about this Twitter META news - those guys were the reason I had such a blast @FAccTConference this summer! Truly amazing people, recruited &
2022-11-04 13:48:54 Hi people - so sorry for the delay
2022-11-03 17:23:37 RT @NatureNV: As part of @nature’s special issue on #racisminscience, @abebab looks at the massive effect that the #gendershades study had…
2022-11-03 17:23:12 RT @statnews: Opinion: STAT+: HHS’s proposed rule prohibiting discrimination via algorithm needs strengthening https://t.co/6CpDMlwTAV
2022-11-03 17:08:47 RT @christelletono: NEW REPORT ALERT: @ystvns @MominMMalik @SonjaSolomun @supriyadwivedi @sambandrey and I analyze the Canadian government…
2022-11-01 16:56:20 @timnitGebru @AJLUnited @jovialjoy
2022-11-01 14:30:12 RT @Wenbinters: new report out from @EPICprivacy Screened + Scored in D.C. https://t.co/7fDrM2LNhP three main goals: -birds eye view of…
2022-11-01 14:19:21 RT @sarahbmyers: Excited to moderate a conversation on Automated Decisionmaking Systems this morning with @random_walker, @mikarv and @raji…
2022-11-01 14:19:06 RT @ambaonadventure: .@FTC #PrivacyCon22 is live! We're starting with two stellar panels on surveillance &
2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD
2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…
2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…
2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &
2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…
2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that
2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GPT-x, seems like many tweaks operationalize some prior knowledge of the language form.
2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.
2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position
2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!
2022-10-24 17:35:47 @pfau @memotv lol depends on how you interpret that quote - some don't see that as a dismissal of linguistics but a note of the lack of self-awareness in NLP that gets heightened once you take linguistics out of the equation (ie. its easier to convince yourself you're making progress w/o them)
2022-10-24 17:33:05 @pfau @memotv @yoavgo I disagree with both of you though :)
2022-10-24 17:31:53 @pfau @memotv Oh sorry, to clarify: didn't mean to imply you were involved in the linguistics spat at all - that was another debate that happened a couple months ago, with I believe @yoavgo or someone else indicating linguistics hadn't done as much for NLP as they thought they did.
2022-10-24 13:57:51 @memotv The latest iteration of this beef started with neuroscientists
2022-10-24 13:55:21 RT @vonekels: We’ll be presenting our Bias in GANs work at @eccvconf on 25/10 at 15:30.One of our findings ~ Truncation commonly used to…
2022-10-24 13:53:03 @iamtrask wait this is incredibly disappointing
2022-10-23 17:48:37 @neuropoetic @pfau @martingoodson Yeah I'm thinking this is just trolling at this point - even the Zador paper that prompted all of this provides this context in its exposition
2022-10-23 17:42:56 @pfau @martingoodson Please read anything on the internet available to you - the facts of Hinton's career aren't even worth debating about: https://t.co/g5UW0wHLxR
2022-10-23 17:35:36 @pfau @SiaAhmadi1 Hinton's degree was in cogsci but computational neuroscience was the subfield he was most active in for a long time. Things like neural nets were initially derived as models of the brain to better understand the brain - he just had suspicions it could also inform info processing
2022-10-23 17:24:41 @pfau @SiaAhmadi1 Still such a bizarre and incorrect take - where do you think that intuition comes from..?
2022-10-23 12:08:40 Annoyed by this latest trend of machine learning researchers insisting that they absolutely did not need anything that came before. It's obvious that <
2022-10-23 11:53:50 @pfau @martingoodson Pretty sure Geoff has read many neuro papers - his background is literally in cogsci? Also, conferences he founded like NeurIPS began focused on attempts at modeling brain behavior using computers to better understand the brain - for a while, there was still a comp neuro track.
2022-10-20 17:32:27 RT @ruchowdh: The 8 bit bias bounty is now live!! Thank you @Melissahei for the article on what the bounty program means in context of the…
2022-10-18 21:56:01 I agree with this. But also, there's so many cases where we can put together an oversight board and just ...make standards.. I don't know why it is always presented as this impossible task. That's literally what happened in every other industry with a mature audit ecosystem. https://t.co/RMynT318ro
2022-10-18 21:49:36 RT @kanarinka: Landlords increasingly use Tenant Screening Systems to make decisions about prospective renters. In this paper @wonyoungso…
2022-10-15 17:57:52 @zacharylipton Phat! Lol who even taught him these words
2022-10-14 12:35:14 @a_bacci @hrw @F_Kaltheuner @AmosToh @deblebrown @kerinshilla Congrats Anna!
2022-10-14 11:11:16 @HaydnBelfield @mmitchell_ai @carinaprunkl @jesswhittles Was just about to link to this. My least favourite version of this discourse is the "technical vs non-technical" safety, "long-term vs short-term" harms talk, etc. Completely ridiculous false dichotomies.
2022-10-12 16:59:26 @daniellecitron @macfound Congrats @YejinChoinka!! So happy to see you on the list - well deserved
2022-10-11 18:40:09 @npparikh Most audit policies just happen to be incomplete, not accounting for the full range of factors it would take for the audits to not become shams. There's a lot we can learn about this from other industries, and imo it is worth getting this right.
2022-10-11 18:37:30 @npparikh I can see where you're coming from, but I don't agree. There's lots of precedent of third party audits playing a critical role in accountability in other industries, especially when standards are set for auditor conduct and audit expectations. Check out: https://t.co/nXiIE6ELAn
2022-10-11 18:30:30 @npparikh Yeah, I can also see this being a great space to discuss critiques as well!
2022-10-11 18:26:09 @npparikh I'm thinking both! We need better norms but also sometimes people need help on specific cases
2022-10-11 16:35:45 RT @BedoyaFTC: 35 years ago last night, my mother, brother and I landed at JFK on a long Lufthansa flight. My father had gone ahead of us
2022-10-11 13:51:29 @wsisaac @agstrait lol yeah I thought of you William - wondering if there can be a way to do this via REALML, but something open and low effort (like opening up a section of the Slack or something?)
2022-10-10 17:46:37 @_jasonwei hm, I don't think so - as far as I can tell, her first tweet was about having no evidence for claiming LLMs can get to above-human-level understanding/reasoning, which doesn't omit the possibility of LLMs doing cool things, and possibly beating humans on certain practical tasks.
2022-10-10 17:43:01 @sleepinyourhat I'm also unsure about how much we should actually expect in terms of differences from BERT failures &
2022-10-10 17:38:59 @sleepinyourhat Sure - but I don't know if there's actually any signal to any of those hints
2022-10-10 16:52:35 We want to potentially create a space for folks doing or interested in algorithm audit work. There's so many of us across disciplines (journalism, HCI, regulators, law, etc.) and not a lot of coordination, would be great to have some communal space to discuss &
2022-10-10 16:38:31 RT @DrMetaxa: A tweet for Algorithm Auditors &
2022-10-10 15:56:45 @raphaelmilliere @sleepinyourhat Hm but even the @sleepinyourhat paper points out that adversarial design isn't the same as working twds principled abstractions of the linguistic competences we hope models have. It's a step forward to incorporate robustness measures but this doesn't guarantee meaningful tasks.
2022-10-10 15:34:25 @raphaelmilliere @sleepinyourhat Great point! I agree you can't know one way or the other, but task artificiality isn't just about how "easy" the test is - but also how carefully constructed the test is. I notice a lack of principled justification in a lot of LLM task design &
2022-10-10 13:06:24 There are many tasks that LLMs are doing great at and for which scale helps a lot, but it's really questionable to claim these models are achieving some generalized "human-level" linguistic competency, esp when the vast majority of such tasks don't measure anything close to that.
2022-10-10 13:06:23 I don't think anyone disagrees that you can get an LLM to beat a human rater on some set of arbitrary challenges and in this case, yes ofc those wins can be attributed to scale and some of those tasks are quite practically interesting &
2022-10-07 16:21:20 RT @StanfordHAI: Last chance to submit ideas by Oct. 10 The $71K #AIAuditChallenge invites individual researchers or teams to submit…
2022-10-05 15:19:44 @leeahdg It's a new guideline from the WH. Details here: https://t.co/samDHveWsd
2022-10-05 04:37:51 A great thread on the latest FDA guidance from @kdpsinghlab, who has himself been involved in calling out the risks of some of these AI/ML-enabled healthcare products that have been on the market for far too long without any proper regulatory scrutiny. https://t.co/1fpOQlpslf
2022-10-04 21:08:19 @MarkSendak Fair - but it's an important shift in comms imo. There was never going to be a one-size-fits-all model for this problem, and I think the investigations cornered them into admitting this.
2022-10-04 20:38:01 The new AI Bill of Rights is exciting - it's been difficult to get those in power to make such strong commitments. However, I admit I'll be *most* excited to see the first instances of recall, first successful cases for recourse, etc. Those concrete actions will be the real win!
2022-10-04 20:20:06 This is still my favourite kind of AI-related news: companies being cornered into taking concrete actions (ie. updating or recalling products/comms) in response to AI accountability efforts - in this case, empirical investigations into Epic's Sepsis tool. https://t.co/e6M9tYK8Iy
2022-10-04 19:44:52 @hannahsassaman Wow, this is so great to see - go Fabian!
2022-10-04 19:37:03 @Aaron_Horowitz @Wenbinters lol you and your supreme court rants yeah this is objectively interesting though - the question of intent seems pretty central to US anti-discrimination law, what a horrible precedent
2022-10-04 19:32:40 @schock lol love your hot takes :)
2022-10-04 19:17:21 @Wenbinters @Aaron_Horowitz Thanks!
2022-10-04 19:05:05 @Aaron_Horowitz What case is this about?
2022-09-30 22:19:26 @LeonYin @LoebAwards @adrjeffries @elarrubia @JuliaAngwin @themarkup Whaaatt- this is huge, congrats!! Very well deserved
2022-09-29 20:11:48 @_alialkhatib @schock @jovialjoy Thanks for hyping up the work, Ali! Glad you enjoyed the paper.
2022-09-29 20:10:02 RT @timnitGebru: "Open source software communities are a significant site of AI development, but “Ethical AI” discourses largely focus on t…
2022-09-29 20:09:52 @timnitGebru @FAccTConference lol @Abebab this is the Facct paper I was just talking about! Was just about to send it to you aha
2022-09-27 01:57:18 @adjiboussodieng @sh_reya Yeah perhaps we're talking over each other - though at least the "externally managed" bit of things seems to be what Kaggle does... Ie. https://t.co/6dDjRua7yN But yeah if I'm misunderstanding, let's just leave things here lol
2022-09-27 01:35:38 @adjiboussodieng @sh_reya But they also observed this with a lot of other well-used Kaggle datasets in the Roelofs et al paper... Not sure if it's exactly what you're looking for, but probably worth checking out as a starting point!
2022-09-27 01:30:01 @adjiboussodieng @sh_reya Yeah that's what you'd observe if you were "overfitting" performance on a given benchmark. We don't see that experimentally happen in this case, since the ranking of models on the validation set ultimately still matches the model ranking on the test set (at least with ImageNet)
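The ranking point in the tweet above can be sketched concretely: benchmark "overfitting" would show up as models *reordering* between the validation split and a held-out test split, so checking whether the two rankings agree is the diagnostic. A toy illustration (not from the thread; model names and scores are made up):

```python
# Hypothetical accuracies for four models on a benchmark's
# validation and held-out test splits (made-up numbers).
val_acc  = {"A": 0.81, "B": 0.79, "C": 0.76, "D": 0.74}
test_acc = {"A": 0.78, "B": 0.77, "C": 0.73, "D": 0.70}

def ranking(scores):
    # Model names sorted best-to-worst by score.
    return sorted(scores, key=scores.get, reverse=True)

# If the orderings agree, relative comparisons on the benchmark are
# still informative even though absolute test scores are lower -
# the pattern reported for ImageNet above.
print(ranking(val_acc) == ranking(test_acc))  # prints True
```

Absolute scores dropping while the ranking holds is consistent with the benchmark remaining internally valid for comparing models.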
2022-09-27 01:22:48 @adjiboussodieng @sh_reya Yeah, the Roelofs et al paper is speaking to that case - I'd start there!
2022-09-27 01:20:16 @adjiboussodieng @sh_reya That being said, the Roelofs et al paper is discussing that static case, about when a benchmark is no longer useful (ie when we overfit to a static data benchmark)
2022-09-27 01:18:10 @adjiboussodieng @sh_reya As in you want to improve the static case? "Externally managed and updated often" is often only applicable when the data changes, no?...
2022-09-27 01:16:09 @adjiboussodieng @sh_reya They talk in those papers about overfitting from test set reuse? Perhaps I'm not getting what you're talking about?
2022-09-27 01:14:52 @adjiboussodieng @sh_reya A lot of the conversation on streaming eval in ML Ops can be seen as an alt to the static data benchmark paradigm. It's different from the adversarial benchmark setting of something like Dynabench.
2022-09-27 01:01:43 @adjiboussodieng @sh_reya Ultimately though, there's evidence that benchmarks are at least internally valid measurements &
2022-09-27 00:58:37 @adjiboussodieng There's a lot about the ML evaluation paradigm that can be improved - we've written about it here: https://t.co/XwEsptquC8 + I like @sh_reya's take on one way forward here: https://t.co/zUkRmCYvJO
2022-09-25 09:03:28 RT @natashanyt: A fascinating new study in Science details how LinkedIn ran social experiments on 20 million users over 5 years.It shows h…
2022-09-21 19:13:54 h/t @MicahCarroll for flagging this for me, and congrats to the team @mozilla for such an impactful audit study! More details here: https://t.co/telvA80zof
2022-09-21 19:13:53 Analysis of 567 million YouTube video recs from ~23k users revealed that participatory controls (e.g. dislike button, "not interested" pop-ups) are effectively useless - the most one can do to remove unwanted recs is...removing a video from watch history! https://t.co/d6e8HSHLXc
2022-09-21 00:22:28 @ZeerakTalat @mayameme @drlulzzz Wowowow! Congrats!
2022-09-20 19:36:55 @KLdivergence @RiceUniversity @Dr_TalithiaW forever young ~
2022-09-20 19:01:25 ICYMI Mozilla is funding the development of AI audit tools! For those in the algorithm audit space, this is a great way to access the resources (financial and otherwise) to build or develop your projects. Please apply! Applications close on October 5th https://t.co/lOlIV9HilH
2022-09-20 14:15:05 @KLdivergence @RiceUniversity @Dr_TalithiaW Congrats, Kristian! lol you ARE young what aha
2022-09-20 14:08:47 RT @annargrs: #NLPaperAlert #COLING2022Machine Reading, Fast And Slow: When Do Models "Understand" Language? TLDR: instead of claiming…
2022-09-19 00:08:50 RT @SymposiumML4H: @beenwrekt speaks at @SymposiumML4H this yearCommon ML assumptions do sometimes end up de-facto ML laws. Ben's track-r…
2022-09-16 19:55:14 @LauraEdelson2 Congrats!
2022-09-11 02:35:42 RT @kurtopsahl: The WSJ has written up a nice obituary for Peter Eckersley, recognizing his great work encrypting all the things, and being…
2022-09-09 01:51:23 RT @neerjathakkar: Our ECCV ‘22 paper “Studying Bias in GANs Through the Lens of Race” is now out! https://t.co/NWojyXq2yV This work was do…
2022-09-08 20:58:26 @Miles_Brundage So sorry for your loss Hope you can get some rest and the space you need
2022-09-07 21:02:55 Maybe it's a coincidence but in at least both of those cases, the tech was gravely under-vetted, failing to hold up to even the most minor form of external scrutiny. You would think for something so critically important to so many people, there would be more effort in evaluation.
2022-09-07 20:57:39 Such a great &
2022-09-07 19:27:01 @mdekstrand @jjoque @1roboter @Abebab @FrankPasquale @kaiy1ng @alexhanna @WolfieChristl @sayashk @random_walker @jw_lockhart @ShobitaP @LinaDencik @az_jacobs @stalfel @hypervisible @benzevgreen @geomblog @gleemie @danmcquillan @ProfFerguson @AngeleChristin +1, thanks for sharing - this looks great!
2022-09-07 19:25:39 RT @1roboter: Accuracy claims are also rhetorical tools to convince others that opaque algorithms work. In a new paper, I unpack how high a…
2022-09-07 14:09:26 RT @phillipdawson: For years I've been trying to get any proctoring company to agree to a study where I try to cheat. None have agreed. I'v…
2022-09-06 16:25:29 @mer__edith @signalapp Congrats, Mer! Such a good fit for your skillset!
2022-09-03 22:44:06 It's crazy to think that so many of the things we talked about then are making their way into the real world now. And I know as a fact there was so much more he still wanted to *do*...My condolences to his family and loved ones - this certainly feels like he's gone too soon
2022-09-03 22:38:09 I'm shocked &
2022-09-02 08:50:03 @Miles_Brundage @natolambert @_joaogui1 @negar_rz @bakztfuture lol I think this is the paper you're referring to: https://t.co/nXiIE6ELAn
2022-08-31 23:29:20 RT @Carlos_MFerr: If machine learning models and code are two different things, why should the former be governed by licensing mechanisms d…
2022-08-31 15:26:39 @mattierialgirl @timnitGebru @MilagrosMiceli Yeah, Timnit's advice is the best I've heard for dealing with this: "live as though this is the rest of your life" - ie. "Would you want to live the rest of your life this way?" That advice woke me up from over-doing it while I was at a startup, guess it's time to re-visit that
2022-08-31 14:42:51 @timnitGebru @mattierialgirl lol step by step
2022-08-27 20:54:48 @morgangames @andrewthesmart Lol I'm not actually talking about only myself here or any personal concerns re:productivity - a lot of students struggle to figure out a way to step away from things responsibly. What concerns me is how difficult it can be for many of us to navigate such requests in academia.
2022-08-27 19:10:34 @IAmSamFin For sure, but I really don't think toxic advisors are the main reason most people struggle with this. Like I said, my advisor is great! It's just genuinely much harder to set boundaries in an unstructured environment. It just takes a lot of communication to navigate responsibly.
2022-08-27 19:02:47 @IAmSamFin *env, as in environment
2022-08-27 19:02:26 @IAmSamFin Another challenge here is the pseudo-voluntary nature of everything. Technically anything is permissible but it's not all equally acceptable or well received. You do have real responsibilities, not just to your advisor but many others as well. It's just a tricky wnv to navigate.
2022-08-27 18:43:22 @IAmSamFin Yeah, I also figured that perhaps what you were really asking more about was how consequences would differ from industry vs what are the consequences in general
2022-08-27 18:39:15 @IAmSamFin In academia, externally imposed and less flexible deadlines tend to make things harder to navigate. And the lack of structure puts a lot of emphasis on personal responsibility, which can make it seem like some kind of personal failure to take a step back, even for good reason.
2022-08-27 18:13:45 @IAmSamFin I'm also not talking about taking a day or an afternoon off or working away or from home - ofc many have that flexibility there. It's about needing to halt project contributions completely for an extended period of time and needing the grace to miss the many, frequent deadlines.
2022-08-27 17:57:38 @IAmSamFin Yeah, I also have a great advisor - but not everyone else does. And even then it can be tricky to communicate about this to the plethora of other stakeholders you're involved with. Also depending on what stage you're at, it's hard to do so without facing professional consequences.
2022-08-27 17:46:48 RT @dcalacci: one part of the gigaverse: auditing the pay algorithms that gig platforms use. listen to the latest @radiolab ep to hear how…
2022-08-27 17:41:35 @sh_reya Yeah - I worry so much about being perceived as disrespectful, lazy etc especially in moments when I know I'm struggling to communicate. It's so hard to get people to understand that you're actually unavailable, and not just trying to escape responsibility.
2022-08-27 17:30:07 @sh_reya Wow, love that you've been able to figure out what works for you -something else I notice about your approach is that you're on when you're on and off when you're off. That's something I want to lean more into and I think it'll help with setting boundaries &
2022-08-27 17:24:33 Personally still learning how to navigate such moments responsibly, but also deeply uncomfortable with current norms, many of which are institutionally reinforced. People shouldn't have to rely so much on the empathy and kindness of individual actors to get the space they need.
2022-08-27 17:24:13 @KarlTheMartian Yeah, ideally it does get somewhat easier over time though - as I get more familiar &
2022-08-27 16:50:45 Honestly, the scariest thing so far about academia for me has been how difficult it can be in many cases to truly take time off. And not just for fun vacation purposes - but even for important life events, family emergencies, health reasons, etc.
2022-08-25 15:51:44 @random_walker Congrats to you both! This book is sorely needed.
2022-08-25 15:49:07 RT @jshermcyber: Highly interesting and important paper by @schock, @rajiinio, and @jovialjoy published June 2022 on the idea of algorithmi…
2022-08-22 21:33:40 RT @emilymbender: Soo.... the Stable Diffusion model is now available (incl weights) for download from HuggingFace. On the plus side, it's…
2022-08-22 11:59:53 RT @danish_c: The exercise of developing a RAIL license at the @BigscienceW opened up interesting real-world questions -- what is the artif…
2022-08-18 11:48:09 RT @ang3linawang: This summer I went to my first two in-person conferences in grad school, FAccT and ICML, and you’ll never believe what ha…
2022-08-16 07:57:32 RT @brandonsilverm: This is a *great* write-up and a really useful overall frame for thinking about different transparency options for lawm…
2022-08-16 03:10:25 Yikes. https://t.co/VjekzrauSR
2022-08-14 22:15:23 @klakhani @janusrose @andrewthesmart @andrewthesmart made them for FAccT 2020 and gave a couple out (glad I have one!) - not sure what the status is now though, seems like there's so many copies floating around at this point
2022-08-14 20:51:17 @janusrose @andrewthesmart please, you've got to start selling these!! it goes viral every few months
2022-08-14 20:31:55 RT @kenarchersf: This is spot on. The “Closing the Accountability Gap” paper (@mmitchell_ai, @rajiinio, @timnitGebru et al) calls for FMEA…
2022-08-12 23:43:18 RT @RoxanaDaneshjou: I've talked a lot about the lack of representation in AI datasets in dermatology and the concerns around algorithm bia…
2022-08-12 06:16:04 @SubhankarGSH Yeah, I should have clarified I meant *external* funding options in my original tweet - folks will often apply to external fellowships to increase their independence and stipend
2022-08-11 22:44:40 @richardson_m_a Not sure if this is what you're looking for, but we attempted to taxonomize algorithmic failures in practice observed here: https://t.co/zrQTyxxBwC
2022-08-11 16:44:02 RT @sayashk: On July 28th, we organized a workshop on the reproducibility crisis in ML-based science. For WIRED, @willknight wrote about th…
2022-08-10 16:13:31 @AmandaAskell Though I don't know a lab or student doing AI work right now that would refuse an offer for free or discounted compute :) - it's just not the typical reason people seek out a fellowship for.
2022-08-10 16:11:37 @AmandaAskell Compute is usually managed by the lab and so the PI raises for that typically. In CS, people tend to seek fellowship funding in order to cover living costs and tuition independently of their PI, so they can have some more flexibility to work on and explore their own projects.
2022-08-10 16:02:57 @Javi_Rando Thanks for sharing!
2022-08-09 17:04:56 @iddux I am an international PhD student in the US and this isn't true? Funding can come from a variety of sources, including fellowships, though there are various nationality restrictions depending on the source.
2022-08-09 16:31:16 @SpeenDoctor_ Depends on the culture of the discipline and the department - of course your school provides some funding, but it's common to apply for fellowships in CS in order to gain some research independence. Many international CS phd students feel like their only options are from tech cos.
2022-08-09 16:26:21 @tdietterich This is a reference to the fact that many funding options from foundations are either phd fellowships that are exclusive to US citizens / permanent residents or are geared towards practitioners and so only provide funding for the short term (ie. one or two years).
2022-08-09 16:24:27 To clarify: I'm talking about external fellowship funding options for international students in the U.S. Major government or foundation fellowships that fund multiple years of your phd are exclusive to U.S. citizens or permanent residents, the exception being tech fellowships.
2022-08-09 16:22:23 @tdietterich I mean multi-year funding for the duration of a phd program (ie. longer than one year or two)
2022-08-09 16:00:25 It's frustrating that the only really viable long-term funding options for international CS PhD students in the U.S. seem to be the fellowships coming from tech companies. Makes things especially difficult for anyone trying to do meaningful tech accountability work.
2022-08-08 16:04:04 RT @emilymbender: Why is "AI" the only thing we describe that way? No one says: This airplane has a superhuman flying ability! This jackham…
2022-08-07 13:18:30 RT @AIResponsibly: NEW WORK: Our interdisciplinary #audit of #hiring #AI is out! Watch: https://t.co/OdOcYGRPlT and read our @AIESConf pa…
2022-08-04 06:37:31 RT @jackbandy: It me! #AIES
2022-08-04 06:37:09 RT @shakir_za: Closing day of @AIESConf Amazing set of lightning talks from students covering all topics from fairness, debiasing, to edu…
2022-08-03 09:41:28 RT @NicolasPapernot: 3 weeks to go until the abstract registration deadline for the first IEEE conference on Secure and Trustworthy ML (SaT…
2022-08-03 01:01:20 RT @BenDLaufer: An amazing keynote by @karen_ec_levy at @AIESConf“Automation and surveillance aren’t substitutes. They are complement…
2022-08-02 11:53:02 @MarisaTPP @KerryMackereth @DrEleanorDrage @AIESConf Yes, loved this work as well! Raises so many important questions!
2022-08-02 09:31:03 RT @MarisaTPP: Hiring Tech - interesting study on how companies market their products. They promise objective hiring for a more diverse wor…
2022-08-02 09:30:26 RT @ziebrah: #AIES paper presentation today! this is work done while at @itsArthurAI last summer. we frame it as an "aligning of conversati…
2022-08-02 09:29:03 This conclusion was a shout out to @Abebab's great paper "Algorithmic injustice: a relational ethics approach". So sad she couldn't be here! https://t.co/hdHXEN1q4S
2022-08-02 09:27:10 RT @mjpaulusjr: Great opening talk at #AEIS by @rajiinio on algorithmic accountability and the role of audits as part of the practical shif…
2022-07-23 23:30:48 @zacharylipton @shiorisagawa I think @rtaori13 &
2022-07-23 22:36:07 Yep. I find it hilarious when people try to blame the "data" for harmful outcomes (eg. bias, inaccuracies)...As if the data is some disembodied object and not in fact the direct result of the many choices made by those very engineers and researchers. Just take responsibility! https://t.co/i6tihptAPv
2022-07-23 22:30:29 RT @iAyori: Hosted a spirited panel on this years ago. The number of engineers, data scientists and researchers who felt confident blaming…
2022-07-22 23:50:36 RT @tzushengkuo: Couldn't ask for a better way to wrap up the #DataPerf workshop with a panel on the future of data-centric AI!Thanks to…
2022-07-22 19:43:01 @npparikh yeah, actually just added it to the reading list lol
2022-07-22 19:05:27 @npparikh nice!
2022-07-22 13:07:11 @struthious Thanks for reading &
2022-07-22 12:53:00 @victorveitch @thejonullman yeah, I agree, honestly. Criticism doesn't have to be cruel. If communicated appropriately and kindly, Twitter is fine.
2022-07-22 12:37:31 Details of the workshop can be found here! Grateful for the organizers for creating a space to discuss this topic. https://t.co/CLCxlTB9Jc
2022-07-22 12:20:55 Data should not be considered a given, an afterthought or "someone else's problem" in ML - it's part of what the field needs to be actively thinking about. And I mean beyond hijacking it for optimizing performance - lots of issues beyond that to address!
2022-07-22 12:06:35 I'd be lying if I said this isn't at least a little personal. I'm frustrated - it's been years of discussion on this and ML people will still resist taking basic responsibility for the ethical decisions they make as researchers working with human data. https://t.co/0VklaKDeiO
2022-07-22 12:01:55 Giving a talk later today at the DataPerf workshop @icmlconf. ML researchers often view themselves separately from the eng issues they perceive as the cause of downstream harms - in reality, their decisions, esp when it comes to data, are just as responsible for these problems. https://t.co/uRXwyyZZVz
2022-07-22 11:46:11 So much "AI is unlocking enormous opportunities", "AI’s tremendous potential" for "societal benefits"
2022-07-20 21:28:37 RT @mgahntz: Hi @OpenAI, now that you're rolling out DALL•E at scale, how about a bias/toxicity/harmful content bounty program to go along…
2022-07-18 19:02:50 RT @jackbandy: Anyway if you know of any jobs starting Fall 2023, let me know!Also if you know of any land and/or a house I could have st…
2022-07-18 18:01:21 RT @random_walker: ML is being rapidly adopted in the sciences, but the gnarly problem of data leakage has led to a reproducibility crisis.…
2022-07-18 17:52:24 RT @random_walker: So we’d anticipated a cozy workshop with 30 people and ended up with 1,200 signups in 2 weeks. We’re a bit dazed, but we…
2022-07-18 15:36:48 @mmitchell_ai @huggingface @mkgerchick @_____ozo__ Wow, amazing work!!
2022-07-18 12:39:57 @PolisLSE Source here: https://t.co/LEeB0NoxlN
2022-07-18 12:38:58 Keep getting reminded about the importance of data journalists in algorithmic audit work. For instance, they regularly design &
2022-07-18 10:50:36 RT @rajiinio: So proud of @paula_gradu for all the work she's been doing to bring @WiMLworkshop to @icmlconf this year. If you're atten…
2022-07-18 10:46:36 @jackclarkSF So sorry you went through this! I cannot imagine how difficult it must have been to endure. So happy to hear you had the support of your partner and friends to make it through safely. Our bodies are so important yet so fragile!
2022-07-18 10:34:05 RT @mikarv: No legislation envisaged, just v general "cross-sectoral principles on a non-statutory footing". UK gov continues its trend of…
2022-07-18 10:31:18 RT @OfficeforAI: Establishing a pro-innovation approach to regulating AIA new paper published today outlines the Government’s approach…
2022-07-17 00:55:25 So proud of @paula_gradu for all the work she's been doing to bring @WiMLworkshop to @icmlconf this year. If you're attending, please make sure to check it out! Cannot stress how important it is to have &
2022-07-16 23:35:07 RT @WiMLworkshop: The 3rd WiML UnWorkshop at ICML is just a few days away! All of this is possible thanks to our sponsors @Apple @DeepMindA…
2022-07-16 22:13:50 RT @rosanardila: Important discussion about the reproducibility crisis of ML in science. Particularly when models are later used in medical…
2022-07-14 16:37:21 RT @oliviasolon: Wow. Per this analysis, 30% of a Google dataset intended to categorize emotions in comments (for training AI) mislabeled.…
2022-07-13 03:31:42 @danish_c @Ket_Cherie omg it took me a minute to realize what the confusion was - think Cherie quite reasonably thought this was an alias for a contract worker from Denmark aha
2022-07-13 03:28:22 RT @YJernite: Responsible AI Licenses (RAIL) rely on behavioral use restrictions to provide a legal framework for model developers to restr…
2022-07-13 00:54:06 It's also incredible to see how much the RAIL team has evolved their approach and refined the license over the years. I remember when it was just a draft markup file - now it's a whole organization. Those guys really took in all the feedback &
2022-07-13 00:51:47 Licenses have always struck me as an interesting approach to articulating and possibly enforcing some clear boundaries around what the model should be used for. It's a way for model developers to express their intent and have some legal leverage in the case of misuse.
2022-07-13 00:49:10 It's been interesting to read about Bloom, the open source 176B-parameter large language model that was just released today. Rather than controlling use via an API product, they released the model w/ RAIL (ie. the "Responsible AI License") to minimize misuse: https://t.co/OxpnIv6k4e https://t.co/cSotWeMlqi
2022-07-11 18:21:47 @umangsbhatt @HCRCS @hseas @hima_lakkaraju @MilindTambe_AI you and @hiddenmarkov should hang out!
2022-07-11 17:08:59 RT @sebkrier: Thrilled to announce the @StanfordCyber and @StanfordHAI $71K multi-prize #AIAuditChallenge, designed with @MarietjeSchaake…
2022-07-11 14:32:08 @hiddenmarkov So sorry :( Hope your family is staying safe!
2022-07-09 22:38:52 @IEthics @Aaron_Horowitz Wow, this is an incredible effort. Hope this has been going well!
2022-07-08 19:15:23 @luke_stark lol thank you for your service tho, now we finally have something to cite instead of repeating the same points over and over
2022-07-08 17:52:38 @ruthstarkman Thanks - this is incredibly kind!
2022-07-08 17:48:57 @andrewthesmart @Aaron_Horowitz Yep but also philosophers and lawyers and social scientists not interested in sitting with the technology to learn how it works. The gap goes both ways imo.
2022-07-08 17:46:56 @CGraziul Yep I think it's more about having productive collaborations &
2022-07-08 17:42:32 @Aaron_Horowitz Totally agree, which is why it's helpful to have venues like @FAccTConference &
2022-07-08 17:34:25 Sat through so many meetings like this. It's incredibly frustrating how bad the field is at actual interdisciplinary engagement because AI's problems will require actual dialogue between disciplines to solve, not one group trying to absorb a SparkNotes understanding of the other.
2022-07-08 17:29:46 @mikarv @random_walker The "they are not reading legal scholarship on the topic" point is definitely true, and a more generally true point as it relates to interdisciplinary engagement in CS. When someone complains about social science work or regulation, I always ask "Did you read it?" Spoiler: no.
2022-07-08 17:25:30 When consulted on policy, technologists bring in proposals that are unrealistic or ineffective as it relates to how law actually works, while lawyers come in with a distorted &
2022-07-08 17:15:09 RT @random_walker: We like to complain that lawmakers don’t understand tech, but let’s talk for a minute about technologists who don’t unde…
2022-07-08 17:12:47 @DrZimmermann @UWMadison @LeonieEMSchulte Yay, Annette! Congrats!!
2022-07-08 04:56:08 RT @weidingerlaura: JOB ALERT Very, *very* excited that we're hiring for a new Ethics Research Associate at DeepMind - join our team of…
2022-07-08 04:48:20 @LeonDerczynski This should be reported directly to @icmlconf @NeurIPSConf cc:@shakir_za
2022-07-07 20:50:10 RT @natematias: Dream job alert for data scientists who want to work on consumer protection
2022-07-06 21:24:50 @realCamelCase lol I feel your pain though - any interdisciplinary endeavor always feels like it requires so much more learning
2022-07-06 21:22:38 @realCamelCase not to be that girl but being at the intersection literally means you do both https://t.co/KITHdHgaCH
2022-07-06 20:24:37 RT @kchonyc: “What I cannot review, I do not understand” #NeurIPS2022 14.39%
2022-07-06 14:37:48 RT @EPirkova: We just published a very first introductory guide into the #DSA! If you wonder who or what the law will regulate, how indivi…
2022-07-05 21:53:49 RT @brianavecchione: I'm on the job market!!! Looking for industry or foundations that intersect AI auditing/accountability, their social…
2022-07-01 18:38:29 First they came for... https://t.co/M28wpLHGLe
2022-07-01 18:34:57 @sh_reya Go Shreya!!
2022-07-01 18:33:51 RT @random_walker: There’s a reproducibility crisis brewing in almost every scientific field that has adopted machine learning. On July 28,…
2022-07-01 18:26:41 Ugh, officially losing control of my email if I owe you a reply from the last 2-3 months, I'm so sorry
2022-07-01 06:22:20 RT @FAccTConference: Ok #FAccT22 attendees we want to hear from you! Fill out our survey and help us figure out what worked and what didn't…
2022-06-30 23:50:05 RT @brandonsilverm: I've been offline for most of the last week but thought I'd jump in with a few thoughts about the article below. belo…
2022-06-30 15:08:45 RT @STS_News: Enjoyed this paper, "The Fallacy of AI Functionality," by @rajiinio, @ziebrah, @Aaron_Horowitz, and @aselbst. Too often criti…
2022-06-30 04:01:42 RT @STS_News: I enjoyed this WSJ piece, "Tech Giants Pour Billions Into AI, but Hype Doesn’t Always Match Reality" This excerpt is the hea…
2022-06-30 01:35:35 @realCamelCase ahahahah he needs to be stopped for real
2022-06-30 01:34:36 @sh_reya
2022-06-30 01:33:47 @evijitghosh Nothing will ever compare
2022-06-30 01:31:43 @undersequoias @andrewthesmart @KLdivergence @Aaron_Horowitz
2022-06-29 18:12:49 @Abebab It is always ok to get rest and set boundaries Your health and well-being will always be more important than whatever is being demanded of you!
2022-06-29 17:03:49 RT @NexusOfPrivacy: Algorithmic Justice League audits the auditors (and why it matters from a privacy perspective)Today's Nexus of Privac…
2022-06-29 16:34:24 RT @_KarenHao: I wrote about a topic I’ve been itching to address for some time: how AI PR hype, coupled with increasingly flashy AI-genera…
2022-06-29 09:21:12 @negar_rz So sorry to hear! Hope you feel better soon!
2022-06-29 04:07:19 @KLdivergence @Aaron_Horowitz lol gotta photoshop Luca in there
2022-06-29 02:57:22 @KLdivergence @Aaron_Horowitz What are you talking about? You always look amazing
2022-06-29 02:26:26 RT @timnitGebru: If you missed @DocDre's keynote at @FAccTConference I highly recommend that you catch up. Belief, and our discourses abo…
2022-06-29 02:23:49 @Aaron_Horowitz @KLdivergence We were so happy and carefree... We didn't know what was coming no regrets tho https://t.co/xuFmGC0mhi
2022-06-28 00:31:30 RT @FAccTConference: We hope you had a great time #FAccT22. We will send out a survey about conference experience soon (including about you…
2022-06-27 22:23:52 @KLdivergence Dang hope you're feeling ok
2022-06-27 00:03:46 RT @macfound: Worth checking out, @AJLUnited's first field scan of the algorithmic auditing ecosystem, complete with recommendations for co…
2022-06-26 16:28:27 RT @justinhendrix: This week's @techpolicypress podcast: Peering Inside the Platforms• A conversation with CrowdTangle founder &
2022-06-26 16:25:25 RT @techpolicypress: This week's @techpolicypress podcast: Peering Inside the Platforms• A conversation with CrowdTangle founder &
2022-06-25 23:23:06 @yelenamejova @RERobertson Also check out @natematias's thread, which goes over the audit study's methodology and hints at some policy implications. The researchers conducted thousands of queries from 476 locations over 14+ weeks to discover this and were incredibly thorough.https://t.co/GXXUNZmi1D
2022-06-25 23:16:30 @yelenamejova @RERobertson Whatever your stance, it's problematic to have Google returning CPCs as the closest result for searches for reproductive care - CPCs are *not* healthcare providers, and *not* abortion clinics. It's a dangerously misleading search result. Details here: https://t.co/7ECX5ZMeW6
2022-06-25 23:12:40 @yelenamejova @RERobertson "Crisis pregnancy centers" are NOT healthcare providers. They lure vulnerable women in &
2022-06-25 23:02:58 Now seems like a good time to remind people about this audit study done by @yelenamejova, Tatiana Gracyk &
2022-06-25 04:58:26 Thank you so much Seth for your heroic service and all the energy you brought to the conference (and to karaoke) #FAccT22 would simply not have happened without you! https://t.co/NLO5apasKz
2022-06-25 04:56:38 RT @KLdivergence: Huge thank you to Seth whose efforts to make FAccT happen this year were nothing short of heroic. Legend status
2022-06-25 04:54:40 @__lucab @evijitghosh @seanmmcdonald
2022-06-24 22:41:33 @frobnosticus @seanmmcdonald Ahahha it was a menu item called "world best pizza" and yes it was delicious lool
2022-06-24 22:28:09 I only really did one thing in Korea and that was EAT:@seanmmcdonald https://t.co/3hE46exSYd
2022-06-24 21:54:44 RT @aylin_cim: “Markedness in Visual Semantic AI” w/ @wolferobert3 today #FAccT22The default person in CLIP, the language-vision AI model,…
2022-06-24 21:45:15 @dallascard @FAccTConference Thanks for capturing this, Dallas! And it was lovely meeting you this week
2022-06-24 21:43:42 I had so much fun and learnt more than I could imagine this week! Thank you so much to those that made this happen, those that shared their work, those that commented on ours. Every time I attend this conf, I leave hyped &
2022-06-24 21:32:15 @KLdivergence Thank you for your service - lol now please get some rest
2022-06-24 21:25:55 RT @megyoung0: It is impossible to overstate the triumph that was this year's FAccT conference.THANK YOU and congratulations to @sethlazar…
2022-06-24 21:21:46 RT @schock: I'm on @Marketplace talking about our new study, "Who Audits The Auditors" just launched at #FAccT2022, w/ @jovialjoy @rajiinio…
2022-06-24 04:10:25 @thegautamkamath yeah I was told something about the scale of papers submitted making it difficult to submit each paper to a plagiarism checker
2022-06-24 04:00:02 Jokes aside, plagiarism is actually such a ridiculously prevalent problem in the machine learning community.Conferences should at minimum check for this at submission or prior to publication. https://t.co/BWv7DNs6qc
2022-06-24 02:56:44 RT @RebekahKTromble: Let's be clear. The system proposed to replace CrowdTangle is--so far--terrible. But most importantly, it's inaccessib…
2022-06-24 02:47:28 @_KarenHao @wsisaac @png_marie @shakir_za @FAccTConference Question about dealing with AI hype and @_KarenHao responds by saying researchers with meaningful perspectives should put themselves out there. + about Chinese context: "Researchers are worried about being critiqued in the West but also worried about getting flak from the govt"
2022-06-24 02:41:37 @_KarenHao @wsisaac @png_marie @shakir_za Karen notes on @FAccTConference weaknesses: "There seems to be a lack of Chinese research participants, &
2022-06-24 02:38:47 @_KarenHao @wsisaac @png_marie @shakir_za But clarifies that the government participation in China is "sweet and sour", overreaching in certain ways that are inappropriate, while also providing certain reasonable regulations that have just yet to arrive in Western contexts.
2022-06-24 02:36:49 @_KarenHao @wsisaac @png_marie @shakir_za On China: "There is so much more optimism about what the technology can do for them. Much less skepticism... it's a very different conversation in this context."+"In China, govt is a huge part of the conversation - in the US, we talk about not having enough govt participation."
2022-06-24 02:34:24 @_KarenHao @wsisaac @png_marie @shakir_za @wsisaac notes how journalism is better positioned than even academia to tell these personal stories, and bring some of these observations into mainstream consciousness. More on Karen's reporting here on colonialism &
2022-06-24 02:32:56 @_KarenHao @wsisaac @png_marie @shakir_za She discusses what it meant to sit w/ data labelers in *crisis* in Argentina, who wake up &
2022-06-23 19:37:51 RT @Abebab: A Sociotechnical Audit: Evaluating Police Use of Facial Recognition, Evani Radiya-Dixit #FAccT22audits on:1)Legal standards…
2022-06-23 15:33:02 RT @fborgesius: 'CounterFAccTual: How FAccT Undermines Its Organizing Principles', presented by @bengansky &
2022-06-23 15:30:46 RT @MarthaCzernusze: Tuning in to @AJLUnited’s Who Audits the Auditors at an Internet cafe! #FAccT2022 #FAccT22 https://t.co/1UvJWEtrvy
2022-06-23 15:21:03 @chels_bar Yeah, noticed this as well and reached out to an author, @Aaron_Horowitz about this! I don't think the oversight was malicious - it was mentioned that they actually weren't aware of your paper. Hopefully they can update the text with a citation soon.cc:@KLdivergence, @mmeyer717
2022-06-23 11:29:22 RT @ClarissaRedwine: Holy moly, @megyoung0 gave an amazing talk at #FAccT2022 that had people on their feet https://t.co/M7qQA0TiZJ
2022-06-23 11:20:42 RT @JesseDodge: excellent talk by @mmitchell_ai at @facct on data governance!https://t.co/492r3scuEa https://t.co/4GnNH0tp8o
2022-06-23 06:19:37 RT @rajiinio: @ziebrah @Aaron_Horowitz @aselbst @schock @AJLUnited @jovialjoy @s010n @RosieCampbell + After the events of this week alone,…
2022-06-23 06:07:16 Whoa it's incredible listening to the presentation about this project, which is effectively an implementation of @chels_bar's suggestion in the "Studying Up" paper (https://t.co/QQBBSnoByT), to effectively create a risk assessment of those in power (judges) and not defendants! https://t.co/nhGVAkGwjQ
2022-06-23 05:41:06 RT @KLdivergence: Coming up soon, mikaela meyer’s @mmeyer717 talk in room 202 at #facct22. https://t.co/cRLgC07KT5
2022-06-23 05:40:52 RT @KLdivergence: Risk assessment instruments are used in the criminal justice system to estimate 'the risk a defendant poses to society'.…
2022-06-23 05:39:28 @realCamelCase I'm disappointed your favorite continent was not Africa, though I'm happy for the mention loool
2022-06-23 04:58:35 RT @fborgesius: Really like this panel &
2022-06-23 04:56:54 @ziebrah
2022-06-23 04:42:55 RT @Abebab: The fallacy of AI functionality, @rajiinio &
2022-06-23 01:10:50 Fave #FAccT22 moment #ootd https://t.co/QJW2M0oenR
2022-06-23 01:06:49 @Combsthepoet omg
2022-06-23 01:06:15 Such a good session. Technologists "co-designed a tool - an SMS chat bot - that collected &
2022-06-22 23:36:47 There's already been great discussion about this software from the legal side (see Katherine Kwong's great work in @HarvardJOLT: https://t.co/J5VJQkHrxg)
2022-06-22 23:36:46 Super excited to attend Angela's presentation of a new audit framework for evidentiary statistical software (eg. DNA profiling algos, etc). These models determine the diff between freedom &
2022-06-20 16:34:20 @aylin_cim Sorry to hear hope you feel better soon!
2022-06-19 12:21:34 RT @Borhane_B_H: Folks in the #AIAuditing space, this #FAccT2022 paper by @schock @rajiinio &
2022-06-18 23:15:05 RT @FAccTConference: Our #FAccT CONFERENCE GUIDE is available here: https://t.co/kykwPyeeHq Check it out for useful tips about both the in-…
2022-06-16 18:48:55 @schock Also @schock is so careful with methodology -- I learnt a lot just hanging around and observing the care with which this investigation was approached. Glad to have been able to contribute anything at all
2022-06-16 18:44:24 @Borhane_B_H @schock @jovialjoy Thanks for reading!
2022-06-16 18:44:11 I'm so proud of this work, led by @schock. Tracked down an interdisciplinary cohort of algorithmic audit practitioners to determine what things actually look like on the ground. Unexpected trends were discovered through interviews and survey responses - an essential resource! https://t.co/LtE4fy9HDA
2022-06-16 18:07:36 @tejuafonja @Onyothi So happy to hear, hope she has a great experience at CVPR!
2022-06-16 15:15:04 RT @FAccTConference: REMINDER: our conference platform is live! https://t.co/MWM3ldWDAj Live scheduling begins on June 21 (KST). But please…
2022-06-16 14:12:45 RT @mathver: Google Search, Youtube, Facebook, Instagram, Twitter, TikTok, Microsoft Bing and Linkedin make significant new commitments to…
2022-06-15 20:08:56 @IreneSolaiman @jackclarkSF Lol Irene, so dramatic But seriously, hope you feel better soon, Jack!
2022-06-15 14:43:55 RT @DrZimmermann: Getting ready to to Seoul for @FAccTConference! Can’t wait to FINALLY hang out in person with my amazing Publicity…
2022-06-15 13:40:28 RT @FAccTConference: BOOOOM our conference platform is live! https://t.co/MWM3ldF2IL Live scheduling begins on June 21 (KST). But please he…
2022-06-14 22:58:35 RT @KLdivergence: #FAccT2022 PC Co-Chair here: seeking volunteers to session chair for all sessions on Day 2 of the conference. Responsibi…
2022-06-13 06:21:54 RT @rajiinio: Once we characterize AI as a person, we heap ethical expectations we would normally have of people - to be fair, to explain t…
2022-06-12 03:52:19 This is not a dig on those that work on this - it would just be nice to hear about other things, also.
2022-06-12 03:51:18 I wish we spent even 10% of the time being used to discuss large language models talking about literally anything else.
2022-06-11 21:27:14 @jeffbigham @AiSimonThompson @karpathy Yep, and in the deployment context, these evals are conveniently ignoring the impact of interactions, etc as well. This is one of the things that led me and @beenwrekt to write this on perhaps re-framing to the broader scope of external validity: https://t.co/oJcWQM7KBL
2022-06-11 20:27:48 @annargrs Oh, glad to hear! Excited to check that out :)
2022-06-11 20:27:08 @jeffbigham @AiSimonThompson @karpathy Also there's a big difference between human *performance* (ie. accuracy outcomes) and human *competence* on such benchmarks. For eg. humans are much more robust to distribution shift and this isn't well captured in evaluations at all: https://t.co/f0uNbxvbl1
2022-06-11 20:21:55 @jeffbigham @AiSimonThompson I was shocked to discover from this paper (https://t.co/QfehD0ow3h, where they actually develop a proper human baseline for ImageNet performance) that the former baseline for human performance on ImageNet was...just @karpathy LOL
2022-06-11 20:19:32 @annargrs BIG Bench seems like an incomplete solution -- a sea of under-specified &
2022-06-11 20:16:03 @annargrs There's a separate question to ask about *task design* though, where it's clear that not all datasets are evaluating the same model capabilities &
2022-06-11 20:06:52 @annargrs Interestingly, there's been a lot of recent work revealing that this isn't quite the case - the order of the models' performance is preserved ood (that is, even if we do eval on the same data, the best model is still the best model even on a new dist): https://t.co/CgKkGlTJod
2022-06-11 16:25:29 Many of ML's major benchmarks have already become obsolete. It's getting pretty urgent to re-think ML evaluation. https://t.co/9LkfZow69Z
2022-06-11 16:15:08 @mdekstrand Ah, will check this out! Thanks so much for sharing!
2022-06-11 15:55:52 The difference between statistical inference and prediction is so poorly explained to students in the classroom that the prevalence of these kinds of misconceptions is pretty unsurprising to me. Wondering if there's a good resource that adequately breaks down the distinction. https://t.co/9uth1y7jFN
2022-06-11 01:46:13 RT @yy: Check out "network cards" for documenting metadata (not only stats but also data generation process &
2022-06-10 18:45:53 Will be talking about algorithmic auditing next week! Excited for the conversation, please tune in if interested https://t.co/44Pprr8jAt
2022-06-10 18:45:20 RT @GMFDigital: Register now for our webinar, "Opening the Black Box: Auditing Algorithms For Accountable Tech," happening 6/15 at 11a ET.…
2022-06-10 18:44:31 @emilymbender @timnitGebru I still don't understand why the approach is to replace humans whole-cloth. There's so many sub-tasks that are low stakes &
2022-06-10 16:10:41 @alexhanna @LuizaJarovsky @EmeraldDeLeeuw This website is a good starting point on auditing specifically online platforms: https://t.co/8r3CSbhUcy, tho it's fairly outdated now. @d_metaxa led this more recent effort: https://t.co/DeTKnVLAt2+ @sapiezynski has developed a great syllabus, I'm sure he'd be happy to share.
2022-06-10 12:37:43 @Aaron_Horowitz honestly the most productive thing I've ever tweeted
2022-06-10 12:25:39 RT @n3ijoy: AI as the snake oil of the digital era. Let’s start pointing out the absurdity of many AI-based promises. Thank you @F_Kaltheun…
2022-06-08 15:50:39 RT @jennwvaughan: So excited I can FINALLY share our new work on machine learning practitioners' data documentation perceptions, needs, cha…
2022-06-08 07:40:44 RT @black_in_ai: From now until the 17th of June submit your travel grant applications to attend the Black in AI + Queer in AI Social @icml…
2022-06-07 15:25:49 RT @FAccTConference: Financial support alert we are offering (1) BANDWIDTH grants covering internet access costs
2022-06-03 22:05:18 RT @sethlazar: It has been pretty exhausting for everyone bringing @FAccTConference together but I am so looking forward to it! The program…
2022-06-03 17:38:30 @hipsterelectron lol no, I don't think you mansplained at all -- what you're saying makes sense. I didn't realize it was an actual idea worth implementing. If so inclined, I fully support you building this out somewhere aha
2022-06-03 17:36:47 @KarlTheMartian LOL no one would watch it, but this would give us a clear window into the human condition
2022-06-03 17:35:29 @randtke lol or not - this just happened to me, and I was both thrilled and the most detail oriented and nitpicky I have ever been
2022-06-03 17:19:56 An idea: a conference paper assignment matching system where you get matched to review the papers that cite your work lol
2022-06-02 20:50:09 @random_walker How do you manage this with collaborators in different time zones? I'd like to keep mornings open but find they are the easiest to fill because it's when people are more available to meet :(
2022-06-02 14:20:44 lol instead we do this: https://t.co/3T6XMMWPtr https://t.co/J6YumHENlz
2022-06-02 13:26:01 RT @botherder: We are looking for 5 people working at the intersection of human rights and tech to join our new Digital Forensics Fellowshi…
2022-05-31 16:49:44 RT @lvwerra: Evaluation is one of the most important aspects of ML but today’s evaluation landscape is scattered and undocumented which mak…
2022-05-31 16:41:40 I am so excited to learn from Irene!! https://t.co/qhuDPMye3Q
2022-05-31 16:30:11 This is my go-to cite for why we should actively *vet* AI vendor claims as part of the regulatory process. People are literally out there selling pseudo-science! For this kind of tech, it doesn't even make sense to talk about other problems like fairness. Just throw it away! https://t.co/xwVfS8o1QU
2022-05-31 16:25:45 @irenetrampoline Yayayy!
2022-05-31 16:19:35 @certifiablyrand Also I should probably admit that I was intentionally being a bit cheeky in the OG tweet, aha, and wasn't expecting to be taken as seriously as I was by everyone that replied. Don't mind this outcome though -- ended up being fairly informative for me!
2022-05-31 16:17:45 @certifiablyrand It seems there are many facets to EA, and the community I have the most exposure to in AI is quite forceful about their priorities (ie. "everyone should do x bc it does the *most* good"). Beginning to realize that's not always the case though, so will be thinking more about this.
2022-05-31 16:12:56 @certifiablyrand lol I see your point, and it's well noted. Yeah, I don't think my goal was to criticize the desire to do good, just the notion of optimizing for the "most" good, in a world where it's really hard to just minimize the harm one causes, and do any good whatsoever.
2022-05-31 14:55:37 @certifiablyrand Because I think it's important and interesting work! Like others said, there's nothing wrong with aiming to have positive impact, but the framing of an optimization problem with answers that are meant to apply to what *everyone* "should" be doing is where problems seem to arise.
2022-05-31 00:17:46 @OtterElevator @GiveWell yeah, no worries at all -- totally understood!
2022-05-30 20:22:08 @OtterElevator @GiveWell I'm not annoyed with anyone lol. I think what people here are saying is that there is no neutral objective to optimize. Doing the "most" good = "more saved lives" for you, but others may see it differently. Creating local community support networks, etc. are worthy positive goals
2022-05-30 20:12:09 @AmandaAskell I empathize with this, honestly. I think the problem some have is when the triaging decided upon by one group is imposed on others as the "most" good thing for everyone to be doing. That can become problematic, especially when that group does not adequately represent everyone.
2022-05-30 18:04:07 @IAmSamFin @MarkSendak @timnitGebru @emilymbender It's easier for some rather than others to "believe" in the potential of philanthropy, depending on who they are &
2022-05-30 18:00:52 @IAmSamFin @MarkSendak @timnitGebru @emilymbender Rich people don't pay their taxes, but hoard their wealth to spend as they please instead of contributing to shared resources. Even when "researched", it's an exclusionary and harmful practice -- "Winners Take All" is a critical resource here: https://t.co/042xwNjVzW
2022-05-30 17:57:16 @IAmSamFin @MarkSendak @timnitGebru @emilymbender lol no, you're fine - I think these interactions are quite productive! You've been one of a few to clearly articulate your use for EA in a way I can understand. I think the consolidation of wealth management into the hands of a few is actually *the* main issue with philanthropy
2022-05-30 17:35:55 @MarkSendak @IAmSamFin @timnitGebru @emilymbender Of course anyone can do as they please w/ their money, but there is something unsettling about encouraging / persuading those with these resources to all contribute towards a small number causes, while ignoring the concerns of others that perceive such causes as possibly harmful.
2022-05-30 17:31:10 @MarkSendak @IAmSamFin @timnitGebru @emilymbender (4) is fine imo - my ideal notion of "cost-effectiveness" is determined democratically, esp. in the context of public funds. Little good is done by the assumptions of a few determining how resources should affect the many. When deciding on private funds, things becomes less clear
2022-05-30 17:27:42 @MarkSendak @IAmSamFin @timnitGebru @emilymbender Honestly, I don't know enough about EA to comment on (2) &
2022-05-30 16:35:39 @sadiaokhan Optimizing for doing the most "good" is great but I'm not sure one can do that independently of being aware of not causing new problems.
2022-05-30 16:34:17 @sadiaokhan This is a good point, tho I happen to disagree. I'm coming from a place where some will discount other people's work (ie. climate change) under the pretense of it being less "good" than perhaps what they are working on (ie. AGI), w/o adequately reflecting on the harms they cause.
2022-05-30 16:28:41 @sh_reya Yeah, I feel you. To be fair, I don't think it's just ego, tho. Ego exists but also academia breeds a certain kind of desperate insecurity that makes people act out irrationally &
2022-05-30 16:13:48 @typo_factory @timnitGebru @emilymbender @IAmSamFin Yes, this is a great point! "Winners Take All" by @AnandWrites opened my eyes to a lot of this.
2022-05-30 16:05:26 @timnitGebru @emilymbender @IAmSamFin Hm. The issue for me is the framing of their spending choices as some universally appreciated "good" for the world - it's a fundamental issue in philanthropy, where the harm caused in the acquisition of the funds are discounted, and the perspective of those impacted are dismissed
2022-05-30 15:49:56 @sh_reya Completely agree, and think this is true for all research, actually. Any time "saved" by not *thinking* of the consequences early in the process will be spent many times over *dealing* with the consequences later on.
2022-05-30 15:45:15 @emilymbender I think this depends on who you talk to. Optimizing the allocation of a fixed set of funds makes sense - @IAmSamFin had a decent take on this. But "how do I spend the money I already have?" is very diff from "how do I do the most good?" &
2022-05-30 15:32:14 Why are there even people optimizing to do the "most" good? Gosh, it's hard enough to just live and die unproblematic.
2022-05-30 15:29:27 Wait, what?? No, the answer to "can you do X with deep learning?" is NOT always yes! https://t.co/NQnFBgnuMb
2022-05-25 23:02:44 @dpatil Honestly, they are barely paid enough to teach
2022-05-25 20:12:45 @mmitchell_ai @JesseDodge @kotymg @karlstratos @haldaume3 Congrats, Meg! So well deserved
2022-05-23 15:46:02 RT @timnitGebru: Thank you Time for having me on this list. And I had no idea the one and only @safiyanoble was the one who was going to wr…
2022-05-23 00:15:19 RT @srivoire: was just thinking about this classic Philip Guo article (sorry for the paywall) FOR ABSOLUTELY NO REASON WHATSOEVER https://t…
2022-05-21 16:00:30 @adjiboussodieng @Abebab Lol same here...you are literally the most chill person I know
2022-05-20 16:58:36 @alvarombedoya @FTC @BedoyaFTC @linakhanFTC @RKSlaughterFTC @FTCPhillips @CSWilsonFTC Congrats on the new role! Looking forward to seeing your impact!
2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD
2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…
2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…
2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &
2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…
2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that
2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GPT-x, it seems like many tweaks operationalize some prior knowledge of the language form.
2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.
2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position
2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!
2022-10-24 17:35:47 @pfau @memotv lol depends on how you interpret that quote - some don't see that as a dismissal of linguistics but a note of the lack of self-awareness in NLP that gets heightened once you take linguistics out of the equation (ie. its easier to convince yourself you're making progress w/o them)
2022-10-24 17:33:05 @pfau @memotv @yoavgo I disagree with both of you though :)
2022-10-24 17:31:53 @pfau @memotv Oh sorry, to clarify: didn't mean to imply you were involved in the linguistics spat at all - that was another debate that happened a couple months ago, with I believe @yoavgo or someone else indicating linguistics hadn't done as much for NLP as they thought they did.
2022-10-24 13:57:51 @memotv The latest iteration of this beef started with neuroscientists
2022-10-24 13:55:21 RT @vonekels: We’ll be presenting our Bias in GANs work at @eccvconf on 25/10 at 15:30.One of our findings ~ Truncation commonly used to…
2022-10-24 13:53:03 @iamtrask wait this is incredibly disappointing
2022-10-23 17:48:37 @neuropoetic @pfau @martingoodson Yeah I'm thinking this is just trolling at this point - even the Zador paper that prompted all of this provides this context in its exposition
2022-10-23 17:42:56 @pfau @martingoodson Please read anything on the internet available to you - the facts of Hinton's career aren't even worth debating about: https://t.co/g5UW0wHLxR
2022-10-23 17:35:36 @pfau @SiaAhmadi1 Hinton's degree was in cogsci but computational neuroscience was the subfield he was most active in for a long time. Things like neural nets were initially derived as models of the brain to better understand the brain - he just had suspicions it could also inform info processing
2022-10-23 17:24:41 @pfau @SiaAhmadi1 Still such a bizarre and incorrect take - where do you think that intuition comes from..?
2022-10-23 12:08:40 Annoyed by this latest trend of machine learning researchers insisting that they absolutely did not need anything that came before. It's obvious that <
2022-10-23 11:53:50 @pfau @martingoodson Pretty sure Geoff has read many neuro papers - his background is literally in cogsci? Also, conferences he founded like NeurIPS began focused on attempts at modeling brain behavior using computers to better understand the brain - for a while, there was still a comp neuro track.
2022-10-28 22:48:51 Hi folks - we have actually done this! Those that are interested in joining should email algoaudit.network@gmail.com to get added into the Slack space! Also completely unintentional but for those leaving Twitter at least for a bit, this is one new option to stay in touch aha https://t.co/wBcNpmWDZD
2022-10-27 07:55:40 RT @hutko: The final text of the Digital Services Act was published this morning https://t.co/8pF9Y3pH0f Get used to Regulation (EU) 2022/2…
2022-10-25 16:58:32 RT @sapiezynski: FB algorithms classify race, gender, and age of people in ad images and make delivery decisions based on the predictions.…
2022-10-24 21:55:27 An interesting application of participatory design principles to algorithmic auditing. Much of my motivation for thinking through the audit tooling landscape is lowering the bar of what it takes to execute these audits - and thus widen the scope of who can engage &
2022-10-24 21:38:53 RT @DrLaurenOR: When you study how to make #medical #AI safe, your work needs to reach beyond academia to have an impact.Very excited to…
2022-10-24 17:52:29 @yoavgo @zehavoc @pfau @memotv ok, yeah I'd agree with that
2022-10-24 17:51:45 @yoavgo @zehavoc @pfau @memotv And before the mass take-up of transformers, a lot of modeling strategies involved linguistic concepts - even the associative nature of word embeddings is indicative of that. Even post GPT-x, seems like many tweaks operationalize some prior knowledge of the language form.
2022-10-24 17:49:09 @yoavgo @zehavoc @pfau @memotv I feel like a lot of especially NLU tasks are anchored to pseudo-linguistic concepts (eg. "inference", "entailment", "negation", etc.) - I find it hard to think that the field hasn't impacted NLP to a large degree.
2022-10-24 17:43:41 @yoavgo @pfau @memotv yeah can't recall the exact thread either but that was my understanding of your position
2022-10-24 17:38:18 @pfau @memotv + yes, the "recent trend" point is one I'm now re-evaluating...I didn't notice it until recently but, yes, clearly this has been the situation for a while. To be honest though, I'm disappointed - imo there's no clear advantage to dismissing the participation of other disciplines!
2022-10-24 17:35:47 @pfau @memotv lol depends on how you interpret that quote - some don't see that as a dismissal of linguistics but a note of the lack of self-awareness in NLP that gets heightened once you take linguistics out of the equation (i.e. it's easier to convince yourself you're making progress w/o them)
2022-10-24 17:33:05 @pfau @memotv @yoavgo I disagree with both of you though :)
2022-10-24 17:31:53 @pfau @memotv Oh sorry, to clarify: didn't mean to imply you were involved in the linguistics spat at all - that was another debate that happened a couple months ago, with I believe @yoavgo or someone else indicating linguistics hadn't done as much for NLP as they thought they did.
2022-10-24 13:57:51 @memotv The latest iteration of this beef started with neuroscientists
2022-10-24 13:55:21 RT @vonekels: We’ll be presenting our Bias in GANs work at @eccvconf on 25/10 at 15:30.One of our findings ~ Truncation commonly used to…
2022-10-24 13:53:03 @iamtrask wait this is incredibly disappointing
2022-10-23 17:48:37 @neuropoetic @pfau @martingoodson Yeah I'm thinking this is just trolling at this point - even the Zador paper that prompted all of this provides this context in its exposition
2022-10-23 17:42:56 @pfau @martingoodson Please read anything on the internet available to you - the facts of Hinton's career aren't even worth debating about: https://t.co/g5UW0wHLxR
2022-11-16 18:17:40 RT @A__W______O: We’re about to launch Algorithm Governance Roundup, a monthly newsletter bringing together news, research, upcoming events…
2022-11-16 02:34:56 @jachiam0 Lol I think the only accurate piece of this is "governed like a church" - churches ask for voluntary tithes of about 10% &
2022-11-16 02:23:12 RT @Wenbinters: new newsletter on algorithmic harm and policy out! feat. coverage of our screened and scored in dc report, @OPB's deepdive…
2022-11-16 02:22:43 RT @natematias: Dream job The MWI Fellowship is an opportunity for a research scholar to build and engage with a community of majority w…
2022-11-15 22:41:10 RT @mattgroh: Our paper on how to increase transparency in machine learning applied to dermatology diagnosis has just been published in…
2022-11-17 18:30:49 RT @knightcolumbia: RESERVE A SPOT: On December 12 at 3 pm EST, we're hosting an online panel with @dgrobinson @rajiinio @natematias @rando…
2022-11-20 04:38:51 @jquinonero @emilymbender Hm I don't think they have a responsible AI team anymore: https://t.co/tblnZi5vrL
2022-11-21 09:50:40 @maribethrauh hey @maribethrauh btws this is now a thing - pls send us an email if still interested! https://t.co/2Tr2BQnt78
2022-11-28 18:10:10 RT @JessicaHullman: Call for papers for the 2023 ACM @FAccTConference is now live! https://t.co/cW3o1WFUb8 Abstracts due Jan 30, Papers due…
2022-12-07 20:04:37 @undersequoias @chicanocyborg @Abebab @png_marie - feel like you would love this!
2022-12-08 19:27:14 @jjvincent Agreed with everyone else - well deserved!
2022-12-08 19:26:55 @jjvincent omg, congrats!!
2022-12-08 11:22:04 RT @FAccTConference: Submit your excellent work to #FAccT23! Our CfP is available here: https://t.co/iTWjkOt47f Abstract deadline: Jan 3…