Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Recognized by 27 AI Experts
The Expert's latest posts:
2025-01-14 23:18:00 RT @atroyn: they turned the agi into a todo list app :( https://t.co/IYc4ziezHZ
2024-12-31 22:06:54 @rajiinio @gdb
2024-12-30 15:39:30 And there I am complaining that lots of people seem to think that artificial intelligence began in 2012! (But, @rajiinio, typo in this tweet: That first hearing with @gdb was in *2016*, as in the tweet you quoted.) https://t.co/ONYSCyg3kw
2024-12-21 17:37:53 RT @rm_rafailov: Despite all the twitter hype there still hasn't been public proof that the "reasoning" models have any emergence. I.e. is…
2024-12-20 04:48:46 RT @pika_labs: Beyond Scene Ingredients, Pika 2.0 also includes upgrades to our text-to-video and image-to-video models, delivering higher…
2024-12-19 14:31:50 RT @random_walker: New AI Snake Oil essay: Last month the AI industry's narrative on scaling suddenly flipped. This has left people outside…
2024-12-14 17:05:10 RT @pika_labs: Today we launched our Pika 2.0 model. Superior text alignment. Stunning visuals. And Scene Ingredients that allow you to up…
2024-12-11 16:56:26 20+ years late, I see Gosse Bouma @gosseb wrote a lovely review of @AveryAndrews &
2024-12-06 15:52:09 RT @prakashkagitha: Very prominent @chrmanning has advised at least 86 PhD students directly or indirectly and have academic "descendants"…
2024-12-05 16:35:46 @prakashkagitha Thanks! But @percyliang really needs to complete his @openreviewnet profile!
2024-12-05 03:53:25 @lateinteraction @overleaf You can do a git checkout of your overleaf repositories!
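A minimal sketch of what that tip looks like in practice (my illustration, not part of the tweet): Overleaf exposes each project over a git bridge at git.overleaf.com, so "checkout" here amounts to cloning the project. The project ID below is hypothetical; in Overleaf it is shown under Menu → Git.

```python
# Sketch: clone an Overleaf project through its git bridge.
# PROJECT_ID is hypothetical -- substitute the ID from your project's Menu -> Git.
import subprocess

PROJECT_ID = "0123456789abcdef01234567"

subprocess.run(
    ["git", "clone", f"https://git.overleaf.com/{PROJECT_ID}"],
    check=True,  # raise CalledProcessError if the clone fails (e.g., auth needed)
)
```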
2024-12-03 00:21:57 RT @antgoldbloom: My Kaggle experience suggests more than 75% of the machine learning models in production or written up in academic papers…
2024-12-03 00:21:52 RT @antgoldbloom: I use Marimo every day and love it! I hardly ever write throwaway notebooks anymore. Everything becomes a re-usable mini…
2024-11-21 23:48:26 RT @natolambert: I've spent the last two years scouring all available resources on RLHF specifically and post training broadly. Today, with…
2024-11-21 21:21:02 Looking for synthetic web interaction data to improve your web agent? NNetnav can auto-generate complex demonstrations for any website. We’ve released NNetnav-6k, 6000 demonstrations, which can tune LLama-8B to 10.3% success on WebArena, SoTA for open weight LLM/synthetic data. https://t.co/SUOv6I0fOS
2024-11-21 16:38:13 Stanford RegLab is hiring predoctoral Research Fellows! RegLab has been a wonderful place for work spanning AI ML/NLP/policy to real world impact, successfully working with government organizations. It’s a place to apply technical skills to new areas. https://t.co/fPIBacFFu1
2024-11-20 01:49:11 RT @akshaykagrawal: @howdataworks @chrmanning @themylesfiles @marimo_io @antgoldbloom @shyammani_ @aixventureshq I took natural language pr…
2024-11-20 01:28:45 RT @akshaykagrawal: My co-founder @themylesfiles and I have started Marimo Inc. to keep building the @marimo_io notebook and other Python d…
2024-11-19 15:00:55 @realohtweets @fchollet Fair. But do you think JAX is 10x better than PyTorch? Maybe same-same.
2024-11-19 04:59:16 RT @landay: Great story featuring Jay McClelland's seminal contributions to Neural Networks. "From Brain to Machine: The Unexpected Journey…
2024-11-18 18:01:55 Here are the stats I was able to get about the status of my accounts on each platform and how much engagement my 3 #EMNLP2024 posts got. The accounts are all of different ages and with different investments, some stats you can’t get, and I forgot to record followers before the… https://t.co/yfChoIHs2e
2024-11-18 18:01:54 I did an unscientific, uncontrolled experiment for #EMNLP2024—details in . I posted my conference &
2024-11-18 17:08:34 @fchollet Julia vs. Python?
2024-11-15 14:21:50 RT @davidhogg111: Wait you mean- the solution to expensive housing is to just build more housing???
2024-11-14 17:22:08 RT @oanaolt: Most startups these days use DPO, tonight prof @chrmanning is hosting a discussion at the AIX office in South Park with @shaun…
2024-11-14 01:44:29 RT @aixventureshq: AIX Ventures Investing Partner @chrmanning, along with Eric Mitchell, and Archit Sharma, are at our HQ tonight to talk a…
2024-11-11 18:21:56 Papers at #EMNLP2024 #3 A counter-example to the frequently adopted mech interp linear representation hypothesis: Recurrent Neural Networks Learn … Non-Linear Representations Fri Nov 15 BlackboxNLP 2024 poster https://t.co/csKUpERrsX CC @robert_csordas @ChrisGPotts https://t.co/k0RXsnq3tQ
2024-11-11 17:21:20 @BlackHackOfDoom Thx!
2024-11-10 23:44:39 Papers at #EMNLP2024 #2 MSCAW-coref: Efficient and high performing coreference, extending CAW-coref multilingually and to singleton mentions. Fri Nov 15 CRAC workshop 14:10-14:30 https://t.co/altH7RyyLB In Stanza: https://t.co/hJ9b7Kq8pA CC @KarelDoostrlnck @ChrisGPotts https://t.co/7DlBGVHaWG
2024-11-10 00:32:04 Papers at #EMNLP2024 #1 Statistical Uncertainty in Word Embeddings: GloVe-V Neural models, from word vectors through transformers, use point estimate representations. They can have large variances, which often loom large in CSS applications. Tue Nov 12 15:15-15:30 Flagler https://t.co/ROEMbdjOmd
2024-11-09 23:05:19 I’ll be discussing DPO (Direct Preference Optimization) with @ericmitchellai and @archit_sharma97 at @aixventureshq in San Francisco this Wednesday evening Nov 13. Interested in going? See: https://t.co/yJ004apquS
2024-11-09 01:46:00 RT @mrm8488: New paper w/ interesting insights for #finetuning and #gpupoor folks Full Fine-Tuning vs. LoRA for Large Language Models…
2024-11-07 19:43:57 RT @aixventureshq: “There will be, and should be, some regulation of AI. But it’s unclear in the U.S. whether this will significantly affec…
2024-11-01 19:03:39 RT @antgoldbloom: The overlooked GenAI use case: cleaning, processing, and analyzing data. https://t.co/klQjXiyODl Job post data tell us…
2024-10-31 23:11:46 RT @jinpa1345: @celinehenne I finished my Philosophy Ph.D. in 1994, and never went on the academic market - decided during grad school that…
2024-10-31 21:19:06 RT @tom_doerr: That sounds much better than normal Python notebooks https://t.co/liHeHCgb9L
2024-10-31 16:35:24 RT @aixventureshq: Congratulations to AIX Ventures portfolio company @atmo_ai and @alevy for making @TIME’s 200 Best Innovations of 2024! A…
2024-10-25 20:35:35 RT @aixventureshq: Our Investing Partners are present founders, practicing faculty and active members of the AI community, pushing the fiel…
2024-10-25 02:56:08 RT @binarybits: I wrote about a new book by @random_walker and @sayashk that includes my new favorite case against existential risk from AI…
2024-10-25 02:25:13 RT @npparikh: This is what it costs to go to Stanford if your parents make $80K a year household income and have $30K in savings. $5K a yea…
2024-10-25 02:24:12 RT @npparikh: No, there isn’t. This person just does not exist. You can get into college if you want, period. You could also take a random…
2024-10-23 20:17:40 RT @KQED: Santa Clara County’s effort to find racially restrictive covenants in the county’s property deed records has been accelerated by…
2024-10-23 18:42:53 Congratulations to @petewarden on Moonshine ASR! Simple, fast, accurate speech recognition on low-powered devices can enable the “every device speaks and understands language” world that our children want/expect. https://t.co/44OTplbgni
2024-10-18 19:58:50 RT @aixventureshq: We’re still energized coming away from our annual general meeting last week! The AIX Ventures community came together to…
2024-10-18 19:57:55 RT @oanaolt: Are Semantic Layers the treasure map for LLMs? I asked myself this question as I was riding in a waymo back from the first us…
2024-10-18 19:50:37 @David_desJ
2024-10-18 14:22:12 @David_desJ True, though I think you undervalue the impact their existence still has on minoritized populations. Because of this, California (AB1466) is requiring their removal. So the impact is that either your county is spending millions of $$ or they need better technology like this.
2024-10-18 14:04:09 @sawa7_fil_bilad No. They were ruled unenforceable in 1948, but were still sometimes used as a signaling device. They stopped appearing after the Fair Housing Act of 1968 introduced a nationwide prohibition. Of course, that doesn’t mean that individual prejudice necessarily disappeared.
2024-10-17 21:07:57 @michaelbolden Same for the faculty lead on this, Dan Ho! See: https://t.co/iZJ8glVXyj
2024-10-17 20:11:35 @AiSimonThompson Not effectively enough
2024-10-17 20:05:36 Blogpost about the work: https://t.co/iZJ8glVXyj Also, the model is available for others to use: https://t.co/rh4vxCU7nR CC: @PeterHndrsn
2024-10-17 19:56:52 @caribouwireless A large part of why LLMs help is that they can handle corrupted text amazingly well: Many of the documents are close to 100 years old and were poorly scanned, so OCR is very noisy, which badly impacts most classifiers but not modern LLMs.
2024-10-17 19:51:08 @caribouwireless Yes. Please read the paper. https://t.co/Mvtzus5oI6 We could test more ML methods, but I think it is clear that LLMs are more accurate. This is a needle-in-a-haystack problem: With any significant rate of false positives, the amount of work for people becomes onerous.
2024-10-15 00:28:39 @ShumingHu @ylecun @dgreller @we_arent_here @Chris_Armstrong @WSJ Agreed!
2024-10-14 23:53:07 @ylecun @dgreller @we_arent_here @Chris_Armstrong @WSJ I think the “high bandwidth” argument is mainly invalid. What is important is how much accessible, high-level information is conveyed. After all, if I give you a read speech file rather than a text document, it’s a lot higher bandwidth, but not much additional high-level… https://t.co/eFWQel8fp2
2024-10-14 20:23:57 @erikbryn @us21c You’d think Americans would be a little less unhappy with the state of things!
2024-10-14 20:09:50 @dgreller @we_arent_here @Chris_Armstrong @ylecun @WSJ Yes, see my earlier tweet! I also think @ylecun is (wrongly) too dismissive of language in intelligence. His thinking is just too vision-focused, and he ignores how language can describe more of the world with adjustable detail level. https://t.co/AvcbZRdVxg
2024-10-14 20:02:46 @arnicas I’m serious. It’s not my politics, but I think it’s a societal problem when most all universities are so much to the left. It’s part of why college education has lost Republican support. And think long term: it’s not like Stanford, Carnegie, Rockefeller, Rice, … were on the left
2024-10-13 21:24:47 RT @JamesFallows: I watched Coachella speech last night, on a Trump captive network. Media points: 1) Appears to be zero mainstream follow…
2024-10-13 21:14:18 It’s great to see (right wing) billionaires backing a new university! Top private universities are the jewel of America! Some have religious roots but most came from huge gifts by the rich. With population growth and college widespread, we could use more! https://t.co/ii62UZwlra
2024-10-13 21:02:02 Cats “have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning. None of these qualities are present in today’s ‘frontier’ AIs.”—@ylecun “This AI Pioneer Thinks AI Is Dumber Than a Cat” @WSJ https://t.co/1xdbtVKDAu
2024-10-12 21:07:17 RT @vamsi_nimmala: Time has proven this tweet right. @chrmanning said it best—excellent at NLP, but still far from true understanding.…
2024-10-12 01:43:37 RT @martin_casado: I don't recall another time in CS history where a result like this is obvious to the point of banality to one group. And…
2024-10-10 20:04:03 @AveryAndrews @adelegoldberg1 @tlonic That ChatGPT answer seems impressively good to me!
2024-10-09 14:47:32 @zhksh @jhuclsp @COLM_conf Yes!
2024-10-08 11:58:59 RT @re5e1f: (1/n) Want to train a better video generation model? You need a better video captioner first! Check out AuroraCap! Trained on…
2024-10-05 02:12:37 RT @atroyn: hello, chroma is hiring. since july we've doubled our engineering team, and continue to grow. our team is small, formidable,…
2024-10-03 15:24:52 RT @emollick: Not what massaging the data means. (I have been randomly animating scientific diagrams with Pika) https://t.co/dXrj6bx5ZS
2024-10-03 15:17:33 RT @AliciaCurth: When Double Descent &
2024-10-03 15:13:04 RT @VoyageAI: Thrilled to share that we've closed $28M in funding, led by @CRV, with continued support from @wing_vc and @saranormous. Also…
2024-10-02 15:00:53 RT @pika_labs: Sry, we forgot our password. PIKA 1.5 IS HERE. With more realistic movement, big screen shots, and mind-blowing Pikaffects…
2024-10-02 04:07:36 What a lot of Democrats still haven’t come to terms with – you need to build housing. Luckily some understand this. https://t.co/Db0EIx3Gy3
2024-10-01 04:11:27 RT @pitdesi: California bans legacy admissions at private universities Will mostly impact USC and Stanford https://t.co/iyjAWfdOFT
2024-09-30 16:47:39 RT @rajiinio: This is huge -- and such a serious testament to the importance of data documentation &
2024-09-30 02:43:28 RT @MelMitchell1: @shiringhaffary Sadly, the term “AI safety” has been co-opted. Many of the bill’s opponents are advocates of creating AI…
2024-09-30 00:50:53 RT @rao2z: The argument is never that all published research is great--but that the field (as against your private group) can only build o…
2024-09-30 00:40:57 RT @soumithchintala: Lifecycle of SB1047: * First-draft written by a niche special-interest stakeholder * Draft publicly socialized too qui…
2024-09-30 00:37:31 @rao2z I think you are too generous in evaluation of your peers. While there are certainly many who do do this work (thank you all!), I think, if you look, you would find that there are also many who are not doing it.
2024-09-29 15:53:03 It’s great @GavinNewsom signed AB 2013 (by @jacquiirwin), requiring disclosure of a summary of the data used to build Generative AI systems. There is huge uncertainty about GenAI. Legislation that brings sunlight is great
2024-09-28 17:32:47 RT @BillSun_AI: We had a really full house of great guests yesterday at AGI House private dinner @agihouse_org
2024-09-26 15:40:34 @beenwrekt @srush_nlp Regardless of whether you see a double descent loss bump, isn’t the fundamental thing to teach that with massively overparameterized and very flexible function fitting models, you should expect better and better performance with scale via interpolation/averaging, not overfitting?
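A toy illustration of the point in the tweet above (my sketch, with arbitrary seeds, features, and target function): minimum-norm least-squares fits over random features typically get worse near the interpolation threshold (width ≈ n_train) and then better again as the model becomes massively overparameterized.

```python
# Sketch: test error of min-norm least-squares with random cosine features.
import numpy as np

rng = np.random.default_rng(0)
n_train = 40
x_tr = rng.uniform(-1, 1, n_train)
x_te = rng.uniform(-1, 1, 500)

def target(x):
    return np.sin(4 * x)

y_tr = target(x_tr) + 0.1 * rng.standard_normal(n_train)

for width in [10, 40, 80, 400, 4000]:
    w = np.random.default_rng(1).standard_normal(width) * 8.0  # random frequencies
    Phi_tr = np.cos(np.outer(x_tr, w))  # random-feature design matrices
    Phi_te = np.cos(np.outer(x_te, w))
    # lstsq returns the minimum-norm solution once width > n_train,
    # i.e. the interpolating/averaging regime the tweet describes
    coef, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
    mse = np.mean((Phi_te @ coef - target(x_te)) ** 2)
    print(f"width={width:5d}  test MSE={mse:.3f}")
```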
2024-09-19 19:55:10 It’s a positive development that European voices are saying that it’s time to build! https://t.co/F4KOSMhQWc
2024-09-16 16:41:19 RT @pfau: I can't think of another branch of engineering that has a distinct discipline called "safety" that is separate from "capabilities…
2024-09-05 00:59:47 RT @aixventureshq: AIX Partner @chrmanning says looking behind the “veil of language” will be key to giving us “intelligent machines and in…
2024-08-26 15:30:18 RT @jamescham: @chrmanning I think that we’re in a awkward time in the history of business schools. A lot of the ideas from the 2000s are i…
2024-08-24 18:52:32 Wrong time to be an MBA, right time to be an AI builder? https://t.co/EDoVy7oPAV
2024-08-24 03:02:44 RT @jamescham: Feels like all the cool kids are in Hayes Valley but many of the smart hard working ones are hanging out next to Backyard Br…
2024-08-19 23:34:52 @rao2z Yes. I deliberately chose ACL venue papers, but all of these people also published papers at AAAI/IJCAI in the 1970s/1980s — which was basically my point about ACL and AI having been connected just as much as ACL and linguistics, for a very long time.
2024-08-19 21:11:44 @rao2z Not coincidental, @rao2z, since James Allen was top of mind following your talk!—I guess you could regard this as a subtweet. But there was a whole group of people involved: https://t.co/5jlnKgxiLm https://t.co/3Nmm3KXppM https://t.co/QlLsc8BxIj https://t.co/Cy3hpsWWi2
2024-08-18 22:28:09 @yoavgo I agree SCiL has a distinctive focus—and that’s great!—but I reject that this focus delimits the scope of CL. Computational linguistics is defined much more broadly—including on the ACL’s webpages (even if they haven’t been updated for a bit!): https://t.co/RNDeBO0IOb https://t.co/9lvKbK6K0G
2024-08-18 18:53:19 @tallinzen Thank you, @tallinzen!!! I also had a go at capturing the huge progress from LLMs and thoughts on them capturing meaning for a more general audience in Dædalus: https://t.co/H9BGtzCXOu (though it was written a year before the release of ChatGPT, and so is starting to age).
2024-08-18 18:29:17 RT @paulg: I think there will be a second wave of social networks that are explicitly designed to suppress trolls instead of treating this…
2024-08-16 18:46:56 @thedansimonson Come on! It’s your choice whether regarding CL/NLP as synonyms implies intersection or union—I was advocating for union. I do see papers at ACL venues on CL/Ling questions that are not NLP engineering, e.g. https://t.co/M7eWm1W7EE https://t.co/RRFFM2Nc8Q https://t.co/XrgKgm7qyl
2024-08-16 16:22:14 @mmbronstein @arturovllacanas @timnitGebru It looks like access is restricted to all the course materials and so non-Oxford people can only read 2 paragraphs of course overview. So it’s a little hard to draw any conclusions either way.
2024-08-16 00:39:10 RT @martin_casado: Huge! Multiple democratic California members of congress have submitted a letter to @GavinNewsom against SB 1047! Signa…
2024-08-15 13:57:03 RT @xuanalogue: Wish I'd cited this 1979 paper when working on inverse planning from natural language!! In CLIPS, we combined STRIPS-style…
2024-08-15 05:05:44 As NLP/CL has grown, people also gather in focused venues, for MT, dialog, etc. But we should not see the existence of these venues or SCIL as shrinking the domain of CL/NLP. CL/NLP—as synonyms—should be a big multidisciplinary field, reaching out into both linguistics and AI.
2024-08-15 05:05:43 The history of ACL has productively mixed people and viewpoints for decades. Last century there were papers that were just linguistics, on syntactic formalisms like TAG or HPSG. This century still has computational linguistic papers aiming at linguistic questions.
2024-08-15 05:05:42 Due to differences in background/interests, the topic distribution of people using each term is likely to differ: A CL person is more likely to use or address linguistic ideas
2024-08-14 01:37:15 RT @rm_rafailov: Super excited to announce what we have been working on in the last six months - Agent Q is out now! This is a framework fo…
2024-08-12 05:28:55 @priyankaonair @PierreColombo6 @allen_ai Oh, sorry, I misinterpreted “legal” as the LLM using data legally not being focused on law…. Okay, not much else I know of. A couple of Legal-BERTs is basically it.
2024-08-12 02:58:25 @priyankaonair @PierreColombo6 .@allen_ai’s OLMo and Dolma don’t count for you?
2024-08-12 01:17:10 @prasanth_lade Yeah, I still don’t quite understand it. Maybe planted by a guest?
2024-08-11 13:25:13 One of these is not like the others #oddoneout https://t.co/kQ856SuhOR
2024-08-11 13:12:34 It’s great to be back in Lands With Passionfruit #ACL2024 https://t.co/0xJkT4s6Cg
2024-08-05 00:26:00 It’s the day of the event at last! Happy to be at the @IJCAIconf @STAIWorkshop on Sustainability and Artificial Intelligence. First thing I learned today: Jeju Province/Island is positioning itself to be carbon-free with 100% renewable energy by 2030. https://t.co/SwSe0x3S3T https://t.co/KKs6TYVW8g
2024-07-31 00:07:46 Big congrats to Kaylee McKeown on also successfully defending her gold medal in Paris, in the women’s 100m backstroke final! Such finishing momentum! https://t.co/WcZGLsEF1n
2024-07-29 20:21:23 First and second place to Australia in the women’s 200m freestyle final – congratulations to Mollie O'Callaghan on her first individual gold medal! https://t.co/wGKoBYYiXz
2024-07-27 20:46:22 And gold (for the 4th time in a row) in the women’s 4x100m freestyle relay team of O’Callaghan, Jack, McKeon, and Harris! https://t.co/1o5cLI938P
2024-07-27 20:38:41 Great start for the Australian women’s swim team with Arnie Titmus again getting gold in the 400m freestyle https://t.co/oK7mbeLAkT
2024-07-27 13:19:29 RT @random_walker: In a new AI Snake Oil essay by me and @sayashk, we do a deep dive into AI existential risk probability estimates. We fin…
2024-07-23 23:14:54 RT @antgoldbloom: .@benhamner and I have been quietly working on our newco, Sumble. We are getting meaningful traction so we are hiring gre…
2024-07-23 15:46:38 And, exciting to see their success with Direct Preference Optimization: “We also explored on-policy algorithms such as PPO, but found that DPO required less compute and performed better, especially on instruction following benchmarks like IFEval.” https://t.co/mZnkH1WdAk https://t.co/mrmrNzU8a4
2024-07-23 15:46:37 The performance of the new Llama 3.1 LLMs is superb, completely impressive vs the leading closed, proprietary LLMs https://t.co/OFWyRRnqoc
2024-07-23 15:46:36 Congrats @AIatMeta, on the awesome Llama 3.1 LLMs! Many contribute to open LLMs: @Alibaba_Qwen, @MistralAI, @01AI_Yi, @huggingface, @allen_ai, …. The open source vs open weight distinction remains important. But @AIatMeta has led in developing an amazing open LLM ecosystem! https://t.co/rHbaihJsnn
2024-07-17 03:14:02 But, unfortunately, a complete “wiring diagram” of a brain still gives almost no idea of how it works for learning, memory, reacting, … “We don’t even understand the [302 neuron] brain of a worm” scoffs Christof Koch, Chief Scientist of @AllenInstitute https://t.co/k77J2bzWmd https://t.co/YhpoFmNWn4
2024-07-12 00:12:28 RT @stephenroller: @chrmanning I kinda wish groups did a *secret* held out test set and told no one about it and then 6 months later releas…
2024-07-12 00:11:35 @giffmana Agree!
2024-07-11 20:08:40 @giffmana Divide both your dev and test into halves, so you have dev, dev2, test, and verification test. Train can all stay together.
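Concretely, the split he describes might look like this (a hypothetical sketch; the example lists stand in for real data):

```python
# Sketch: halve dev and test to get dev, dev2, test, and a verification test set.
import random

def halve(examples, seed):
    """Shuffle deterministically, then split into two halves."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

dev_examples = [f"dev_{i}" for i in range(1000)]    # stand-ins for real data
test_examples = [f"test_{i}" for i in range(1000)]

dev, dev2 = halve(dev_examples, seed=13)
test, verification_test = halve(test_examples, seed=42)
# train stays together; verification_test is touched as rarely as possible
```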
2024-07-11 14:49:55 RT @ysu_nlp: As I reflect on my experience hosting benchmarks in the past year (MMMU, Mind2Web, TravelPlanner, etc.), I agree with @chrmann…
2024-07-10 19:36:49 ³A private verification set lets the dataset authors detect with high confidence people over-fitting the public test set or LLM training contaminated with test set examples. However, it isn’t the official test set, so, for general use, there are none of the problems from fn. 1.
2024-07-10 19:36:48 I agree: private test sets sound good but end up a fail.¹ How should you make a dataset? Define train, dev &
2024-07-09 19:33:41 RT @atroyn: today i'm pleased to present chroma's next technical report, our evaluation of the impact of chunking strategies on retrieval p…
2024-07-05 23:56:59 @Xaberius9 No, I think lots are, e.g., from 2010 … although I admit lots also think it is good because it has always been so (just like the fairly broken system in the US): https://t.co/RsLqBuATcM
2024-07-05 20:39:05 @Xaberius9 Most states are too large for US senators to be plausible local members. (Indeed, with the US freezing the number of congresspeople, really even congressional districts are getting too large: the UK has a larger house of commons despite having only 20% as many people.)
2024-07-05 15:02:31 @Xaberius9 There are advantages to proportional representation, but there are also advantages to people having a local district member representing them. New Zealand much more recently introduced a pretty clever system that melds the two together (but unfortunately FPTP for local districts)
2024-07-05 13:36:46 Some former colonies worked out how to fix this … (checks on web) … slightly more than a century ago https://t.co/MEOnS43rle https://t.co/vnZqmwYrDM
2024-06-30 17:35:45 RT @Abebab: kinda mad how the so called godfathers of AI managed to convince seemingly smart people within AI field &
2024-06-29 16:30:38 @RohanAwhad @saraha_swe @jeremyphoward @HanchungLee @Samhanknr You might also look at some of the materials at https://t.co/8ZWHhuaFsl, which is more up-to-date on pre-neural IR methods, including emphasis on BM25, WAND, more modern evaluation methods, etc.
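For readers who only know neural retrieval, here is a self-contained toy BM25 scorer (my sketch, using the common Robertson/Spärck Jones formulation; the corpus and the k1/b values are illustrative, and nothing here is taken from the linked materials):

```python
# Sketch: BM25 scoring over a toy whitespace-tokenized corpus.
import math
from collections import Counter

corpus = [doc.split() for doc in [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats living together",
]]
N = len(corpus)
avgdl = sum(len(d) for d in corpus) / N
df = Counter(term for doc in corpus for term in set(doc))  # document frequencies

def bm25(query, doc, k1=1.5, b=0.75):
    """Score one tokenized document against a whitespace-tokenized query."""
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * tf[term] * (k1 + 1) / norm
    return score

ranked = sorted(corpus, key=lambda d: bm25("cat mat", d), reverse=True)
print(ranked[0])  # ['the', 'cat', 'sat', 'on', 'the', 'mat'] scores highest
```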
2024-06-23 21:25:27 @jobergum My take: ① Credible search/IR voices always promote evals &
2024-06-23 21:24:43 My take: ① Credible search/IR voices always promote evals &
2024-06-19 03:53:52 RT @minilek: The University of California Office of the President just emailed high schools across CA with an update on high school math an…
2023-05-22 15:16:39 RT @jamescham: @chrmanning @erikbryn The terrific thing about this generation of AI is that it is so accessible to try, build, and learn. T…
2023-05-21 21:02:13 @boazbaraktcs @percyliang There’s still time!
2023-05-21 20:49:03 “Everybody I talk to first goes to, ‘Oh, how can generative A.I. replace this thing that humans are doing?’ I wish people would think about what new things could be done now that were never done before. That is where most of the value is.”—@erikbryn https://t.co/2eJFU61Agj https://t.co/JbveewaYyV
2023-05-20 01:24:22 RT @s_batzoglou: The voices against open AI and for premature regulation are too loud and unrepresentative. The majority of AI researchers…
2023-04-22 16:07:29 RT @russellwald: ATTN: U.S. Congressional staff!! Starting next week you can apply to the @StanfordHAI AI Boot Camp for Congressional Sta…
2023-04-12 17:06:53 RT @i3invest: In episode 82 of the [i3] Podcast, we look at #machinelearning and #AI with Kaggle Founder @antgoldbloom and Stanford CS Prof…
2023-04-08 07:29:46 RT @jobergum: Vector DB Chroma raises $18M at a $75M valuation. Uses hnswlib for vector search. https://t.co/eDdPCtiXRZ https://t.co/4Z0h…
2023-04-08 05:59:53 @jerome_massot I think @lilyjclifford can address this better than me but, despite individual variation, much work in sociolinguistics shows there are shared speech traits of communities, and LGBTQ people, at least in a region, form one sort of community. Perhaps see: https://t.co/NQ4YNVA3DR
2023-04-05 21:41:36 RT @lilyjclifford: Today speech synthesis takes a leap forward. https://t.co/kCrFGy8GVA is now live. Rime has the world's most expansi…
2023-04-02 12:02:18 RT @xriskology: Lots of people are talking about Eliezer Yudkowsky of the Machine Intelligence Research Institute right now. So I thought I…
2023-03-28 03:38:00 RT @russellwald: What could government be doing regarding tech that it isn’t? "Educate themselves. I’m not saying they need to go learn how…
2023-03-24 04:41:03 @roger_p_levy @weGotlieb @xsway_ Pushing the debate in Linguistic Inquiry! Good on you!
2023-03-23 14:51:21 RT @minilek: Now the front page article on https://t.co/FhQv6QmTvx. Quote 1: "Ironically, despite reviews and blog posts pointing out Boal…
2023-03-21 15:18:39 Web content tricks that search engines “learned” to be wise to in the 2000s decade (well, were programmed to not work by human beings) are still alive and well with 2020s LLMs https://t.co/fkhaVcKFfD
2023-03-20 19:57:49 @ruthstarkman @davidchalmers42 Still a nice intro to the problems of natural language understanding and it well represents the viewpoint I try to capture in tweet #4 in the original reply sequence. But in 2016, I just don’t say that we are or should be building huge LMs to make progress on #NLProc.
2023-03-20 14:39:36 @jonas_eschmann @RandomlyWalking @davidchalmers42 But it’s also based on the idea of building a practical compression tool that does lossless compression and runs on 1 CPU core using at most 10GB of memory, which ends up completely inconsistent in direction with modern work in neural networks.
2023-03-20 14:18:27 @Ollmer @davidchalmers42 It was an awesome post! But it just doesn’t suggest that building these RNNs ever bigger is a promising path to near-HLAI. And looking at the jumbled phrases of the Wikipedia page generated by the character-level LSTM in that post you can see why people weren’t yet thinking that.
2023-03-20 04:50:17 @michalwols @davidchalmers42 @ilyasut Yeah, but, in the early days of @OpenAI, perhaps influenced by @DeepMind’s work with Atari games, etc., the big push was RL (OpenAI Gym, Dota 2, etc.) and work on language was not seen as key but very marginal. The original GPT was clearly a small effort by a young researcher.
2023-03-19 21:02:16 @davidchalmers42 Apple’s Knowledge Navigator video was prescient but draws from symbolic AI/knowledge integration
2023-03-19 20:58:50 @davidchalmers42 Examples that others cite in replies are to me unpersuasive, and mainly too recent
2023-03-19 20:57:22 @davidchalmers42 There was work arguing that NLP is an AI-complete task but that conversely text accumulates the knowledge to address it, especially in the 2010s (Halevy et al., Etzioni, me), but we didn’t see just building a big LM as the best way to unlock the knowledge 4/
2023-03-19 20:56:39 @davidchalmers42 Work going back to at least the 1980s (or sort of to Shannon in the 1950s) promotes statistical models of text prediction and their usefulness for NLP tasks. We said that these models learned knowledge of the world, but we really didn’t see them as a clear path to near-HLAI. 3/
2023-03-19 20:55:19 @davidchalmers42 There was definitely a more general claim that exponentially increasing data and computation will lead to AGI, but work such as Kurzweil, etc. lacked any specificity as to mechanism (and in particular didn’t suggest LMs). 2/
2023-03-19 20:54:51 @davidchalmers42 For statements from 2017 or earlier, I’m voting that the right answer is “nobody” (which may be what you’re wondering with your question, I guess). I think one has to distinguish a somewhat precise version of “saw LLMs coming” from more general claims: 1/
2023-03-16 20:17:44 @sleepinyourhat @shaily99 @andriy_mulyar @srush_nlp @mdredze @ChrisGPotts Yeah, I agree, but a lot of the time there is no feedback loop from the company lab back to the people in academia, which tends to be unfortunate.
2023-03-16 02:49:19 @azpoliak @dmimno @srush_nlp @ChrisGPotts @andriy_mulyar @sleepinyourhat @mdredze @tomgoldsteincs @PMinervini @maria_antoniak @mellymeldubs Yeah, I think there are still tons of examples like these of academic researchers doing cool original work. Slightly further afield, where do you think diffusion models were developed?
2023-03-11 20:25:24 RT @luke_d_mutju: Decades of work has gone into this incredible piece of work. A huge amount of work by Yapa and Kartiya to keep #Warlpiri…
2023-03-11 20:25:16 RT @Redkangaroobook: #diaryofabookseller the Warlpiri Encyclopaedic Dictionary was launched this week at Yuendumu. I wasn’t there but it wa…
2023-03-11 20:23:40 RT @chanseypaechMLA: The Warlpiri Encyclopaedic Dictionary is here. It’s the distillation of over 60 years of work and the contribution of…
2023-03-11 20:22:12 .. and people might also like Scott Aaronson’s blog post – not that he’s really someone with linguistic qualifications, but, hey, he’s a smart guy who can see changes happening in the world https://t.co/FkGh32I5rK
2023-03-10 18:04:00 @zyf I actually agree with nearly all of what you write
2023-03-09 16:53:31 As a Professor of Linguistics myself, I find it a little sad that someone who while young was a profound innovator in linguistics and more is now conservatively trying to block exciting new approaches. For more detailed critiques, I recommend my colleagues @adelegoldberg1 and https://t.co/2Lr9MBQBBX
2023-03-09 16:53:30 This is truly an opinion piece. Not even a cursory attempt is made to check easily refutable claims (“they may well predict, incorrectly”). Melodramatic claims of inadequacy are made not of specific current models but any possible machine learning approach https://t.co/Dd7rplkG6p
2023-03-06 19:54:09 RT @Redkangaroobook: Werte (hello) book lovers! We've got the Warlpiri Encyclopaedic Dictionary back in stock. This incredible language res…
2023-03-06 16:57:16 @random_walker I suspect it’s just that their shortened URL expander is down
2023-03-06 16:24:44 March 2023 AI vibes: Fanciful visions of AGI emergence sweep through the @OpenAI office
2023-03-06 04:34:49 @jerome_massot Jérôme, answer: I’m Christine Bruderlin prod manager Aboriginal Studies Press. You can buy straight from us. Email asp@aiatsis.gov.au with Warlpiri Dict in subject line. Send yr address &
2023-03-05 16:43:07 RT @gruber: The official Twitter iPad app is so bad it doesn’t even support any keyboard shortcuts at all. Not even ⌘N for New Tweet. Quite…
2023-02-27 18:08:48 @BlancheMinerva I should maybe also mention that we’re still using even older GPUs than those mentioned above. I’m not sure what she’s doing, but as I write, @rose_e_wang is running some job over a bunch of 4xTitan Xp 12GB machines
2023-02-27 17:58:37 @BlancheMinerva I’d cover what you can do with 80, 40, 24, and 16GB GPUs. Like, the tables in this article were good (even though it didn’t do the 80GB case): https://t.co/XlePNDhSYu And also like it both current and older generation.
2023-02-27 17:56:36 @BlancheMinerva Smaller configs include: 8xA6000, 4xA5000, 8xRTX3090, 4xTitan RTX. It is common for academia to have consumer GPUs. Single GPU options are important both because of ratios of students:GPUs and because single node requires less technical expertise (even though it’s gotten easier)
2023-02-27 17:51:52 @BlancheMinerva Hi Stella, I think most university labs have motley collections of hardware and students variously end up dealing with many configs. They can’t all use the A100s at once! Certainly for us, we have 5.2.3 and 5.2.2 but students would also commonly fine tune on smaller machines …
2023-02-19 15:34:29 RT @sama: the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly
2023-02-19 15:33:03 RT @MelMitchell1: This discourse gets dumber and dumber. Soon all news will be journalists asking LLMs what they think about stories by o…
2023-02-15 15:17:37 RT @atroyn: announcing the chroma embedding database. it's the easiest and best way to work with embeddings in your a.i. app. https://t.co/…
2023-02-15 14:33:51 @landay I guess my original tweet suggests that we’ve found out that, in the hands of users, these models haven’t been that dangerous so far—not like TNT or even autonomous vehicles.
2023-02-15 03:51:30 @npparikh @rajiinio Hi Deb, I also agree with your tweet. ChatGPT in high-stakes uses would appall me—e.g., giving medical advice. But we can also contrast the predictions of danger &
2023-02-15 01:20:32 @MadamePratolung @marcslove I like collegiality too, honest. Lots of people talk about “the AGI crowd”, including @random_walker, @pfau, @yoavgo, and @rodneyabrooks. It’s not so marked. https://t.co/z4l5AYNm88 https://t.co/geMQ9EbRXx https://t.co/GcMNp18uFb https://t.co/hLyMC1aOG9
2023-02-15 00:14:43 @RoseAJacob Yes, “but” sets up a contrast, which relates 2 statements, but they need not be a priori related, conflicting or contradictory, eg: “There’s a war raging in Ukraine, but I’m making coffee”
2023-02-14 22:59:34 @landay Indeed, there is no conflict. But that suggests a generative AI model is a tool like … a hammer. You can do a lot of damage with a hammer—they should carry a warning against striking living creatures with them—but mostly they are a great tool with all sorts of productive uses.
2023-02-13 22:11:50 @mmitchell_ai p.s. With all the usual qualifications about how people belong to multiple groups and groups have central and peripheral members, I nevertheless feel that there is a defined enough AI Ethics group that it is okay to refer to it, just like you might refer to “the JAX crowd”, etc.
2023-02-13 22:07:09 @mmitchell_ai Hi Meg, there’s no disliking. I like most everyone in the AI Ethics Crowd, in particular, you! And I believe the AI Ethics crowd has done much important, impactful work. Nevertheless, I do stand by my original post, and think the large gap it points to undermines credibility.
2023-02-13 21:00:23 RT @aixventureshq: We’re hosting an AI event for founders in the Bay Area on 3/11 Details 1⃣ AIX inc @antgoldbloom, @chrmanning, @pabbeel…
2023-02-13 17:02:13 Early 2023 vibes: The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable &
2023-02-09 15:18:01 I think I can imagine the bureaucrats’ brainstorming session—no bad ideas!—on their new residential neighborhoods lacking character at which someone suggested “I know, we could organize ideation sessions for the neighborhoods to choose some traditions!” https://t.co/cxAG2k5a2X
2023-02-09 04:40:59 @deliprao I think that last issue was the main reason for the stock losing $100B in value. “We still have an internal language model that we’re still not going to let anyone else try out, but it’ll be called ‘Bard’ when we do release it” just didn’t quite cut it as a major announcement.
2023-02-08 19:27:32 RT @aixventureshq: Check out the Australian Financial Review’s article on the most cited NLP researcher, AIX Ventures Investing Partner @ch…
2023-02-06 20:33:38 RT @atroyn: announcing stable attribution - a tool which lets anyone find the human creators behind a.i generated images. https://t.co/eHE…
2023-02-06 00:50:05 RT @RishiBommasani: @anmarasovic When I started in NLP research, I knew no Python/PyTorch/ML and never had done research. @clairecardie to…
2023-02-05 16:09:08 @yoavgo @ylecun It’s an interesting reversion in terminology! When interviewing for my Stanford job in 1999, precisely what the Stanford old guard wanted to know was my take on reaching human level AI, as opposed to the simple ML of the 1990s. https://t.co/I3Zx9yd1Jh https://t.co/fL44s3lTt0
2023-02-02 15:27:08 @sama I seem to remember that someone suggested “Foundation Models” as a good name
2023-02-02 15:04:31 @roydanroy Isn’t that unclear? If the context is answering a question or obeying an instruction, can’t an LM learn to condition on that and answer much as the RLHF humans are doing? This seems to require only that most conversations in the original data follow Grice’s Cooperative Principle.
2023-02-02 03:40:30 @peterjansen_ai I repeatedly feel that Martha Palmer hasn’t got the credit she deserves (CC @BoulderNLP)
2023-02-02 03:25:44 RT @random_walker: Yeah no. Most of us outside a certain clique don't insist on putting plucked-out-of-the-air probabilities on eminently u…
2023-02-02 03:23:18 I’d thought this swing to self-supervised learning was meant to reduce the need for data annotation? https://t.co/VPwtNEZHnY
2023-01-14 18:23:54 RT @bradproctor: As Elon tweets about transparency, developers and users of @tweetbot, @Twitterrific, and @echofon sit waiting 24 hours lat…
2023-01-13 17:34:54 RT @UvA_Amsterdam: The first honorary doctorate goes to scientist Christopher Manning @chrmanning @Stanford, for his unprecedented contribu…
2023-01-13 17:34:44 RT @UvA_Amsterdam: The Dies Natalis has begun! After a long time, once again with a full cortège. Watch along live via https://t.co/s0HCXYGlVo or v…
2023-01-13 15:41:37 RT @DLDConference: "AI should be inspired by human intelligence. The human brain is the single most sophisticated device in the world. #AI…
2023-01-11 18:02:43 RT @rajiinio: Tesla has been pushing "driver liability" for years to shift the blame &
2023-01-11 01:27:40 Here’s some tweeting of the talks by me and @MarietjeSchaake on “Humane AI” in Amsterdam on Monday by @Mental_Elf (thx!) https://t.co/OOl4mBL8dO
2023-01-09 21:46:35 @MarietjeSchaake @UvA_Amsterdam @wzuidema Thanks, great seeing you today, and looking forward to your making it back to @StanfordHAI, @MarietjeSchaake!
2023-01-09 21:44:56 RT @MarietjeSchaake: Wonderful to see @chrmanning in Amsterdam where he will receive an honorary doctorate at my alma mater @UvA_Amste…
2023-01-03 22:13:09 RT @wzuidema: Final call for participation: join us this Friday and Saturday in Amsterdam for a workshop celebrating the honorary doctorate…
2023-01-03 16:51:59 @ScaleTechScott
2023-01-02 21:05:35 @DeeKanejiya That was true in the 2000s, and even for most of the 2010s, but I don’t think we have seen good evidence of it from explicit human engineering in the 2020s, only from models learning linguistic structure themselves, as in https://t.co/iEg7L3BZp9
2023-01-02 20:58:27 @AiSimonThompson @NIST That’s why it “can be an appropriate response”. But, mostly, when people ask a search engine, e.g., when did ABBA’s Dancing Queen come out, they just want the answer.
2023-01-02 20:49:03 @yuvalmarton @marian_nmt I sort of agree, but maybe only soft inductive biases
2023-01-02 19:32:51 RT @marian_nmt: @yuvalmarton @chrmanning The relation of NLP and linguistics seems to be one where having a good background understand of l…
2023-01-02 19:22:34 @yuvalmarton @yoavgo Oops, I meant to write “descriptive” not “observational”. Where is that edit button again?
2023-01-02 19:21:53 @yuvalmarton @yoavgo One definitely needs a place to start! Delineating the Chomsky hierarchy was a huge contribution—though recognizing mildly context sensitive languages came from outside the Chomskyan core. But having “perfect” CFGs only gives observational adequacy! See: https://t.co/MQLBH1bv9l
2023-01-02 18:50:29 Reviewing older work—here’s Ellen Voorhees @NIST in CIKM 2001. 20 years on, we’re almost there! “Traditional text retrieval returns a ranked list of documents. While it can be an appropriate response, frequently it is not. Usually it would be better to provide the answer itself.”
2023-01-02 18:23:45 This does show something fascinating! But not that linguists’ knowledge of language is “bunk”. Rather, what has mainly been a descriptive science—despite what Chomsky claims!—hasn’t provided the necessary insights for engineering systems that acquire and understand language use. https://t.co/p0Tka9WhZz
2023-01-02 18:11:42 RT @lipmaneth: 1/ Open source businesses are fascinating. Here's a quick history on how @huggingface built a $2B co by giving away its so…
2022-12-30 00:29:39 RT @petewarden: Wish you could control your TV or speakers with gestures? Check out the new online demo of our Gesture Sensor and then come…
2022-12-28 23:51:19 @AmandaAskell Yeah, that could actually be right!
2022-12-28 04:47:52 Wondering how regarding someone with an MLitt in Philosophy as a CS/Math techbro is going to go down
2022-12-21 18:20:12 Some of the paper is tied to a now-dated specific context. But promoting the task, emphasizing pragmatics and speaker meaning, and incorporating world knowledge and uncertainty were all good moves! For a modern take, see Ellie Pavlick @Brown_NLP’s paper: https://t.co/3bOJZBLXG1
2022-12-21 18:20:11 I wrote this paper 17 years ago—December break 2005—advocating for a new NLU task introduced by Ido Dagan @biunlp: He called it Recognizing Textual Entailment
2022-12-21 00:50:18 RT @russellwald: Closing out the year strong @StanfordHAI w/2 important policy pubs on fed AI policy. A HAI/RegLab white paper finds the f…
2022-12-20 22:17:05 @yoavgo @csabaveres @cohenrap This isn’t “Language forces us to commit to an idealized expression". The author of the last one could have used “Indicates that execution of the code should be terminated if it ever loses its validity”. Human minds like metaphors, so indeed, LLMs need to learn to interpret them!
2022-12-20 21:25:10 @yoavgo @cohenrap I think some of my code wants to be killed too
2022-12-20 21:02:56 @yoavgo @cohenrap Similarly with “understands”. E.g., compiler people often talk about what a compiler understands: “the compiler understand it is a pointer”, “the compiler understands the lifetime of every variable” [all real textual examples in these two tweets! Corpus linguistics, yay!!!]
2022-12-20 21:00:01 @yoavgo @cohenrap It’s not a differing fact about English. Metaphorical sense extension is common in language. People talk about the behavior of all sorts of inanimate things: “the behavior of the sea”, “the behavior of the acoustic distortion product”, “the behavior of the producer price index”
2022-12-17 21:56:36 RT @MeigimKriol: La Katherrain la kaunsil offs dei garram nyuwan sain gada Kriol! Wani yumob reken? Im gudwan? https://t.co/ZcAfWkV9G9
2022-12-17 20:41:42 RT @Abebab: longtermism might be one of the most influential ideologies that few people outside of elite universities &
2022-12-17 20:35:50 @ChrisGPotts @KarelDoostrlnck Entre les tours de Bruges et Gand…
2022-12-14 15:33:51 RT @timnitGebru: Read this by @KaluluAnthony of https://t.co/bf2uXxretK. "EA is even worse than traditional philanthropy in the way it ex…
2022-12-10 20:07:57 RT @cHHillee: Eager mode was what made PyTorch successful. So why did we feel the need to depart from eager mode in PyTorch 2.0? Answer: i…
2022-12-10 19:27:40 RT @jaredlcm: @chrmanning While this paper of mine isn't generated by a silicon language model I do think it captures the kind of balanced…
2022-12-10 19:23:19 @NatalyFoucault @americanacad A philosophical question—it’s not clear! It may only be possible to learn textual meanings of further words due to grounded experience of some
2022-12-09 03:50:09 @KordingLab That I really talked about the right topic at that last CIFAR LMB meeting?!?
2022-12-08 22:41:41 @emilymbender Anyone taking things out of context⁈ Looking at the talk—from an industry venue: https://t.co/UMlczZZJR3 after a detour on what self-supervised learning is, exactly where it goes is that big models give large GLUE task gains but at the cost of hugely more compute/electricity… https://t.co/kWESAvHXkl
2022-11-15 16:08:46 @FelixHill84 @gchrupala There is indeed a lot of excellent work there! However, the OP had asked for “breakthroughs in theoretical linguistics”, and while what counts as “theoretical” is a value judgment, my gut sense was that Lakoff et al. definitely wouldn’t count.
2022-11-15 16:05:28 @kadarakos @gchrupala Yes, indeed.
2022-11-15 02:34:00 @FelixHill84 @gchrupala Wait, I thought I wrote a tweet each on work on formal semantics &
2022-11-14 19:17:22 @gchrupala OK fair enough. I hope my list was useful. In reverse, of the linguistics in NLP, till 1957, you can have PoS, morphology, phrase structure, dependency and unconstrained transformations, if you want them. But most all else came later, whether HMMs, PCFGs or other linguistic ideas
2022-11-14 18:25:04 @gchrupala - Not only did sociolinguistics overall grow up in the 60s, but the development of formal probabilistic models of variation (variable rules) and code-switching also dates to the 60s. (I’ll stop here, but one could add even more areas of linguistics.)
2022-11-14 18:22:23 @gchrupala - Phonology: Everything from Chomsky and Halle’s Sound Pattern of English through many useful concepts in metrical phonology, autosegmental phonology, and optimality theory or harmonic grammar came out from the 60s through the 90s
2022-11-14 18:20:48 @gchrupala - Syntax: A lot of very good foundation material about how languages work, how to describe them and their cross-linguistic patterning was developed in the 60s, 70s, and 80s: X’ theory, the phenomena originally called raising/equi, grammatical relation changing operations
2022-11-14 18:20:07 @gchrupala - Pragmatics: This only emerged as a field with theoretical content in the 1960s, starting with Grice’s work and the 70s through the 2000s saw the development of formal accounts of pragmatic phenomena such as implicatures and presuppositions (Karttunen, Potts, etc.)
2022-11-14 18:16:11 @gchrupala - Semantics: Basically all work on formal semantics is after 1957, including Montague’s Proper Treatment of Quantification in Ordinary English and all the subsequent development of formal semantics by Partee and others.
2022-11-14 18:14:56 @gchrupala - Mathematical linguistics: Most of the work developing properties of formal languages was done after Chomsky started things off in the mid 50s, including the work by CS people like Hopcroft, Aho, Ullman
2022-11-14 18:12:28 @gchrupala Even if you believe that mainstream theoretical linguistics has lost the plot in the 21st century, I think the presupposition of this question is quite absurd. Almost nothing was known about theoretical linguistics in 1957! So there are examples in every direction you might look:
2022-11-14 16:59:21 Human-in-the-loop reinforcement learning—DaVinci instruct—may be the most impactful 2022 development in foundation models. What can we achieve by reinventing the AI design process to start from people’s needs? Watch tomorrow’s @StanfordHAI conference 9 PST https://t.co/CCbFEnqklS https://t.co/XyFAAElmCJ
2022-11-13 15:51:38 RT @henrycooke: New Zealand is at 99% renewable electricity generation right now. You read that right. https://t.co/7Wzf9HGlvz
2022-11-13 15:50:31 RT @NC_Renic: Academics undaunted by the news that being unverified will mean that no one reads your tweets. We’ve been training for this…
2022-11-12 03:41:33 RT @petewarden: We need help turning our fugly, airport-security-unfriendly, held-together-with-Blu-Tack prototypes into clean looking sale…
2022-11-11 15:39:46 @sethlazar @robreich @mehran_sahami @landay @drfeifei Not yet….
2022-11-05 21:22:05 @yoavgo I like that one too. But you were asking for ones from 25 years ago.
2022-11-05 21:20:48 @zehavoc @yoavgo @wzuidema I agree it’s a good example of an attempt to carefully replicate a model, but I honestly couldn’t imagine asking a student to read it these days. Full of in-the-weeds details of modeling methods that no one in their right mind should have to care about in 2022.
2022-11-05 21:17:57 @yoavgo It’s a great example of a complex generative model from the probabilistic era
2022-11-05 21:08:56 @yoavgo @zehavoc @wzuidema No
2022-11-04 05:31:09 @zehavoc @stanfordnlp Thanks! Funny coincidence: I was just learning about Arabizi today from Mona Diab. Unfortunately, I didn’t know about your paper and I guess we were more searching for refs on “traditional” creoles rather than this “new” creole.
2022-11-03 00:36:59 RT @sundarpichai: 1/ From today's AI@ event: we announced our Imagen text-to-image model is coming soon to AI Test Kitchen. And for the 1st…
2022-11-02 22:22:59 RT @StanfordHAI: This year’s HAI Fall Conference on Nov. 15 will focus on design principles, values, and guidelines to ensure that AI syste…
2022-11-02 20:12:43 I’ve found a gaggle of twitter spam accounts because one of their tweets matches Stanford NER: The weird way they write “Gard(i)ner” word-breaks the “ner”. Like they say, you’d think coordinated inauthentic behavior like this would be easily detectable! https://t.co/qaX3Bg1ORl
2022-11-02 19:52:15 Good thread! https://t.co/BGKYh8lGgP
2022-11-02 16:25:37 @yoavgo Collins 1997 Three Generative, Lexicalized Models for Statistical Parsing!
2022-11-02 16:24:41 @wzuidema @yoavgo Yeah, that was a good one!
2022-10-23 02:01:14 @ChrisGPotts @RishiBommasani @tallinzen @NYUDataScience @cocoweixu @david__jurgens @dirk_hovy @percyliang @jurafsky @clairecardie It feels a little unfair to be comparing a posed picture to an out-of-focus video capture, but there’s no denying @RishiBommasani’s shirt is a bold color!
2022-10-20 16:57:06 RT @StanfordAILab: Last night, 50 years to the day after the pioneering Intergalactic SpaceWar Olympics first video game contest (https://t…
2022-10-20 16:56:16 RT @petewarden: Launching @UsefulSensors! https://t.co/WUcGnF8Mky
2022-10-19 19:42:16 RT @petewarden: I'm finally able to talk about what I've been up to for the last six months! https://t.co/4qibCjUCIT
2022-10-19 15:29:54 RT @michiyasunaga: Excited to share our #NeurIPS2022 paper “DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining”, w/ the amazin…
2022-10-18 16:28:03 RT @lilyjclifford: i'm announcing the company we've been building. it's called rime. here's a tiny demo of what truly generative text-to-…
2022-10-18 15:26:56 RT @landay: Write up on our upcoming fall conference: “AI in the Loop: Humans Must Remain in Charge” https://t.co/5POjonMDtY
2022-10-18 14:55:19 RT @antgoldbloom: Tonight I attended a @StabilityAI event where they previewed generative animation. On the way home, l passed a @Cruise ca…
2022-10-18 14:36:57 RT @jugander: Looking forward to this @StanfordHAI workshop Nov 15 on "AI-in-the-loop": https://t.co/ZfeHcD4R8Q And resurfacing @ChenhaoTan…
2022-10-18 03:09:23 @RPuggaardRode @gwalkden @JennekevdWal @Lg_on_the_Move As a person whose most-cited first-author publication is “Why most published research findings are false”, Ioannidis now seems to be working to stack the deck to support his prior conclusions!
2022-10-18 02:50:42 RT @shaneguML: Attended @aixventureshq’s first community event a week ago. I entered deep learning in 2012 after ImageNet. I saw the craze bac…
2022-10-13 14:35:32 RT @StanfordHAI: Rethink the loop: At our fall conference this November, we challenge the phrase “humans in the loop” and advocate for keep…
2022-10-12 02:33:58 RT @robreich: GREAT fellowship opportunity at @StanfordPACS &
2022-10-10 04:10:22 @geeticka1 @rayruizhiliao Yes, indeed!
2022-10-07 15:09:56 RT @curtlanglotz: Some thoughts on the industry approach to AI in radiology (to date). A thread:@stanfordAIMI @StanfordHAI
2022-10-06 18:40:57 @michael_nielsen Thanks! And published versions of my Human Language Understanding &
2022-10-05 16:41:33 RT @HannaHajishirzi: My alma mater, Sharif University of Technology, Iran's premier university, was under siege yesterday. Many students we…
2022-10-03 03:45:11 RT @SoloGen: If you're a professor or a student in STEM in a Western country, you probably know someone from the Sharif University of Techn…
2022-09-28 04:54:16 @3scorciav @CVPR @jcniebles Thx!
2022-09-28 03:37:26 @3scorciav @CVPR for 8k subs, you have 16K+4K+2K=22K authors, and using everyone experienced you’d have 4K+2K=6K reviewers, and, if the number of experienced PhDs is similar to the number of postdocs, then 1/2 the reviews are being done by PhD students. Pretty similar to the reality!
2022-09-28 03:34:56 @3scorciav @CVPR Also, I don’t think the stats are so surprising when you think them through. A simplistic rough model of academic CVPR papers might be that each has 4 authors: 2 young inexperienced students, 1 experienced PhD/postdoc who is on 2 papers and a faculty PI who is on 4 papers. Then…
2022-09-28 03:31:31 @3scorciav @CVPR I’m actually not against the idea that people should be required to give back by reviewing. However, I think a successful system needs a careful blend of carrots, sticks, and flexibility, and at least the passed CVPR motion used the second without any of the first or the third.
2022-09-28 03:00:56 @yoavartzi Great visualization, but, really, these aren’t Likert scores at all!
2022-09-27 20:22:35 RT @BigCodeProject: print("Hello world! ")Excited to announce the BigCode project led by @ServiceNowRSRCH and @huggingface! In the spiri…
2022-09-25 15:35:37 RT @sethlazar: Time to retweet this! Another big year for junior recruitment in philosophy (esp ethics) and AI. There still hasn't been eno…
2022-09-23 16:15:22 @jonilaserson Oh, interesting
2022-09-23 16:05:38 Coincidentally coming out right after this tweet thread, there’s a new review of Multimodal Biomedical AI in @NatureMedicine, which has a very nice paragraph covering ConVIRT. Many thanks @jn_acosta, @GuidoFalconeMD, @pranavrajpurkar, @EricTopol! https://t.co/mr2hTroXKU https://t.co/1vKnJSiX30
2022-09-22 15:56:21 @EugeneVinitsky @davidchalmers42 @ylecun @GoogleAI That it is more effective in representing a world than models trained multimodally like (the smaller) CLIP, in particular better modeling object relationships such as spatial position, indicates that surprisingly good world models can be trained from text alone. 3/3 https://t.co/2TGhcJOpdk
2022-09-22 15:53:40 @EugeneVinitsky @davidchalmers42 @ylecun The best current evidence of LLMs considerably modeling world situations is @GoogleAI’s Imagen model https://t.co/udBdkKCCWP — which provides the surprising result that a frozen pre-trained LLM is very effective as a representation from which to generate images by diffusion. 2/3
2022-09-22 15:51:47 @EugeneVinitsky @davidchalmers42 @ylecun As often in tech development, we’re at a stage where LLMs have a glass-half-full language model. They certainly can’t do reasoning or assembly problems of the sort @ylecun mentions but there is strong evidence that current LLMs have more of a world model than you might think! 1/3
2022-09-22 01:21:27 “Fasten the nut and washer back in place with your wrench” — Well, I must object that it’s a “spanner” in my dialect! (And outside North America in general.) https://t.co/8aVfGUlsoj
2022-09-21 20:17:34 RT @gruber: The best thing about this copy-paste permission alert that is driving everyone nuts in iOS 16: the apostrophe in “Don’t Allow P…
2022-09-21 20:16:03 @panabee This problem has largely been solved: Papers get put on arXiv. But paper acceptances are still important both for student recognition and careers, and for wider paper dissemination.
2022-09-21 19:57:01 @kdexd Yeah, there’s randomness and luck, and, in the final reckoning, science advances either way. So, it’s best to be philosophical. But it can be very hard to take when you’re a student doing your finest work.
2022-09-21 17:36:11 Meanwhile, @ESL_Sarah, @gdm3000 &
2022-09-21 17:36:08 Meanwhile, colleagues at Stanford further extended and improved ConVIRT, leading to the approaches GLoRIA by Shih-Cheng Huang, @syeung10 et al. at ICCV2021 and CheXzero by Ekin Tiu, @pranavrajpurkar et al. in Nature Biomedical Engineering 2022 https://t.co/SeqUWVGR5F
2022-09-21 17:36:05 And that led to a lot of other vision work exploiting paired text and images to do contrastive learning of visual representations, such as the ALIGN model from Chao Jia et al. at Google (ICML 2021) https://t.co/4tlyUDwwQx
2022-09-21 17:36:04 Luckily, some people read the paper and liked the idea! @AlecRad &
2022-09-21 17:36:02 However, sometimes you don’t get lucky with conference reviewing—even when at a highly privileged institution. We couldn’t interest reviewers at ICLR2020 or ICCV2021. I think the fact that we showed gains in radiology (x-rays) not general vision seemed to dampen interest….
2022-09-21 17:36:01 The paper (Contrastive Learning of Medical Visual Representations from Paired Images and Text, @yuhaozhangx @hjian42 Yasuhide Miura @chrmanning &
2022-09-21 17:36:00 I’m happy to share the published version of our ConVIRT algorithm, appearing in #MLHC2022 (PMLR 182). In 2020, this was a pioneering work in contrastive learning of perception by using naturally occurring paired text. Unfortunately, things took a winding path from there. https://t.co/CUwAZftKlV
2022-09-19 22:27:34 @roger_p_levy @zehavoc (Having now read the OED entry:) I guess either sentiment could be intended! But the negative sense 1 really does seem to dominate (my original negative sentiment definition is all NOAD gives). Or maybe externalism has just “broken out” again with no sentiment intended?
2022-09-19 22:14:53 @roger_p_levy Oh! That’s certainly not how I took it! Negative connotations really do seem to dominate (in both English and French – see the sub-thread with @zehavoc).
2022-09-19 19:50:19 @zehavoc Isn’t it probably from the medical sense where it’s the re-emergence with visible symptoms of something bad like a rash or malaria, so it’s necessarily negative?
2022-09-19 17:44:17 @zehavoc Sounds plausible. I’ve now looked through results 11–30. All in French. Many more non-medical usages like you suggest: la recrudescence de la croyance dans la sorcellerie [the resurgence of belief in witchcraft], Face à la recrudescence de ces escroqueries [faced with the resurgence of these scams], …
2022-09-19 17:37:26 RT @PaulaBShannon: @chrmanning This usage made my day — in fact, my month. I was tickled to see your third thread as I had dropped my phon…
2022-09-19 17:36:57 @zehavoc Searching on Google in English, looking at the News tab so I don’t mainly get dictionaries, 8 of top 10 results are in French—pretty unusual. Many are the medical use—including 1 in English. The other English use: song title of a Francophone Canadian. So, yeah, rare in English.
2022-09-16 00:09:14 @David_desJ I’m happy to agree with this. I’m not one of the people who believes that we’re 10 years away from AGI (whatever exactly that means). But we have still seen a very substantial step – larger than any that occurred in the preceding 40 years.
2022-09-15 14:19:36 It must take a very particular kind of blindness to not be able to see that we have made substantial steps—indeed, amazing progress—towards AI over the last decade … https://t.co/eAIwUloRM4
2022-09-15 02:22:31 @kchonyc @srchvrs @xiaochang @deliprao @earnmyturns Sure, but he (or Markov) didn’t use the term “Language Model”
2022-09-15 00:46:12 @xiaochang @srchvrs @deliprao @earnmyturns Oh, and that should be “Bahl”. *Autocorrect
2022-09-15 00:45:19 @xiaochang @srchvrs @deliprao @earnmyturns But the bigram was used earlier. It seems common in translations of Russian: e.g., a translation of I. Mel’chuk’s 1961 “Some Problems of Machine Translation Abroad” refers to Chomsky’s “language model” of immediate constituents, and there are other usages in psycholinguistics and education papers.
2022-09-15 00:39:56 @xiaochang @srchvrs @deliprao @earnmyturns You’re right to trace Language Model in the probabilistic sense to Jelinek, @deliprao. Indeed Jelinek took credit on behalf of his IBM group in his ACL LTA address: https://t.co/JSXdw7Gw12 fn. 3 but a slightly earlier reference is Jelinek, Baal, and Mercer 1975 IEEE Trans on IT
2022-09-11 17:02:15 RT @byersblake: At Google Venture a decade ago we searched for AI enabled companies and came up dry. That has changed. AI is going to eat s…
2022-09-09 16:14:27 Many small private schools have CS enrollment caps
2022-09-09 16:11:24 I apologize that this tweet was insensitive to the challenges of others—I do see in hindsight how it appeared elitist. However, there are real institutional choices here! It’s not only that smaller and richer makes life easier. 1/2
2022-09-09 16:04:53 RT @bianca_caban: This is a great overview from @aixventureshq Investment Partner @chrmanning on the most recent advances in large language…
2022-09-06 20:04:47 Meanwhile at @Stanford, we just encourage all students to take as many CS courses as they would like … https://t.co/MZppLiqetu
2022-09-03 18:34:01 RT @bianca_caban: It was great getting together with our team, founders, and LPs to celebrate the launch of @aixventureshq. Shoutout to @sh…
2022-08-27 16:45:25 RT @wzuidema: @chrmanning @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang @RishiBommasani More support for the jazz…
2022-08-24 19:42:39 RT @StanfordHAI: Help define the future of human-centered AI. We are seeking a Deputy Director to oversee research, education, policy, part…
2022-08-22 19:59:15 @adawan919 @MasoudJasbi But some things I won’t use: E.g., although I love Unicode and Ken Lunde’s book, it’s just not right for this course, and I’m going to avoid where possible anything that’s 21st Century, since I just think this course gains from the complementarity of focusing on older work.
2022-08-22 19:55:51 @adawan919 @MasoudJasbi Thanks for all the work on this, Ada! I’m now myself going off on vacation in a couple of days, so probably a bit before I go through this in detail, but definitely some good suggestions here that I can use.
2022-08-19 01:49:11 @elgreco_winter Amen
2022-08-18 00:52:53 @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang @RishiBommasani While I only came up with the jazz analogy this week, I think it’s not a bad one: People observed something new and majorly different happening in music and they gave it a name. At that point, it’s like all linguistic change: some names stick and some don’t. I’m hopeful.
2022-08-18 00:48:50 @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang I think you’re mainly right, @soumithchintala. But there was no flag planting or cookie licking. We didn’t claim to have invented anything. Rather, as @RishiBommasani said, we observed a broad paradigmatic shift with many dimensions, with no good name, and sought to give it one.
2022-08-17 20:17:51 @roydanroy @tdietterich @ylecun @percyliang Ah yes, but how do you refer to jazz, now 100 years later?
2022-08-17 01:28:20 RT @karpathy: !!!! Ok I recorded a (new!) 2h25m lecture on "The spelled-out intro to neural networks and backpropagation: building microgra…
2022-08-15 15:15:18 @bugykoda @AbraxisSoftware @StanfordAILab This isn’t top journals
2022-08-14 23:17:22 RT @RoKhanna: We need term limits for Supreme Court Justices. My bill calls for 18 years. They can stay as judges on lower courts for life.…
2022-08-14 23:05:03 @sudhirPyadav @tdietterich @RishiBommasani @percyliang But a language model isn’t that. It’s a probability distribution over strings—as @tdietterich wrote. Common word meanings would give the broad general meaning of a model of human language but no—an LM says nothing about phonetics, pragmatics, sentence structure, social usage, etc
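To make “a probability distribution over strings” concrete, here is a toy sketch with hypothetical counts, purely illustrative: a bigram model assigns a probability to a whole string as a product of conditional word probabilities, and says nothing about phonetics, pragmatics, or social usage.

    # A language model as a probability distribution over strings:
    # toy MLE bigram model over a tiny corpus (illustrative only).
    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the cat .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def p_string(words):
        """P(w1..wn) ~ P(w1) * product of P(w_i | w_{i-1}), MLE estimates."""
        p = unigrams[words[0]] / len(corpus)  # crude start probability
        for prev, cur in zip(words, words[1:]):
            p *= bigrams[(prev, cur)] / unigrams[prev]
        return p

    print(p_string("the cat sat".split()))  # > 0: all bigrams attested
    print(p_string("sat the cat".split()))  # 0 under MLE: unseen bigram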
2022-08-14 17:52:39 @tdietterich @ylecun @percyliang (And I should add that the reason that the data scale has gotten a bit smaller is that people have started paying a bit more attention to filtering data—not before time!)
2022-08-14 17:44:52 @tdietterich @roydanroy @ylecun @percyliang Maybe the two aren’t so different really? Putting a name on a profound shift that was already happening in a domain — music and machine learning, respectively
2022-08-14 17:41:40 @tdietterich @roydanroy @ylecun @percyliang “When Broadway picked it up, they called it 'J-A-Z-Z'. It wasn't called that.”—Eubie Blake https://t.co/5PvJH2Pp5n
2022-08-14 17:37:16 @tdietterich @ylecun @percyliang I agree with this—in language, meaning is contextual. But, here, the scale of data hasn’t changed recently. The 2007 Large Language Models were already being built on 1 trillion words of broad coverage language data—a bit larger than The Pile or PaLM or GPT-3’s training data
2022-08-14 17:26:47 @roydanroy @tdietterich @ylecun @percyliang https://t.co/Y5Rrmk2xRk
2022-08-14 17:16:05 @tdietterich @ylecun @percyliang Receipts: https://t.co/t9hUChDs9q https://t.co/P6rZtKKB2B
2022-08-14 17:14:05 @tdietterich @ylecun @percyliang That may be the history seen from ML, but it isn’t the #NLProc history where “Large Language Models” were used since 2006—using that name! But without today’s representation learning neural net magic, they didn’t provide the revolutionary multitask abilities of Foundation Models.
2022-08-14 16:53:28 @sudhirPyadav @RishiBommasani @tdietterich @percyliang I think I’ll mainly just sit back with popcorn and watch, but … if this is the criterion, the term “language model” should have been banned 40 years ago! Surely it is way worse in having a broad general meaning from normal English that confuses and misleads people?!?
2022-08-12 14:36:15 RT @antgoldbloom: Just finished v1 of the new recommender system I'm building. Results so far are incredibly promising https://t.co/5QdiaYX…
2022-08-11 20:16:15 @AmandaAskell But, at the end of the day, I’m certainly not a philosopher and I agree that quite a lot of it comes down to which beliefs an individual feels ring true. So, peace! And, for me, I’ll stick with @ShannonVallor 2/2 https://t.co/bgoIdDLeiK
2022-08-11 20:10:18 @AmandaAskell I agree there’s a very broad range of views among philosophers and that we should evaluate arguments by quality not appeal to authority but philosophers—including Parfit—do have a disciplinary depth that I don’t see in many discussions on these topics around Silicon Valley 1/2
2022-08-11 03:20:25 @AmandaAskell Derek Parfit is most certainly a real philosopher. But the argument gets more complex: AFAIK, he argues against a pure social discount rate, but to the extent we all have so little idea what the world will be like in 100s of years, he’s fine with a Probabilistic Discount Rate.
2022-08-11 02:43:38 Maybe we should pay more attention to real philosophers rather than wannabes?( on EA, longtermism, and AI) https://t.co/U1ERFXJae3
2022-08-10 17:04:03 RT @realTinaHuang: Made it to day of the @StanfordHAI Congressional Boot Camp on AI! Staffers are learning about the Silicon Valley e…
2022-08-10 14:20:49 @maximelabonne @Cappuccino_Math @stanfordnlp @HannesStaerk Yes, after decades of inculcation of the importance of data structures in CS, it’s unsettling but somehow exciting that you can do so well by using a Transformer model for “everything” with just a simple, minimal encoding of the original data
2022-08-09 23:28:30 @Cappuccino_Math @maximelabonne @stanfordnlp @HannesStaerk Yeah, Transformers are Graph Neural Networks, cf. https://t.co/BTORTrtJqe, but beyond their being a rather special particular architecture, the interesting thing is whether you do just as well with an off-the-shelf transformer as with the many bespoke GNN architectures proposed
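A minimal sketch of the “Transformers are GNNs” reading referenced above: single-head self-attention is message passing on a fully connected graph, with each token-node aggregating value-messages from every node under softmax edge weights (random projections here, illustrative only).

    # Self-attention as message passing on a complete graph (sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 8                      # 5 token-nodes, hidden size 8
    X = rng.normal(size=(n, d))      # node features
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)    # one weight per directed node pair
    A = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax rows
    H = A @ V                        # every node aggregates from all nodes

    print(A.shape, H.shape)          # (5, 5) soft "adjacency", (5, 8) output

The bespoke-GNN comparison in the tweet is then about whether hand-designed sparse adjacency beats this learned, fully dense one.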
2022-08-08 16:54:15 @adawan919 @MasoudJasbi Well, I’m talking about my Stanford class Linguistics 200, but it’s essentially the same as @MasoudJasbi’s in content and title “Foundations of Linguistic Theory”. It’s for grad students.
2022-08-08 16:46:44 Behind the AI hype, increasingly capable AI systems are rapidly being deployed. They can hugely improve human health, creativity, capabilities &
2022-08-08 16:46:43 While simultaneously launching this week our @StanfordHAI AI Policy Bootcamp to try and increase the understanding of AI among policy makers and politicians, and proposing concrete actions like a National Research Cloud and a Multilateral AI Research Institute https://t.co/iO29Avuw7f
2022-08-08 16:46:42 However, an attempt to bridge can leave you in a lonely place in the middle: not fully on side with companies, too pro-tech and close to industry to not be pelted by “AI Ethics” full-timers, and simultaneously too close to and too far from international relations policymakers https://t.co/VBWaNwpRcd
2022-08-08 16:46:41 This tweet-outpouring from @jackclarkSF’s brain is very good. Well worth a read! However, it’s also such a large and freewheeling smorgasbord that it’s hard to take it all in! A few riffs on it with respect to @StanfordHAI below https://t.co/TpVp2ltIF9
2022-08-08 15:57:22 RT @robreich: The journey of effective altruism, from bed-nets to bio-risk and beyond.Fantastic profile of @willmacaskill in the @NewYork…
2022-08-07 23:11:08 @adawan919 @MasoudJasbi It’s for 2nd/3rd year PhD students to give them some historical foundations and context beyond standard grad classes. Barebones program overview: https://t.co/r8rfmkZoh0 Here’s a list of classes—though many take others beyond the Linguistics dept: https://t.co/0tbvbHEdbP
2022-08-06 18:22:16 @adawan919 @MasoudJasbi Do these thoughts lead to any concrete suggestions of proposed readings?
2022-08-05 22:19:03 RT @realTinaHuang: So I know you all have been asking “what’s it like being @StanfordHAI ‘s policy program manager a day before hosting o…
2022-08-05 16:44:39 @MasoudJasbi Would be very happy to get your materials from Paul!
2022-08-05 16:44:02 @MasoudJasbi I do have rough thoughts of a reading list. My hope is to emphasize original materials—except for pre-1900—to read nothing from the 21st century and to regard the 1990s with suspicion. I’ll also have a bit of a lean towards symbols, computation, etc. Here’s what I have so far. https://t.co/guRuCVVkcJ
2022-08-05 16:40:28 @MasoudJasbi Sorry, mega-slow reply, but would be happy to share stuff. I was even hoping I might be able to find my notes from Paul from 1992. I probably won’t really sort things out until early September (since Stanford starts late and I’ve got some things to finish this month), but …
2022-08-03 20:12:04 RT @realTinaHuang: T-minus 4⃣ days until we kick off the @StanfordHAI congressional boot camp on AI! We're welcoming 2⃣6⃣ staffers to campus…
2022-08-01 04:58:01 RT @zeynep: @NateSilver538 Are there viable third parties anywhere with a first-past-the-post system? Regardless of ideological coherence?…
2022-07-31 20:31:44 @CirnoBaka6 Yes
2022-07-27 00:40:58 @MasoudJasbi Hey, so am I…. It’s complex what to choose and how to structure.
2022-07-26 18:41:25 A bit more nuance could be added to this 2nd para on Supervised Learning. Initial breakthroughs _were_ made in #NLProc via unsupervised learning prior to AlexNet—the word vectors of Collobert&
2022-07-26 18:41:24 I finally read @boazbaraktcs’s blog on DL vs Stats. A great mind-clearing read! “Yes, that was how we thought about NNs losing out due to bias/variance in ~2000” “Yes, pre-trained models really are different to classical stats, even if math is the same” https://t.co/8IjnMJjfc9
2022-07-26 02:09:15 RT @antgoldbloom: Been spending the last few weeks speaking to data scientists working on demand forecasting. Some interesting things I lea…
2022-07-22 21:55:19 @BogdanIonutCir2 @GaryMarcus @Meaningness How indeed?!?
2022-07-17 23:04:32 This seems like an important contribution to the external validity of the (big) recent line of work on long-context transformer models. https://t.co/LlsXuCoJCD https://t.co/9ZsP6sxsGY
2022-07-16 16:33:26 @ZhunLiu3 There was food that went with it
2022-07-13 22:22:28 @ryandcotterell @ChrisGPotts @adinamwilliams Yes, they are: Using linguistic theory to understand properties of singleton entities that can be encoded into ML features added a component that directly improved coreference models. Hence, it provided a new method that helped coreference systems.
2022-07-13 22:14:49 @ChrisGPotts @ryandcotterell @adinamwilliams Isn’t singleton mention detection for coreference a pretty nice example of something linguistically motivated that helped? (Though perhaps it hasn’t survived into the era of E2E neural coref models.) https://t.co/ZfuGc5lQs5
2022-07-13 18:23:54 RT @petewarden: I've always dreamed of seeing @TensorFlow Lite on a Commodore 64! https://t.co/0l7tQV233V
2022-07-11 17:08:43 RT @StanfordHAI: Introducing the #AIAuditChallenge – a $71K competition to design better AI audits. @StanfordHAI &
2022-07-11 15:56:58 RT @stanfordnlp: .@stanfordnlp grads at work: Congratulations (and a big thank you) to @MarieMarneffe (at The Ohio State University) on bei…
2022-07-11 15:27:39 @e96857c58f71610 I hope so!
2022-07-10 23:59:29 @Hassan_Sawaf We can catch up during the conference — but had to dash off to see my kid today….
2022-07-10 18:47:39 RT @ThingsCanberra: Bus stop - late afternoon https://t.co/dURm77KOVU
2022-07-10 15:05:19 Heading to Seattle for #NAACL2022. This will be my first travel to an in-person conference in over 2 ½ years (NeurIPS2019 in Vancouver to NAACL2022 in Seattle—but not via Puget Sound) https://t.co/XGX3cbDlMj
2022-07-03 15:21:42 RT @fictillius: Sydney aquarium staff on their way to deal with this at a Sydney train station. https://t.co/QXSlbu4uCv
2022-06-28 04:46:31 @tejuafonja FWIW, this happens to (some) Europeans too. I had a student on a J visa, due to fellowship reasons
2022-06-28 04:36:07 @roydanroy @kchonyc It also occurred to me after posting that there's a bit of a definitional question as to what counts as learning, but I meant to differentiate learning from things like making a Markov assumption via backoff or mixing different-order models
2022-06-27 17:35:10 @kchonyc Fair enough, though you could have avoided one or two strong statements like “Count-based language models cannot generalize”. At any rate: I agree it is interesting to better understand how neural LMs generalize and act, and how that splits between the model and the decoding.
2022-06-27 16:40:53 @kchonyc (At the risk of appearing a grumpy old guy:) The discussion of autoregressive neural language modeling is interesting, but these slides totally elide the 30 years of work on how count-based language models can generalize without learning by smoothing the probabilities!
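A sketch of that smoothing point: a count-based bigram model generalizes to unseen bigrams with no learning at all, just by interpolating with a lower-order model (Jelinek-Mercer style; the lambda here is a hand-picked constant, not fitted).

    # Count-based generalization via interpolation smoothing (sketch).
    from collections import Counter

    corpus = "the cat sat on the mat . the dog sat on the cat .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    LAM = 0.7  # hand-picked interpolation weight, not learned

    def p_interp(cur, prev):
        p_bi = bigrams[(prev, cur)] / unigrams[prev] if unigrams[prev] else 0.0
        p_uni = unigrams[cur] / len(corpus)
        return LAM * p_bi + (1 - LAM) * p_uni

    print(p_interp("mat", "the"))  # seen bigram: mostly the MLE estimate
    print(p_interp("dog", "sat"))  # unseen bigram: nonzero via the unigram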
2022-06-26 20:23:51 @ryandcotterell @alexandersclark @trochee @jasonbaldridge @LiUNLP I’m digging back a bit, but I agree with @alexandersclark that the right place to look is the Lambek Calculus/Type-logical Grammar take on Categorial Grammar. I think you’re wanting a multimodal system. Perhaps start with Morrill 1994 or Moortgat 1996: https://t.co/Kh6ICzuwQC
2022-06-23 15:43:59 We’re still offline: Stanford lost power Tuesday 3 pm. It’s still out, except limited power for the hospital, etc. Mail to @cs.stanford.edu doesn’t work—use manning@stanford.edu. Power at home, Twitter, Github, Huggingface, texts, and the basic NLP website all work! https://t.co/6UjKlRuZEH https://t.co/0PNmbaouT3
2022-06-23 15:28:44 RT @ItaiYaffe: (1/9) #1DatabrickAWeek - week 44. Last week (https://t.co/DxQz9mR8OI) I focused on the awesome #keynote speakers at the upco…
2022-06-21 20:49:33 RT @robreich: Can the GPT-4chan episode be counted as a part of the responsible practice of AI research?More than thirty AI researchers h…
2022-06-21 18:19:01 RT @percyliang: There are legitimate and scientifically valuable reasons to train a language model on toxic text, but the deployment of GPT…
2022-06-20 15:12:35 RT @minilek: Stanford made an important update to its admissions site this week (before/after photos attached). First, they state statistic…
2022-06-16 15:52:46 RT @chelseabfinn: Want to edit a large language model? SERAC is a new model editor that can: * update factual info * selectively change mo…
2022-06-15 17:33:54 @JustinMSolomon @stanfordnlp I agree it’s a bit of a loose analogy, but, still, there’d be nothing to stop it.
2022-06-14 23:26:54 RT @StanfordHAI: If the cake tastes a little salty, that’s just our tears. Many thanks to @MPSellitto, our departing HAI Deputy Director! M…
2022-06-14 22:29:23 RT @sebkrier: Somehow missed this, but yesterday the Chancellor announced a review of the UK's compute capacity led by @ZoubinGhahrama1. Th…
2022-06-14 02:49:32 RT @etzioni: I'm speechless. Please RT. https://t.co/gwvNeh8z3c
2022-06-07 01:05:38 RT @antgoldbloom: .@benhamner and I are stepping down from @kaggle as CEO and CTO to return to our startup roots. Excited to share that D.…
2022-06-06 21:43:39 @AnimaAnandkumar @Caltech @bjenik Congratulations!
2022-06-04 01:12:39 RT @ivanzhouyq: Here we have - no others but @chrmanning and @karpathy! https://t.co/VBKgTAPcrX
2022-06-02 23:40:27 @ambimorph @joelgrus Not Naive Bayes classification!!!
2022-06-02 14:38:08 @joelgrus Oh—wow—I had no idea! It by no means solves all problems in education, but the impact from good quality free educational resources on the Internet is heartwarming. I hope foundation knowledge isn’t actually “defunct”, but for modern #NLProc material, see: https://t.co/j9rAkVxLwQ
2022-06-02 00:53:33 @csabaveres @rogerkmoore @GaryMarcus Symbolic grammars capture only a small part of human knowledge of language and they do it poorly. This isn’t an observation of the neural era. Sapir (1921) noted “All grammars leak.” This motivated probabilistic models of language before neural era—see https://t.co/n7jj5CSh6R 2/2
2022-06-02 00:49:59 @csabaveres @rogerkmoore @GaryMarcus Insofar as modern LLMs are universal associational learners, not attuned to the constraints of human language processing, we can agree, but I’m just not on board with the privileged position your paper gives to symbolic grammars. 1/2
2022-06-01 17:03:14 RT @raohackr: @GaryMarcus Except the essay mischaracterizes the NRC proposal: make or buy compute, whatever is cheaper (making). Lots of…
2022-06-01 15:58:21 RT @StanfordHAI: Highlights from last week: @stanford and HAI-affiliated faculty met with members of the European Parliament to discuss the…
2022-05-31 21:02:50 @LingMuelller @LanguageMIT @ryandcotterell All my recent UD papers use tikz-dependency. It’s serviceable but not awesome—I do a lot of hand-setting dependency arc heights since it won’t do them in tiers in the “obviously right” way for compact display. https://t.co/dfHGJOe9o5
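For illustration, a minimal tikz-dependency sketch of the kind of hand-tuning mentioned above: the package picks arc heights automatically, and “edge unit distance” is the knob for manually flattening or raising an arc to get a compact tiered layout (the example sentence and values are made up).

    % Minimal tikz-dependency example with one hand-set arc height.
    \documentclass{article}
    \usepackage{tikz-dependency}
    \begin{document}
    \begin{dependency}[theme=simple]
      \begin{deptext}[column sep=1em]
        The \& dog \& chased \& the \& cat \\
      \end{deptext}
      \deproot{3}{root}
      \depedge{2}{1}{det}
      \depedge{3}{2}{nsubj}
      \depedge{5}{4}{det}
      % hand-set: shorten this arc so it tucks under the root arc
      \depedge[edge unit distance=1.6ex]{3}{5}{obj}
    \end{dependency}
    \end{document}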
2022-05-31 14:21:23 @rogerkmoore @GaryMarcus Maybe the name “language models” was prescient? When simply Markov models, yes, they were just models of word sequence probabilities. But now Neural LMs are models of language, which is why their distributed representations excel at machine translation, QA, anaphora, parsing, etc
2022-05-29 17:45:13 @wm @ibuildthecloud Let me know if you find a better solution! As organizations grow larger and I grow busier, Slack seems a less and less good solution. It’s making me think more favorably of email—it actually scales better in some respects.
2022-05-29 17:42:20 @wm @ibuildthecloud This shows Discord’s gaming roots—Nitro subscriptions are the main way Discord makes money, but if you don’t need animated gifs or other cosmetic perks, you can just ignore the occasional suggestions. They only turn up once per version. Better than paying monthly for every user.
2022-05-28 18:49:22 @wm @ibuildthecloud Seriously, Discord gets more normal every year and isn’t so different from Slack – the only thing that I still feel is fundamentally more right on Slack is the implementation of threads
2022-05-28 18:33:09 @wm @ibuildthecloud Try Harder ™
2022-05-26 15:05:16 RT @WomensAgenda: Dr @NeelaJan treats more avocado-related injuries in Australia than gunshot wounds. She first highlighted this in 2018,…
2022-05-26 14:28:35 RT @fchollet: Reminder that if you want access to more fine-grained political parties that better represent your views, you first need to s…
2022-05-23 15:37:53 @NickATomlin @roger_p_levy @juanbuis @Christophepas @callin_bull At any rate, it compares all the text in a light font weight vs. half the text in bold (and the rest in light font). A fairer comparison for traditional reading would put all of the original text on the left in a regular/medium weight font? Cf. https://t.co/K4hvn5pWAc
2022-05-23 14:42:28 I’m on board for the Ineffective Altruism movement! (HT @timnitGebru) https://t.co/OixfBTOW3t
2022-10-23 02:01:14 @ChrisGPotts @RishiBommasani @tallinzen @NYUDataScience @cocoweixu @david__jurgens @dirk_hovy @percyliang @jurafsky @clairecardie It feels a little unfair to be comparing a posed picture to an out-of-focus video capture, but there’s no denying @RishiBommasani’s shirt is a bold color!
2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…
2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…
2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.
2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear
2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.
2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD
2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs
2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of original IBM group, Bengio, @kchonyc , @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me
2022-11-16 21:27:44 @FelixHill84 @complingy @gchrupala Yeah, I’ve got some thoughts. First off, there are some emergentists in #NLProc, e.g., my colleague Dan @jurafsky would identify as one. But, I agree that there aren’t many. I think there are two main reasons: pedigree and predilections…. 1/3
2022-11-16 05:29:34 RT @k_mcelheran: This is SOOOO good! Hats off to the organizers for great streaming and accessibility. Delighted to be watching this from s…
2022-11-20 19:16:54 @BlancheMinerva In the deadline-driven world of current NLP/ML, I’m not sure it’s reasonable to demand software at submission time. However, it does seem like conferences and journals could refuse to accept/publish final copies of any paper that claims code/data is available but it still isn’t.
2022-11-22 00:50:12 RT @landay: Please retweet: Didn't have a chance to catch our @StanfordHAI Fall Conference on "AI in the Loop: Humans in Charge?" Don't wan…
2022-11-23 21:08:49 @OfirPress @qi2peng2
2022-11-23 21:03:29 @deliprao @JeffDean @huggingface Well, none of them got that one right!
2022-11-23 16:25:36 @zehavoc @OfirPress @stanfordnlp *Bresnan*
2022-11-23 16:21:55 RT @russellwald: Our tech policy fellowship for Stanford students is live!! There are so many opportunities for Stanford students w/this am…
2022-11-23 16:10:04 @OfirPress Great progress with this exciting new prompting approach! Hey, we were ahead of the game in proposing the importance of multi-step question answering: Answering Complex Open-domain Questions Through Iterative Query Generation by @qi2peng2 et al. 2019. https://t.co/5Rr1twDTpg
2022-11-22 00:50:12 RT @landay: Please retweet: Didn't have a chance to catch our @StanfordHAI Fall Conference on "AI in the Loop: Humans in Charge?" Don't wan…
2022-11-20 19:16:54 @BlancheMinerva In the deadline-driven world of current NLP/ML, I’m not sure it’s reasonable to demand software at submission time. However, it does seem like conferences and journals could refuse to accept/publish final copies of any paper that claims code/data is available but it still isn’t.
2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…
2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…
2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.
2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear
2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.
2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD
2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs
2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of original IBM group, Bengio, @kchonyc , @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me
2022-11-16 21:27:44 @FelixHill84 @complingy @gchrupala Yeah, I’ve got some thoughts. First off, there are some emergentists in #NLProc, e.g., my colleague Dan @jurafsky would identify as one. But, I agree that there aren’t many. I think there are two main reasons: pedigree and predilections…. 1/3
2022-11-16 05:29:34 RT @k_mcelheran: This is SOOOO good! Hats off to the organizers for great streaming and accessibility. Delighted to be watching this from s…
2022-11-23 21:08:49 @OfirPress @qi2peng2
2022-11-23 21:03:29 @deliprao @JeffDean @huggingface Well, none of them got that one right!
2022-11-23 16:25:36 @zehavoc @OfirPress @stanfordnlp *Bresnan*
2022-11-23 16:21:55 RT @russellwald: Our tech policy fellowship for Stanford students is live!! There are so many opportunities for Stanford students w/this am…
2022-11-23 16:10:04 @OfirPress Great progress with this exciting new prompting approach! Hey, we were ahead of the game in proposing the importance of multi-step question answering: Answering Complex Open-domain Questions Through Iterative Query Generation by @qi2peng2 et al. 2019. https://t.co/5Rr1twDTpg
2022-11-22 00:50:12 RT @landay: Please retweet: Didn't have a chance to catch our @StanfordHAI Fall Conference on "AI in the Loop: Humans in Charge?" Don't wan…
2022-11-20 19:16:54 @BlancheMinerva In the deadline-driven world of current NLP/ML, I’m not sure it’s reasonable to demand software at submission time. However, it does seem like conferences and journals could refuse to accept/publish final copies of any paper that claims code/data is available but it still isn’t.
2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…
2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…
2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.
2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear
2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.
2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD
2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs
2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of original IBM group, Bengio, @kchonyc , @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me
2022-11-16 21:27:44 @FelixHill84 @complingy @gchrupala Yeah, I’ve got some thoughts. First off, there are some emergentists in #NLProc, e.g., my colleague Dan @jurafsky would identify as one. But, I agree that there aren’t many. I think there are two main reasons: pedigree and predilections…. 1/3
2022-11-16 05:29:34 RT @k_mcelheran: This is SOOOO good! Hats off to the organizers for great streaming and accessibility. Delighted to be watching this from s…
2022-11-23 21:08:49 @OfirPress @qi2peng2
2022-11-23 21:03:29 @deliprao @JeffDean @huggingface Well, none of them got that one right!
2022-11-23 16:25:36 @zehavoc @OfirPress @stanfordnlp *Bresnan*
2022-11-23 16:21:55 RT @russellwald: Our tech policy fellowship for Stanford students is live!! There are so many opportunities for Stanford students w/this am…
2022-11-23 16:10:04 @OfirPress Great progress with this exciting new prompting approach! Hey, we were ahead of the game in proposing the importance of multi-step question answering: Answering Complex Open-domain Questions Through Iterative Query Generation by @qi2peng2 et al. 2019. https://t.co/5Rr1twDTpg
2022-11-22 00:50:12 RT @landay: Please retweet: Didn't have a chance to catch our @StanfordHAI Fall Conference on "AI in the Loop: Humans in Charge?" Don't wan…
2022-11-20 19:16:54 @BlancheMinerva In the deadline-driven world of current NLP/ML, I’m not sure it’s reasonable to demand software at submission time. However, it does seem that conferences and journals could refuse to accept or publish the final copy of any paper that claims its code/data is available when it still isn’t.
2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…
2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…
2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.
2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear
2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.
2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD
2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs
2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of the original IBM group, Bengio, @kchonyc, @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me
2022-11-16 21:27:44 @FelixHill84 @complingy @gchrupala Yeah, I’ve got some thoughts. First off, there are some emergentists in #NLProc, e.g., my colleague Dan @jurafsky would identify as one. But, I agree that there aren’t many. I think there are two main reasons: pedigree and predilections…. 1/3
2022-11-16 05:29:34 RT @k_mcelheran: This is SOOOO good! Hats off to the organizers for great streaming and accessibility. Delighted to be watching this from s…
2022-12-09 03:50:09 @KordingLab That I really talked about the right topic at that last CIFAR LMB meeting?!?
2022-12-08 22:41:41 @emilymbender Anyone taking things out of context⁈ Looking at the talk (from an industry venue: https://t.co/UMlczZZJR3): after a detour on what self-supervised learning is, exactly where it goes is that big models give large GLUE task gains, but at the cost of hugely more compute/electricity… https://t.co/kWESAvHXkl