Christopher Manning

AI Expert Profile

Nationality: 
American
AI specialty: 
Stochastic AI
Machine Learning
NLP
Current occupation: 
Researcher, Stanford University
AI rate (%): 
50.99%

TwitterID: 
@chrmanning
Tweet Visibility Status: 
Public

Description: 
A computer science researcher at Stanford University, he works on software that can intelligently process, understand, and generate human-language material. He is a leader in applying deep learning to natural language processing, with well-known research on tree-structured recursive neural networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He has worked to define the key terms of artificial intelligence and participates in Stanford HAI conferences.

Recognized by:

Not Available

The Expert's latest posts:

Tweet list: 

2023-05-22 15:16:39 RT @jamescham: @chrmanning @erikbryn The terrific thing about this generation of AI is that it is so accessible to try, build, and learn. T…

2023-05-21 21:02:13 @boazbaraktcs @percyliang There’s still time!

2023-05-21 20:49:03 “Everybody I talk to first goes to, ‘Oh, how can generative A.I. replace this thing that humans are doing?’ I wish people would think about what new things could be done now that were never done before. That is where most of the value is.”—@erikbryn https://t.co/2eJFU61Agj https://t.co/JbveewaYyV

2023-05-20 01:24:22 RT @s_batzoglou: The voices against open AI and for premature regulation are too loud and unrepresentative. The majority of AI researchers…

2023-04-22 16:07:29 RT @russellwald: ATTN: U.S. Congressional staff!! Starting next week you can apply to the @StanfordHAI AI Boot Camp for Congressional Sta…

2023-04-12 17:06:53 RT @i3invest: In episode 82 of the [i3] Podcast, we look at #machinelearning and #AI with Kaggle Founder @antgoldbloom and Stanford CS Prof…

2023-04-08 07:29:46 RT @jobergum: Vector DB Chroma raises $18M at a $75M valuation. Uses hnswlib for vector search. https://t.co/eDdPCtiXRZ https://t.co/4Z0h

2023-04-08 05:59:53 @jerome_massot I think @lilyjclifford can address this better than me but, despite individual variation, much work in sociolinguistics shows there are shared speech traits of communities, and LGBTQ people, at least in a region, form one sort of community. Perhaps see: https://t.co/NQ4YNVA3DR

2023-04-05 21:41:36 RT @lilyjclifford: Today speech synthesis takes a leap forward. https://t.co/kCrFGy8GVA is now live. Rime has the world's most expansi…

2023-04-02 12:02:18 RT @xriskology: Lots of people are talking about Eliezer Yudkowsky of the Machine Intelligence Research Institute right now. So I thought I…

2023-03-28 03:38:00 RT @russellwald: What could government be doing regarding tech that it isn’t? "Educate themselves. I’m not saying they need to go learn how…

2023-03-24 04:41:03 @roger_p_levy @weGotlieb @xsway_ Pushing the debate in Linguistic Inquiry! Good on you!

2023-03-23 14:51:21 RT @minilek: Now the front page article on https://t.co/FhQv6QmTvx. Quote 1: "Ironically, despite reviews and blog posts pointing out Boal…

2023-03-21 15:18:39 Web content tricks that search engines “learned” to be wise to in the 2000s decade (well, were programmed to not work by human beings) are still alive and well with 2020s LLMs https://t.co/fkhaVcKFfD

2023-03-20 19:57:49 @ruthstarkman @davidchalmers42 Still a nice intro to the problems of natural language understanding and it well represents the viewpoint I try to capture in tweet #4 in the original reply sequence. But in 2016, I just don’t say that we are or should be building huge LMs to make progress on #NLProc.

2023-03-20 14:39:36 @jonas_eschmann @RandomlyWalking @davidchalmers42 But it’s also based on the idea of building a practical compression tool that does lossless compression and runs on 1 CPU core using at most 10GB of memory, which ends up completely inconsistent in direction with modern work in neural networks.

2023-03-20 14:18:27 @Ollmer @davidchalmers42 It was an awesome post! But it just doesn’t suggest that building these RNNs ever bigger is a promising path to near-HLAI. And looking at the jumbled phrases of the Wikipedia page generated by the character-level LSTM in that post you can see why people weren’t yet thinking that.

2023-03-20 04:50:17 @michalwols @davidchalmers42 @ilyasut Yeah, but, in the early days of @OpenAI, perhaps influenced by @DeepMind’s work with Atari games, etc., the big push was RL (OpenAI Gym, Dota 2, etc.) and work on language was not seen as key but very marginal. The original GPT was clearly a small effort by a young researcher.

2023-03-19 21:02:16 @davidchalmers42 Apple’s Knowledge Navigator video was prescient but draws from symbolic AI/knowledge integration

2023-03-19 20:58:50 @davidchalmers42 Examples that others cite in replies are to me unpersuasive, and mainly too recent

2023-03-19 20:57:22 @davidchalmers42 There was work arguing that NLP is an AI-complete task but that conversely text accumulates the knowledge to address it, especially in the 2010s (Halevy et al., Etzioni, me), but we didn’t see just building a big LM as the best way to unlock the knowledge 4/

2023-03-19 20:56:39 @davidchalmers42 Work going back to at least the 1980s (or sort of to Shannon in the 1950s) promotes statistical models of text prediction and their usefulness for NLP tasks. We said that these models learned knowledge of the world, but we really didn’t see them as a clear path to near-HLAI. 3/

2023-03-19 20:55:19 @davidchalmers42 There was definitely a more general claim that exponentially increasing data and computation will lead to AGI, but work such as Kurzweil, etc. lacked any specificity as to mechanism (and in particular didn’t suggest LMs). 2/

2023-03-19 20:54:51 @davidchalmers42 For statements from 2017 or earlier, I’m voting that the right answer is “nobody” (which may be what you’re wondering with your question, I guess). I think one has to distinguish a somewhat precise version of “saw LLMs coming” from more general claims: 1/

2023-03-16 20:17:44 @sleepinyourhat @shaily99 @andriy_mulyar @srush_nlp @mdredze @ChrisGPotts Yeah, I agree, but a lot of time there is no feedback loop from the company lab back to the people in academia, which tends to be unfortunate.

2023-03-16 02:49:19 @azpoliak @dmimno @srush_nlp @ChrisGPotts @andriy_mulyar @sleepinyourhat @mdredze @tomgoldsteincs @PMinervini @maria_antoniak @mellymeldubs Yeah, I think there are still tons of examples like these of academic researchers doing cool original work. Slightly further afield, where do you think diffusion models were developed?

2023-03-11 20:25:24 RT @luke_d_mutju: Decades of work has gone into this incredible piece of work. A huge amount of work by Yapa and Kartiya to keep #Warlpiri…

2023-03-11 20:25:16 RT @Redkangaroobook: #diaryofabookseller the Warlpiri Encyclopaedic Dictionary was launched this week at Yuendumu. I wasn’t there but it wa…

2023-03-11 20:23:40 RT @chanseypaechMLA: The Warlpiri Encyclopaedic Dictionary is here. It’s the distillation of over 60 years of work and the contribution of…

2023-03-11 20:22:12 .. and people might also like Scott Aaronson’s blog post – not that he’s really someone with linguistic qualifications, but, hey, he’s a smart guy who can see changes happening in the world https://t.co/FkGh32I5rK

2023-03-10 18:04:00 @zyf I actually agree with nearly all of what you write

2023-03-09 16:53:31 As a Professor of Linguistics myself, I find it a little sad that someone who while young was a profound innovator in linguistics and more is now conservatively trying to block exciting new approaches. For more detailed critiques, I recommend my colleagues @adelegoldberg1 and https://t.co/2Lr9MBQBBX

2023-03-09 16:53:30 This is truly an opinion piece. Not even a cursory attempt is made to check easily refutable claims (“they may well predict, incorrectly”). Melodramatic claims of inadequacy are made not of specific current models but any possible machine learning approach https://t.co/Dd7rplkG6p

2023-03-06 19:54:09 RT @Redkangaroobook: Werte (hello) book lovers! We've got the Warlpiri Encyclopaedic Dictionary back in stock. This incredible language res…

2023-03-06 16:57:16 @random_walker I suspect it’s just that their shortened URL expander is down

2023-03-06 16:24:44 March 2023 AI vibes: Fanciful visions of AGI emergence sweep through the @OpenAI office

2023-03-06 04:34:49 @jerome_massot Jérôme, answer: I’m Christine Bruderlin prod manager Aboriginal Studies Press. You can buy straight from us. Email asp@aiatsis.gov.au with Warlpiri Dict in subject line. Send yr address &

2023-03-05 16:43:07 RT @gruber: The official Twitter iPad app is so bad it doesn’t even support any keyboard shortcuts at all. Not even ⌘N for New Tweet. Quite…

2023-02-27 18:08:48 @BlancheMinerva I should maybe also mention that we’re still using even older GPUs than those mentioned above. I’m not sure what she’s doing, but as I write, @rose_e_wang is running some job over a bunch of 4xTitan Xp 12GB machines

2023-02-27 17:58:37 @BlancheMinerva I’d cover what you can do with 80, 40, 24, and 16GB GPUs. Like, the tables in this article were good (even though it didn’t do the 80GB case): https://t.co/XlePNDhSYu And also like it both current and older generation.

2023-02-27 17:56:36 @BlancheMinerva Smaller configs include: 8xA6000, 4xA5000, 8xRTX3090, 4xTitan RTX. It is common for academia to have consumer GPUs. Single GPU options are important both because of ratios of students:GPUs and because single node requires less technical expertise (even though it’s gotten easier)

2023-02-27 17:51:52 @BlancheMinerva Hi Stella, I think most university labs have motley collections of hardware and students variously end up dealing with many configs. They can’t all use the A100s at once! Certainly for us, we have 5.2.3 and 5.2.2 but students would also commonly fine tune on smaller machines …
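The GPU-tier discussion in this thread can be made concrete with a back-of-the-envelope check (the formula and model sizes below are illustrative assumptions, not figures from the thread): fp16 weights alone already pin a model to a memory class.

```python
def weight_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    """GiB needed just to hold the weights (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Which GPU tier (16/24/40/80 GiB) can hold fp16 weights for plain inference?
for p in (1.5, 7, 13, 30):
    gib = weight_gib(p)
    tier = next((t for t in (16, 24, 40, 80) if gib <= t), None)
    print(f"{p:>4}B params -> {gib:5.1f} GiB of weights, smallest tier: {tier} GiB")
```

Fine-tuning with Adam needs several times more memory again (gradients plus optimizer states), which is why the smaller single-GPU configs in the thread still matter.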

2023-02-19 15:34:29 RT @sama: the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly

2023-02-19 15:33:03 RT @MelMitchell1: This discourse gets dumber and dumber. Soon all news will be journalists asking LLMs what they think about stories by o…

2023-02-15 15:17:37 RT @atroyn: announcing the chroma embedding database. it's the easiest and best way to work with embeddings in your a.i. app. https://t.co/

2023-02-15 14:33:51 @landay I guess my original tweet suggests that we’ve found out that, in the hands of users, these models haven’t been that dangerous so far—not like TNT or even autonomous vehicles.

2023-02-15 03:51:30 @npparikh @rajiinio Hi Deb, I also agree with your tweet. ChatGPT in high-stakes uses would appall me—e.g., giving medical advice. But we can also contrast the predictions of danger &

2023-02-15 01:20:32 @MadamePratolung @marcslove I like collegiality too, honest. Lots of people talk about “the AGI crowd”, including @random_walker, @pfau, @yoavgo, and @rodneyabrooks. It’s not so marked. https://t.co/z4l5AYNm88 https://t.co/geMQ9EbRXx https://t.co/GcMNp18uFb https://t.co/hLyMC1aOG9

2023-02-15 00:14:43 @RoseAJacob Yes, “but” sets up a contrast, which relates 2 statements, but they need not be a priori related, conflicting or contradictory, eg: “There’s a war raging in Ukraine, but I’m making coffee”

2023-02-14 22:59:34 @landay Indeed, there is no conflict. But that suggests a generative AI model is a tool like … a hammer. You can do a lot of damage with a hammer—they should carry a warning against striking living creatures with them—but mostly they are a great tool with all sorts of productive uses.

2023-02-13 22:11:50 @mmitchell_ai p.s. With all the usual qualifications about how people belong to multiple groups and groups have central and peripheral members, I nevertheless feel that there is a defined enough AI Ethics group that it is okay to refer to it, just like you might refer to “the JAX crowd”, etc.

2023-02-13 22:07:09 @mmitchell_ai Hi Meg, there’s no disliking. I like most everyone in the AI Ethics Crowd, in particular, you! And I believe the AI Ethics crowd has done much important, impactful work. Nevertheless, I do stand by my original post, and think the large gap it points to undermines credibility.

2023-02-13 21:00:23 RT @aixventureshq: We’re hosting an AI event for founders in the Bay Area on 3/11 Details 1⃣ AIX inc @antgoldbloom, @chrmanning, @pabbeel…

2023-02-13 17:02:13 Early 2023 vibes: The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable &

2023-02-09 15:18:01 I think I can imagine the bureaucrats’ brainstorming session—no bad ideas!—on their new residential neighborhoods lacking character at which someone suggested “I know, we could organize ideation sessions for the neighborhoods to choose some traditions!” https://t.co/cxAG2k5a2X

2023-02-09 04:40:59 @deliprao I think that last issue was the main reason for the stock losing $100B in value. “We still have an internal language model that we’re still not going to let anyone else try out, but it’ll be called ‘Bard’ when we do release it” just didn’t quite cut it as a major announcement.

2023-02-08 19:27:32 RT @aixventureshq: Check out the Australian Financial Review’s article on the most cited NLP researcher, AIX Ventures Investing Partner @ch…

2023-02-06 20:33:38 RT @atroyn: announcing stable attribution - a tool which lets anyone find the human creators behind a.i generated images. https://t.co/eHE

2023-02-06 00:50:05 RT @RishiBommasani: @anmarasovic When I started in NLP research, I knew no Python/PyTorch/ML and never had done research. @clairecardie to…

2023-02-05 16:09:08 @yoavgo @ylecun It’s an interesting reversion in terminology! When interviewing for my Stanford job in 1999, precisely what the Stanford old guard wanted to know was my take on reaching human level AI, as opposed to the simple ML of the 1990s. https://t.co/I3Zx9yd1Jh https://t.co/fL44s3lTt0

2023-02-02 15:27:08 @sama I seem to remember that someone suggested “Foundation Models” as a good name

2023-02-02 15:04:31 @roydanroy Isn’t that unclear? If the context is answering a question or obeying an instruction, can’t an LM learn to condition on that and answer much as the RLHF humans are doing? This seems to require only that most conversations in the original data follow Grice’s Cooperative Principle.

2023-02-02 03:40:30 @peterjansen_ai I repeatedly feel that Martha Palmer hasn’t got the credit she deserves (CC @BoulderNLP)

2023-02-02 03:25:44 RT @random_walker: Yeah no. Most of us outside a certain clique don't insist on putting plucked-out-of-the-air probabilities on eminently u…

2023-02-02 03:23:18 I’d thought this swing to self-supervised learning was meant to reduce the need for data annotation? https://t.co/VPwtNEZHnY

2023-01-14 18:23:54 RT @bradproctor: As Elon tweets about transparency, developers and users of @tweetbot, @Twitterrific, and @echofon sit waiting 24 hours lat…

2023-01-13 17:34:54 RT @UvA_Amsterdam: The first honorary doctorate goes to scientist Christopher Manning @chrmanning @Stanford, for his unprecedented contribu…

2023-01-13 17:34:44 RT @UvA_Amsterdam: The Dies Natalis has begun! After a long time, once again with a full cortège. Watch along live via https://t.co/s0HCXYGlVo or v…

2023-01-13 15:41:37 RT @DLDConference: "AI should be inspired by human intelligence. The human brain is the single most sophisticated device in the world. #AI…

2023-01-11 18:02:43 RT @rajiinio: Tesla has been pushing "driver liability" for years to shift the blame &

2023-01-11 01:27:40 Here’s some tweeting of the talks by me and @MarietjeSchaake on “Humane AI” in Amsterdam on Monday by @Mental_Elf (thx!) https://t.co/OOl4mBL8dO

2023-01-09 21:46:35 @MarietjeSchaake @UvA_Amsterdam @wzuidema Thanks, great seeing you today, and looking forward to your making it back to @StanfordHAI, @MarietjeSchaake!

2023-01-09 21:44:56 RT @MarietjeSchaake: Wonderful to see ⁦@chrmanning⁩ in Amsterdam where he will receive an honorary doctorate at my alma mater ⁦@UvA_Amste…

2023-01-03 22:13:09 RT @wzuidema: Final call for participation: join us this Friday and Saturday in Amsterdam for a workshop celebrating the honorary doctorate…

2023-01-03 16:51:59 @ScaleTechScott

2023-01-02 21:05:35 @DeeKanejiya That was true in the 2000s, and even for most of the 2010s, but I don’t think we have seen good evidence of it from explicit human engineering in the 2020s, only from models learning linguistic structure themselves, as in https://t.co/iEg7L3BZp9

2023-01-02 20:58:27 @AiSimonThompson @NIST That’s why it “can be an appropriate response”. But, mostly, when people ask a search engine, e.g., when did ABBA’s Dancing Queen come out, they just want the answer.

2023-01-02 20:49:03 @yuvalmarton @marian_nmt I sort of agree, but maybe only soft inductive biases

2023-01-02 19:32:51 RT @marian_nmt: @yuvalmarton @chrmanning The relation of NLP and linguistics seems to be one where having a good background understand of l…

2023-01-02 19:22:34 @yuvalmarton @yoavgo Oops, I meant to write “descriptive” not “observational”. Where is that edit button again?

2023-01-02 19:21:53 @yuvalmarton @yoavgo One definitely needs a place to start! Delineating the Chomsky hierarchy was a huge contribution—though recognizing mildly context sensitive languages came from outside the Chomskyan core. But having “perfect” CFGs only gives observational adequacy! See: https://t.co/MQLBH1bv9l

2023-01-02 18:50:29 Reviewing older work—here’s Ellen Voorhees @NIST in CIKM 2001. 20 years on, we’re almost there! “Traditional text retrieval returns a ranked list of documents. While it can be an appropriate response, frequently it is not. Usually it would be better to provide the answer itself.”

2023-01-02 18:23:45 This does show something fascinating! But not that linguists’ knowledge of language is “bunk”. Rather, what has mainly been a descriptive science—despite what Chomsky claims!—hasn’t provided the necessary insights for engineering systems that acquire and understand language use. https://t.co/p0Tka9WhZz

2023-01-02 18:11:42 RT @lipmaneth: 1/ Open source businesses are fascinating. Here's a quick history on how @huggingface built a $2B co by giving away its so…

2022-12-30 00:29:39 RT @petewarden: Wish you could control your TV or speakers with gestures? Check out the new online demo of our Gesture Sensor and then come…

2022-12-28 23:51:19 @AmandaAskell Yeah, that could actually be right!

2022-12-28 04:47:52 Wondering how regarding someone with an MLitt in Philosophy as a CS/Math techbro is going to go down

2022-12-21 18:20:12 Some of the paper is tied to a now-dated specific context. But promoting the task, emphasizing pragmatics and speaker meaning, and incorporating world knowledge and uncertainty were all good moves! For a modern take, see Ellie Pavlick @Brown_NLP’s paper: https://t.co/3bOJZBLXG1

2022-12-21 18:20:11 I wrote this paper 17 years ago—December break 2005—advocating for a new NLU task introduced by Ido Dagan @biunlp: He called it Recognizing Textual Entailment

2022-12-21 00:50:18 RT @russellwald: Closing out the year strong @StanfordHAI w/2 important policy pubs on fed AI policy. A HAI/RegLab white paper finds the f…

2022-12-20 22:17:05 @yoavgo @csabaveres @cohenrap This isn’t “Language forces us to commit to an idealized expression". The author of the last one could have used “Indicates that execution of the code should be terminated if it ever loses its validity”. Human minds like metaphors, so indeed, LLMs need to learn to interpret them!

2022-12-20 21:25:10 @yoavgo @cohenrap I think some of my code wants to be killed too

2022-12-20 21:02:56 @yoavgo @cohenrap Similarly with “understands”. E.g., compiler people often talk about what a compiler understands: “the compiler understand it is a pointer”, “the compiler understands the lifetime of every variable” [all real textual examples in these two tweets! Corpus linguistics, yay!!!]

2022-12-20 21:00:01 @yoavgo @cohenrap It’s not a differing fact about English. Metaphorical sense extension is common in language. People talk about the behavior of all sorts of inanimate things: “the behavior of the sea”, “the behavior of the acoustic distortion product”, “the behavior of the producer price index”

2022-12-17 21:56:36 RT @MeigimKriol: La Katherrain la kaunsil offs dei garram nyuwan sain gada Kriol! Wani yumob reken? Im gudwan? https://t.co/ZcAfWkV9G9

2022-12-17 20:41:42 RT @Abebab: longtermism might be one of the most influential ideologies that few people outside of elite universities &

2022-12-17 20:35:50 @ChrisGPotts @KarelDoostrlnck Entre les tours de Bruges et Gand…

2022-12-14 15:33:51 RT @timnitGebru: Read this by @KaluluAnthony of https://t.co/bf2uXxretK. "EA is even worse than traditional philanthropy in the way it ex…

2022-12-10 20:07:57 RT @cHHillee: Eager mode was what made PyTorch successful. So why did we feel the need to depart from eager mode in PyTorch 2.0? Answer: i…

2022-12-10 19:27:40 RT @jaredlcm: @chrmanning While this paper of mine isn't generated by a silicon language model I do think it captures the kind of balanced…

2022-12-10 19:23:19 @NatalyFoucault @americanacad A philosophical question—it’s not clear! It may only be possible to learn textual meanings of further words due to grounded experience of some

2022-12-09 03:50:09 @KordingLab That I really talked about the right topic at that last CIFAR LMB meeting?!?

2022-12-08 22:41:41 @emilymbender Anyone taking things out of context⁈ Looking at the talk—from an industry venue: https://t.co/UMlczZZJR3 after a detour on what self-supervised learning is, exactly where it goes is that big models give large GLUE task gains but at the cost of hugely more compute/electricity… https://t.co/kWESAvHXkl

2022-11-15 16:08:46 @FelixHill84 @gchrupala There is indeed a lot of excellent work there! However, the OP had asked for “breakthroughs in theoretical linguistics”, and while what counts as “theoretical” is a value judgment, my gut sense was that Lakoff et al. definitely wouldn’t count.

2022-11-15 16:05:28 @kadarakos @gchrupala Yes, indeed.

2022-11-15 02:34:00 @FelixHill84 @gchrupala Wait, I thought I wrote a tweet each on work on formal semantics &

2022-11-14 19:17:22 @gchrupala OK fair enough. I hope my list was useful. In reverse, of the linguistics in NLP, till 1957, you can have PoS, morphology, phrase structure, dependency and unconstrained transformations, if you want them. But most all else came later, whether HMMs, PCFGs or other linguistic ideas

2022-11-14 18:25:04 @gchrupala - Not only did sociolinguistics overall grow up in the 60s, but the development of formal probabilistic models of variation (variable rules) and code-switching also dates to the 60s- (I’ll stop here, but one could add even more areas of linguistics.)

2022-11-14 18:22:23 @gchrupala - Phonology: Everything from Chomsky and Halle’s Sound Pattern of English through many useful concepts in metrical phonology, autosegmental phonology, and optimality theory or harmonic grammar came out from the 60s through the 90s

2022-11-14 18:20:48 @gchrupala - Syntax: A lot of very good foundation material about how languages work, how to describe them and their cross-linguistic patterning was developed in the 60s, 70s, and 80s: X’ theory, the phenomena originally called raising/equi, grammatical relation changing operations

2022-11-14 18:20:07 @gchrupala - Pragmatics: This only emerged as a field with theoretical content in the 1960s, starting with Grice’s work and the 70s through the 2000s saw the development of formal accounts of pragmatic phenomena such as implicatures and presuppositions (Karttunen, Potts, etc.)

2022-11-14 18:16:11 @gchrupala - Semantics: Basically all work on formal semantics is after 1957, including Montague’s Proper Treatment of Quantification in Ordinary English and all the subsequent development of formal semantics by Partee and others.

2022-11-14 18:14:56 @gchrupala - Mathematical linguistics: Most of the work developing properties of formal languages was done after Chomsky started things off in the mid 50s, including the work by CS people like Hopcroft, Aho, Ullman

2022-11-14 18:12:28 @gchrupala Even if you believe that mainstream theoretical linguistics has lost the plot in the 21st century, I think the presupposition of this question is quite absurd. Almost nothing was known about theoretical linguistics in 1957! So there are examples in every direction you might look:

2022-11-14 16:59:21 Human-in-the-loop reinforcement learning—DaVinci instruct—may be the most impactful 2022 development in foundation models. What can we achieve by reinventing the AI design process to start from people’s needs? Watch tomorrow’s @StanfordHAI conference 9 PST https://t.co/CCbFEnqklS https://t.co/XyFAAElmCJ

2022-11-13 15:51:38 RT @henrycooke: New Zealand is at 99% renewable electricity generation right now. You read that right. https://t.co/7Wzf9HGlvz

2022-11-13 15:50:31 RT @NC_Renic: Academics undaunted by the news that being unverified will mean that no one reads your tweets. We’ve been training for this…

2022-11-12 03:41:33 RT @petewarden: We need help turning our fugly, airport-security-unfriendly, held-together-with-Blu-Tack prototypes into clean looking sale…

2022-11-11 15:39:46 @sethlazar @robreich @mehran_sahami @landay @drfeifei Not yet….

2022-11-05 21:22:05 @yoavgo I like that one too. But you were asking for ones from 25 years ago.

2022-11-05 21:20:48 @zehavoc @yoavgo @wzuidema I agree it’s a good example of an attempt to carefully replicate a model, but I honestly couldn’t imagine asking a student to read it these days. Full of in-the-weeds details of modeling methods that no one in their right mind should have to care about in 2022.

2022-11-05 21:17:57 @yoavgo It’s a great example of a complex generative model from the probabilistic era

2022-11-05 21:08:56 @yoavgo @zehavoc @wzuidema No

2022-11-04 05:31:09 @zehavoc @stanfordnlp Thanks! Funny coincidence: I was just learning about Arabizi today from Mona Diab. Unfortunately, I didn’t know about your paper and I guess we were more searching for refs on “traditional “ creoles rather than this “new” creole.

2022-11-03 00:36:59 RT @sundarpichai: 1/ From today's AI@ event: we announced our Imagen text-to-image model is coming soon to AI Test Kitchen. And for the 1st…

2022-11-02 22:22:59 RT @StanfordHAI: This year’s HAI Fall Conference on Nov. 15 will focus on design principles, values, and guidelines to ensure that AI syste…

2022-11-02 20:12:43 I’ve found a gaggle of twitter spam accounts because one of their tweets matches Stanford NER: The weird way they write “Gard(i)ner” word-breaks the “ner”. Like they say, you’d think coordinated inauthentic behavior like this would be easily detectable! https://t.co/qaX3Bg1ORl
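The word-break effect described in this tweet is easy to reproduce with a plain word tokenizer (this regex sketch is an illustrative assumption, not the actual Stanford NER pipeline):

```python
import re

spam_spelling = "Gard(i)ner"  # the obfuscated spelling the spam accounts used
# A simple \w+ word tokenizer splits on the parentheses, so "ner"
# surfaces as a standalone token and can match on its own.
tokens = re.findall(r"\w+", spam_spelling)
print(tokens)  # ['Gard', 'i', 'ner']
```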

2022-11-02 19:52:15 Good thread! https://t.co/BGKYh8lGgP

2022-11-02 16:25:37 @yoavgo Collins 1997 Three Generative, Lexicalized Models for Statistical Parsing!

2022-11-02 16:24:41 @wzuidema @yoavgo Yeah, that was a good one!

2022-10-23 02:01:14 @ChrisGPotts @RishiBommasani @tallinzen @NYUDataScience @cocoweixu @david__jurgens @dirk_hovy @percyliang @jurafsky @clairecardie It feels a little unfair to be comparing a posed picture to an out-of-focus video capture, but there’s no denying @RishiBommasani’s shirt is a bold color!

2022-10-20 16:57:06 RT @StanfordAILab: Last night, 50 years to the day after the pioneering Intergalactic SpaceWar Olympics first video game contest (https://t…

2022-10-20 16:56:16 RT @petewarden: Launching @UsefulSensors! https://t.co/WUcGnF8Mky

2022-10-19 19:42:16 RT @petewarden: I'm finally able to talk about what I've been up to for the last six months! https://t.co/4qibCjUCIT

2022-10-19 15:29:54 RT @michiyasunaga: Excited to share our #NeurIPS2022 paper “DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining”, w/ the amazin…

2022-10-18 16:28:03 RT @lilyjclifford: i'm announcing the company we've been building. it's called rime. here's a tiny demo of what truly generative text-to-…

2022-10-18 15:26:56 RT @landay: Write up on our upcoming fall conference: “AI in the Loop: Humans Must Remain in Charge” https://t.co/5POjonMDtY

2022-10-18 14:55:19 RT @antgoldbloom: Tonight I attended a @StabilityAI event where they previewed generative animation. On the way home, I passed a @Cruise ca…

2022-10-18 14:36:57 RT @jugander: Looking forward to this @StanfordHAI workshop Nov 15 on "AI-in-the-loop": https://t.co/ZfeHcD4R8Q And resurfacing @ChenhaoTan…

2022-10-18 03:09:23 @RPuggaardRode @gwalkden @JennekevdWal @Lg_on_the_Move As a person whose most-cited first-author publication is “Why most published research findings are false”, Ioannidis now seems to be working to stack the deck to support his prior conclusions!

2022-10-18 02:50:42 RT @shaneguML: Attended @aixventureshq’s first community event a week.I entered deep learning in 2012 after ImageNet. I saw the craze bac…

2022-10-13 14:35:32 RT @StanfordHAI: Rethink the loop: At our fall conference this November, we challenge the phrase “humans in the loop” and advocate for keep…

2022-10-12 02:33:58 RT @robreich: GREAT fellowship opportunity at @StanfordPACS &

2022-10-10 04:10:22 @geeticka1 @rayruizhiliao Yes, indeed!

2022-10-07 15:09:56 RT @curtlanglotz: Some thoughts on the industry approach to AI in radiology (to date). A thread:@stanfordAIMI @StanfordHAI

2022-10-06 18:40:57 @michael_nielsen Thanks! And published versions of my Human Language Understanding &

2022-10-05 16:41:33 RT @HannaHajishirzi: My alma mater, Sharif University of Technology, Iran's premier university, was under siege yesterday. Many students we…

2022-10-03 03:45:11 RT @SoloGen: If you're a professor or a student in STEM in a Western country, you probably know someone from the Sharif University of Techn…

2022-09-28 04:54:16 @3scorciav @CVPR @jcniebles Thx!

2022-09-28 03:37:26 @3scorciav @CVPR for 8k subs, you have 16K+4K+2K=22K authors, and using everyone experienced you’d have 4K+2K=6K reviewers, and, if the number of experienced PhDs is similar to the number of postdocs, then 1/2 the reviews are being done by PhD students. Pretty similar to the reality!

2022-09-28 03:34:56 @3scorciav @CVPR Also, I don’t think the stats are so surprising when you think them through. A simplistic rough model of academic CVPR papers might be that each has 4 authors: 2 young inexperienced students, 1 experienced PhD/postdoc who is on 2 papers and a faculty PI who is on 4 papers. Then…
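The toy model in this pair of tweets can be checked in a few lines (a sketch of the tweet's own rough model; the per-paper author mix is the assumption stated in the tweet, not real conference data):

```python
subs = 8_000  # CVPR-scale submission count, per the tweet

# Toy model: each paper has 2 inexperienced students (1 paper each),
# 1 experienced PhD/postdoc (on 2 papers), and 1 faculty PI (on 4 papers).
inexperienced = 2 * subs    # 16K distinct people
experienced = subs // 2     #  4K distinct people
faculty = subs // 4         #  2K distinct people

authors = inexperienced + experienced + faculty  # 16K + 4K + 2K = 22K
reviewers = experienced + faculty                # 4K + 2K = 6K
print(f"{authors:,} authors, {reviewers:,} plausible reviewers")
```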

2022-09-28 03:31:31 @3scorciav @CVPR I’m actually not against the idea that people should be required to give back by reviewing. However, I think a successful system needs a careful blend of carrots, sticks, and flexibility, and at least the passed CVPR motion used the second without any of the first or the third.

2022-09-28 03:00:56 @yoavartzi Great visualization, but, really, these aren’t Likert scores at all!

2022-09-27 20:22:35 RT @BigCodeProject: print("Hello world! ")Excited to announce the BigCode project led by @ServiceNowRSRCH and @huggingface! In the spiri…

2022-09-25 15:35:37 RT @sethlazar: Time to retweet this! Another big year for junior recruitment in philosophy (esp ethics) and AI. There still hasn't been eno…

2022-09-23 16:15:22 @jonilaserson Oh, interesting

2022-09-23 16:05:38 Coincidentally coming out right after this tweet thread, there’s a new review of Multimodal Biomedical AI in @NatureMedicine, which has a very nice paragraph covering ConVIRT. Many thanks @jn_acosta, @GuidoFalconeMD, @pranavrajpurkar, @EricTopol! https://t.co/mr2hTroXKU https://t.co/1vKnJSiX30

2022-09-22 15:56:21 @EugeneVinitsky @davidchalmers42 @ylecun @GoogleAI That it is more effective in representing a world than models trained multimodally like (the smaller) CLIP, in particular better modeling object relationships such as spatial position, indicates that surprisingly good world models can be trained from text alone. 3/3 https://t.co/2TGhcJOpdk

2022-09-22 15:53:40 @EugeneVinitsky @davidchalmers42 @ylecun The best current evidence of LLMs considerably modeling world situations is @GoogleAI’s Imagen model https://t.co/udBdkKCCWP — which provides the surprising result that a frozen pre-trained LLM is very effective as a representation from which to generate images by diffusion. 2/3

2022-09-22 15:51:47 @EugeneVinitsky @davidchalmers42 @ylecun As often in tech development, we’re at a stage where LLMs have a glass-half-full language model. They certainly can’t do reasoning or assembly problems of the sort @ylecun mentions but there is strong evidence that current LLMs have more of a world model than you might think! 1/3

2022-09-22 01:21:27 “Fasten the nut and washer back in place with your wrench” — Well, I must object that it’s a “spanner” in my dialect! (And outside North America in general.) https://t.co/8aVfGUlsoj

2022-09-21 20:17:34 RT @gruber: The best thing about this copy-paste permission alert that is driving everyone nuts in iOS 16: the apostrophe in “Don’t Allow P…

2022-09-21 20:16:03 @panabee This problem has largely been solved: Papers get put on arXiv. But paper acceptances are still important both for student recognition and careers, and for wider paper dissemination.

2022-09-21 19:57:01 @kdexd Yeah, there’s randomness and luck, and, in the final reckoning, science advances either way. So, it’s best to be philosophical. But it can be very hard to take when you’re a student doing your finest work.

2022-09-21 17:36:11 Meanwhile, @ESL_Sarah, @gdm3000 &

2022-09-21 17:36:08 Meanwhile, colleagues at Stanford further extended and improved ConVIRT, leading to the approach GLoRIA by Shih-Cheng Huang, @syeung10 et al. at ICCV2021 and CheXzero by Ekin Tiu, @pranavrajpurkar et al. in Nature Biomedical Engineering 2022 https://t.co/SeqUWVGR5F

2022-09-21 17:36:05 And that led to a lot of other vision work exploiting paired text and images to do contrastive learning of visual representations, such as the ALIGN model from Chao Jia et al. at Google (ICML 2021) https://t.co/4tlyUDwwQx

2022-09-21 17:36:04 Luckily, some people read the paper and liked the idea! @AlecRad &

2022-09-21 17:36:02 However, sometimes you don’t get lucky with conference reviewing—even when at a highly privileged institution. We couldn’t interest reviewers at ICLR2020 or ICCV2021. I think the fact that we showed gains in radiology (x-rays) not general vision seemed to dampen interest….

2022-09-21 17:36:01 The paper (Contrastive Learning of Medical Visual Representations from Paired Images and Text, @yuhaozhangx @hjian42 Yasuhide Miura @chrmanning &

2022-09-21 17:36:00 I’m happy to share the published version of our ConVIRT algorithm, appearing in #MLHC2022 (PMLR 182). In 2020, this was a pioneering work in contrastive learning of perception by using naturally occurring paired text. Unfortunately, things took a winding path from there. https://t.co/CUwAZftKlV

2022-09-19 22:27:34 @roger_p_levy @zehavoc (Having now read OED entry:) I guess either sentiment could be intended! But the negative sense 1 really does seem to dominate (my original negative sentiment definition is all NOAD gives). Or maybe externalism has just “broken out” again with no sentiment intended?

2022-09-19 22:14:53 @roger_p_levy Oh! That’s certainly not how I took it! Negative connotations really do seem to dominate (in both English and French – see the sub-thread with @zehavoc).

2022-09-19 19:50:19 @zehavoc Isn’t it probably from the medical sense where it’s the re-emergence with visible symptoms of something bad like a rash or malaria, so it’s necessarily negative?

2022-09-19 17:44:17 @zehavoc Sounds plausible. I’ve now looked through results 11–30. All in French. Many more non-medical usages like you suggest: la recrudescence de la croyance dans la sorcellerie, Face à la recrudescence de ces escroqueries, …

2022-09-19 17:37:26 RT @PaulaBShannon: @chrmanning This usage made my day — in fact, my month. I was tickled to see your third thread as I had dropped my phon…

2022-09-19 17:36:57 @zehavoc Searching on Google in English, looking at the News tab so I don’t mainly get dictionaries, 8 of top 10 results are in French—pretty unusual. Many are the medical use—including 1 in English. The other English use: song title of a Francophone Canadian. So, yeah, rare in English.

2022-09-16 00:09:14 @David_desJ I’m happy to agree with this. I’m not one of the people who believes that we’re 10 years away from AGI (whatever exactly that means). But we have still seen a very substantial step – larger than any that occurred in the preceding 40 years.

2022-09-15 14:19:36 It must take a very particular kind of blindness to not be able to see that we have made substantial steps—indeed, amazing progress—towards AI over the last decade … https://t.co/eAIwUloRM4

2022-09-15 02:22:31 @kchonyc @srchvrs @xiaochang @deliprao @earnmyturns Sure, but he (or Markov) didn’t use the term “Language Model”

2022-09-15 00:46:12 @xiaochang @srchvrs @deliprao @earnmyturns Oh, and that should be “Bahl”. *Autocorrect

2022-09-15 00:45:19 @xiaochang @srchvrs @deliprao @earnmyturns But the bigram was used earlier. It seems common in translations of Russian, e.g., of I. Mel’chuk 1961 Some Problems of Machine Translation Abroad refers to Chomsky’s “language model” of immediate constituents and there are other usages in psycholinguistics and education papers

2022-09-15 00:39:56 @xiaochang @srchvrs @deliprao @earnmyturns You’re right to trace Language Model in the probabilistic sense to Jelinek, @deliprao. Indeed Jelinek took credit on behalf of his IBM group in his ACL LTA address: https://t.co/JSXdw7Gw12 fn. 3 but a slightly earlier reference is Jelinek, Baal, and Mercer 1975 IEEE Trans on IT

2022-09-11 17:02:15 RT @byersblake: At Google Venture a decade ago we searched for AI enabled companies and came up dry. That has changed. AI is going to eat s…

2022-09-09 16:14:27 Many small private schools have CS enrollment caps

2022-09-09 16:11:24 I apologize that this tweet was insensitive to the challenges of others—I do see in hindsight how it appeared elitist. However, there are real institutional choices here! It’s not only that smaller and richer makes life easier. 1/2

2022-09-09 16:04:53 RT @bianca_caban: This is a great overview from @aixventureshq Investment Partner @chrmanning on the most recent advances in large language…

2022-09-06 20:04:47 Meanwhile at @Stanford, we just encourage all students to take as many CS courses as they would like … https://t.co/MZppLiqetu

2022-09-03 18:34:01 RT @bianca_caban: It was great getting together with our team, founders, and LPs to celebrate the launch of @aixventureshq. Shoutout to @sh…

2022-08-27 16:45:25 RT @wzuidema: @chrmanning @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang @RishiBommasani More support for the jazz…

2022-08-24 19:42:39 RT @StanfordHAI: Help define the future of human-centered AI. We are seeking a Deputy Director to oversee research, education, policy, part…

2022-08-22 19:59:15 @adawan919 @MasoudJasbi But some things I won’t use: E.g., although I love Unicode and Ken Lunde’s book, it’s just not right for this course, and I’m going to avoid where possible anything that’s 21st Century, since I just think this course gains from the complementarity of focusing on older work.

2022-08-22 19:55:51 @adawan919 @MasoudJasbi Thanks for all the work on this, Ada! I’m now myself going off on vacation in a couple of days, so probably a bit before I go through this in detail, but definitely some good suggestions here that I can use.

2022-08-19 01:49:11 @elgreco_winter Amen

2022-08-18 00:52:53 @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang @RishiBommasani While I only came up with the jazz analogy this week, I think it’s not a bad one: People observed something new and majorly different happening in music and they gave it a name. At that point, it’s like all linguistic change: some names stick and some don’t. I’m hopeful.

2022-08-18 00:48:50 @soumithchintala @tdietterich @roydanroy @karpathy @ylecun @percyliang I think you’re mainly right, @soumithchintala. But there was no flag planting or cookie licking. We didn’t claim to have invented anything. Rather, as @RishiBommasani said, we observed a broad paradigmatic shift with many dimensions, with no good name, and sought to give it one.

2022-08-17 20:17:51 @roydanroy @tdietterich @ylecun @percyliang Ah yes, but how do you refer to jazz, now 100 years later?

2022-08-17 01:28:20 RT @karpathy: !!!! Ok I recorded a (new!) 2h25m lecture on "The spelled-out intro to neural networks and backpropagation: building microgra…

2022-08-15 15:15:18 @bugykoda @AbraxisSoftware @StanfordAILab This isn’t top journals

2022-08-14 23:17:22 RT @RoKhanna: We need term limits for Supreme Court Justices. My bill calls for 18 years. They can stay as judges on lower courts for life.…

2022-08-14 23:05:03 @sudhirPyadav @tdietterich @RishiBommasani @percyliang But a language model isn’t that. It’s a probability distribution over strings—as @tdietterich wrote. Common word meanings would give the broad general meaning of a model of human language but no—an LM says nothing about phonetics, pragmatics, sentence structure, social usage, etc

2022-08-14 17:52:39 @tdietterich @ylecun @percyliang (And I should add that the reason that the data scale has gotten a bit smaller is that people have started paying a bit more attention to filtering data—not before time!)

2022-08-14 17:44:52 @tdietterich @roydanroy @ylecun @percyliang Maybe the two aren’t so different really? Putting a name on a profound shift that was already happening in a domain — music and machine learning, respectively

2022-08-14 17:41:40 @tdietterich @roydanroy @ylecun @percyliang “When Broadway picked it up, they called it 'J-A-Z-Z'. It wasn't called that.”—Eubie Blake https://t.co/5PvJH2Pp5n

2022-08-14 17:37:16 @tdietterich @ylecun @percyliang I agree with this—in language, meaning is contextual. But, here, the scale of data hasn’t changed recently. The 2007 Large Language Models were already being built on 1 trillion words of broad coverage language data—a bit larger than The Pile or PaLM or GPT-3’s training data

2022-08-14 17:26:47 @roydanroy @tdietterich @ylecun @percyliang https://t.co/Y5Rrmk2xRk

2022-08-14 17:16:05 @tdietterich @ylecun @percyliang Receipts: https://t.co/t9hUChDs9q https://t.co/P6rZtKKB2B

2022-08-14 17:14:05 @tdietterich @ylecun @percyliang That may be the history seen from ML, but it isn’t the #NLProc history where “Large Language Models” were used since 2006—using that name! But without today’s representation learning neural net magic, they didn’t provide the revolutionary multitask abilities of Foundation Models.

2022-08-14 16:53:28 @sudhirPyadav @RishiBommasani @tdietterich @percyliang I think I’ll mainly just sit back with popcorn and watch, but … if this is the criterion, the term “language model” should have been banned 40 years ago! Surely it is way worse in having a broad general meaning from normal English that confuses and misleads people?!?

2022-08-12 14:36:15 RT @antgoldbloom: Just finished v1 of the new recommender system I'm building. Results so far are incredibly promising https://t.co/5QdiaYX

2022-08-11 20:16:15 @AmandaAskell But, at the end of the day, I’m certainly not a philosopher and I agree that quite a lot of it comes down to which beliefs an individual feels ring true. So, peace! And, for me, I’ll stick with @ShannonVallor 2/2 https://t.co/bgoIdDLeiK

2022-08-11 20:10:18 @AmandaAskell I agree there’s a very broad range of views among philosophers and that we should evaluate arguments by quality not appeal to authority but philosophers—including Parfit—do have a disciplinary depth that I don’t see in many discussions on these topics around Silicon Valley 1/2

2022-08-11 03:20:25 @AmandaAskell Derek Parfit is most certainly a real philosopher. But the argument gets more complex: AFAIK, he argues against a pure social discount rate, but to the extent we all have so little idea what the world will be like in 100s of years, he’s fine with a Probabilistic Discount Rate.

2022-08-11 02:43:38 Maybe we should pay more attention to real philosophers rather than wannabes?( on EA, longtermism, and AI) https://t.co/U1ERFXJae3

2022-08-10 17:04:03 RT @realTinaHuang: Made it to day of the ⁦@StanfordHAI⁩ Congressional Boot Camp on AI! Staffers are learning about the Silicon Valley e…

2022-08-10 14:20:49 @maximelabonne @Cappuccino_Math @stanfordnlp @HannesStaerk Yes, after decades of inculcation of the importance of data structures in CS, it’s unsettling but somehow exciting that you can do so well by using a Transformer model for “everything” with just a simple, minimal encoding of the original data

2022-08-09 23:28:30 @Cappuccino_Math @maximelabonne @stanfordnlp @HannesStaerk Yeah, Transformers are Graph Neural Networks, cf. https://t.co/BTORTrtJqe, but beyond their being a rather special particular architecture, the interesting thing is whether you do just as well with an off-the-shelf transformer as with the many bespoke GNN architectures proposed
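The “Transformers are Graph Neural Networks” view in the tweet above can be made concrete with a minimal sketch: single-head self-attention is message passing on a fully connected graph, where every token aggregates from every other token with edge weights given by softmax’d dot products. This is a hypothetical illustration, not code from the linked article:

```python
import numpy as np

# Nodes = tokens; edges = all pairs; edge weights = attention scores.
rng = np.random.default_rng(0)
n_tokens, d = 4, 8
X = rng.normal(size=(n_tokens, d))             # node features (token vectors)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                  # one score per directed edge
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each node's neighbors

out = weights @ V                              # aggregate "messages" along edges
print(out.shape)                               # (4, 8): updated node representations
```

The bespoke-GNN question in the thread then amounts to asking whether learned, dense attention weights do as well as hand-designed sparse graph structure.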

2022-08-08 16:54:15 @adawan919 @MasoudJasbi Well, I’m talking about my Stanford class Linguistics 200, but it’s essentially the same as @MasoudJasbi’s in content and title “Foundations of Linguistic Theory”. It’s for grad students.

2022-08-08 16:46:44 Behind the AI hype, increasingly capable AI systems are rapidly being deployed. They can hugely improve human health, creativity, capabilities &

2022-08-08 16:46:43 While simultaneously launching this week our @StanfordHAI AI Policy Bootcamp to try and increase the understanding of AI among policy makers and politicians, and proposing concrete actions like a National Research Cloud and a Multilateral AI Research Institute https://t.co/iO29Avuw7f

2022-08-08 16:46:42 However, an attempt to bridge can leave you in a lonely place in the middle: not fully on side with companies, too pro-tech and close to industry to not be pelted by “AI Ethics” full-timers, and simultaneously too close to and too far from international relations policymakers https://t.co/VBWaNwpRcd

2022-08-08 16:46:41 This tweet-outpouring from @jackclarkSF’s brain is very good. Well worth a read! However, it’s also such a large and freewheeling smorgasbord that it’s hard to take it all in! A few riffs on it with respect to @StanfordHAI below https://t.co/TpVp2ltIF9

2022-08-08 15:57:22 RT @robreich: The journey of effective altruism, from bed-nets to bio-risk and beyond.Fantastic profile of @willmacaskill in the @NewYork…

2022-08-07 23:11:08 @adawan919 @MasoudJasbi It’s for 2nd/3rd year PhD students to give them some historical foundations and context beyond standard grad classes. Barebones program overview: https://t.co/r8rfmkZoh0 Here’s a list of classes—though many take others beyond the Linguistics dept: https://t.co/0tbvbHEdbP,

2022-08-06 18:22:16 @adawan919 @MasoudJasbi Do these thoughts lead to any concrete suggestions of proposed readings?

2022-08-05 22:19:03 RT @realTinaHuang: So I know you all have been asking “what’s it like being ⁦@StanfordHAI⁩ ‘s policy program manager a day before hosting o…

2022-08-05 16:44:39 @MasoudJasbi Would be very happy to get your materials from Paul!

2022-08-05 16:44:02 @MasoudJasbi I do have rough thoughts of a reading list. My hope is to emphasize original materials—except for pre-1900—to read nothing from the 21st century and to regard the 1990s with suspicion. I’ll also have a bit of a lean towards symbols, computation, etc. Here’s what I have so far. https://t.co/guRuCVVkcJ

2022-08-05 16:40:28 @MasoudJasbi Sorry, mega-slow reply, but would be happy to share stuff. I was even hoping I might be able to find my notes from Paul from 1992. I probably won’t really sort things out until early September (since Stanford starts late and I’ve got some things to finish this month), but …

2022-08-03 20:12:04 RT @realTinaHuang: T-minus4⃣days until we kick off the @StanfordHAI congressional boot camp on AI! We're welcoming2⃣6⃣staffers to campus…

2022-08-01 04:58:01 RT @zeynep: @NateSilver538 Are there viable third parties anywhere with a first-past-the-post system? Regardless of ideological coherence?…

2022-07-31 20:31:44 @CirnoBaka6 Yes

2022-07-27 00:40:58 @MasoudJasbi Hey, so am I…. It’s complex what to choose and how to structure.

2022-07-26 18:41:25 A bit more nuance could be added to this 2nd para on Supervised Learning. Initial breakthroughs _were_ made in #NLProc via unsupervised learning prior to AlexNet—the word vectors of Collobert&

2022-07-26 18:41:24 I finally read @boazbaraktcs’s blog on DL vs Stats. A great mind-clearing read! “Yes, that was how we thought about NNs losing out due to bias/variance in ~2000” “Yes, pre-trained models really are different to classical stats, even if math is the same” https://t.co/8IjnMJjfc9

2022-07-26 02:09:15 RT @antgoldbloom: Been spending the last few weeks speaking to data scientists working on demand forecasting. Some interesting things I lea…

2022-07-22 21:55:19 @BogdanIonutCir2 @GaryMarcus @Meaningness How indeed?!?

2022-07-17 23:04:32 This seems like an important contribution to the external validity of the (big) recent line of work on long-context transformer models. https://t.co/LlsXuCoJCD https://t.co/9ZsP6sxsGY

2022-07-16 16:33:26 @ZhunLiu3 There was food that went with it

2022-07-13 22:22:28 @ryandcotterell @ChrisGPotts @adinamwilliams Yes, they are: Using linguistic theory to understand properties of singleton entities that can be encoded into ML features added a component that directly improved coreference models. Hence, it provided a new method that helped coreference systems.

2022-07-13 22:14:49 @ChrisGPotts @ryandcotterell @adinamwilliams Isn’t singleton mention detection for coreference a pretty nice example of something linguistically motivated that helped? (Though perhaps it hasn’t survived into the era of E2E neural coref models.)https://t.co/ZfuGc5lQs5

2022-07-13 18:23:54 RT @petewarden: I've always dreamed of seeing @TensorFlow Lite on a Commodore 64! https://t.co/0l7tQV233V

2022-07-11 17:08:43 RT @StanfordHAI: Introducing the #AIAuditChallenge – a $71K competition to design better AI audits. @StanfordHAI &

2022-07-11 15:56:58 RT @stanfordnlp: .@stanfordnlp grads at work: Congratulations (and a big thank you) to @MarieMarneffe (at The Ohio State University) on bei…

2022-07-11 15:27:39 @e96857c58f71610 I hope so!

2022-07-10 23:59:29 @Hassan_Sawaf We can catch up during the conference — but had to dash off to see my kid today….

2022-07-10 18:47:39 RT @ThingsCanberra: Bus stop - late afternoon https://t.co/dURm77KOVU

2022-07-10 15:05:19 Heading to Seattle for #NAACL2022. This will be my first travel to an in-person conference in over 2 ½ years (NeurIPS2019 in Vancouver to NAACL2022 in Seattle—but not via Puget Sound) https://t.co/XGX3cbDlMj

2022-07-03 15:21:42 RT @fictillius: Sydney aquarium staff on their way to deal with this at a Sydney train station. https://t.co/QXSlbu4uCv

2022-06-28 04:46:31 @tejuafonja FWIW, this happens to (some) Europeans too. I had a student on a J visa, due to fellowship reasons

2022-06-28 04:36:07 @roydanroy @kchonyc It also occurred to me after posting that there's a bit of a definitional question as to what counts as learning, but I meant to differentiate learning from things like making a markov assumption via backoff or mixing different order models

2022-06-27 17:35:10 @kchonyc Fair enough, though you could have avoided one or two strong statements like “Count-based language models cannot generalize”. At any rate: I agree it is interesting to better understand how neural LMs generalize and act and how that splits between the model and the decoding.

2022-06-27 16:40:53 @kchonyc (At the risk of appearing a grumpy old guy:)The discussion of autoregressive neural language modeling is interesting but these slides do totally elide the 30 years of work on how count-based language models can generalize without learning by smoothing the probabilities!
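The point in the tweets above, that count-based language models generalize via smoothing rather than learning, can be illustrated with a minimal add-one (Laplace) smoothed bigram model. This is a hypothetical sketch of the general technique, not from the slides being discussed:

```python
from collections import Counter

# Add-one smoothed bigram LM: unseen bigrams get nonzero probability
# purely from the smoothing term, with no learned parameters.
corpus = "the cat sat on the mat".split()
vocab = set(corpus)
V = len(vocab)
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])  # contexts (all tokens that have a successor)

def p(w2, w1):
    # P(w2 | w1) with add-one smoothing
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

print(p("mat", "the"))  # seen bigram "the mat": 2/7
print(p("cat", "mat"))  # unseen bigram: still 1/5 > 0, without any learning
```

Thirty years of count-based LM work refined this idea (Good–Turing, Katz backoff, Kneser–Ney) far beyond add-one, which is the body of work the tweet says the slides elide.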

2022-06-26 20:23:51 @ryandcotterell @alexandersclark @trochee @jasonbaldridge @LiUNLP I’m digging back a bit, but I agree with @alexandersclark that the right place to look is the Lambek Calculus/Type-logical Grammar take on Categorial Grammar. I think you’re wanting a multimodal system. Perhaps start with Morrill 1994 or Moortgat 1996: https://t.co/Kh6ICzuwQC

2022-06-23 15:43:59 We’re still offline: Stanford lost power Tuesday 3 pm. It’s still out, except limited power for hospital, etc. Mail to @cs.stanford.edu doesn’t work—use manning@stanford.edu Power at home, Twitter, Github, Huggingface, texts, basic NLP website do all work! https://t.co/6UjKlRuZEH https://t.co/0PNmbaouT3

2022-06-23 15:28:44 RT @ItaiYaffe: (1/9) #1DatabrickAWeek - week 44Last week (https://t.co/DxQz9mR8OI) I focused on the awesome #keynote speakers at the upco…

2022-06-21 20:49:33 RT @robreich: Can the GPT-4chan episode be counted as a part of the responsible practice of AI research?More than thirty AI researchers h…

2022-06-21 18:19:01 RT @percyliang: There are legitimate and scientifically valuable reasons to train a language model on toxic text, but the deployment of GPT…

2022-06-20 15:12:35 RT @minilek: Stanford made an important update to its admissions site this week (before/after photos attached). First, they state statistic…

2022-06-16 15:52:46 RT @chelseabfinn: Want to edit a large language model?SERAC is a new model editor that can:* update factual info* selectively change mo…

2022-06-15 17:33:54 @JustinMSolomon @stanfordnlp I agree it’s a bit of a loose analogy, but, still, there’d be nothing to stop it.

2022-06-14 23:26:54 RT @StanfordHAI: If the cake tastes a little salty, that’s just our tears. Many thanks to @MPSellitto, our departing HAI Deputy Director! M…

2022-06-14 22:29:23 RT @sebkrier: Somehow missed this, but yesterday the Chancellor announced a review of the UK's compute capacity led by @ZoubinGhahrama1. Th…

2022-06-14 02:49:32 RT @etzioni: I'm speechless. Please RT. https://t.co/gwvNeh8z3c

2022-06-07 01:05:38 RT @antgoldbloom: .@benhamner and I are stepping down from @kaggle as CEO and CTO to return to our startup roots. Excited to share that D.…

2022-06-06 21:43:39 @AnimaAnandkumar @Caltech @bjenik Congratulations!

2022-06-04 01:12:39 RT @ivanzhouyq: Here we have - no others but @chrmanning and @karpathy! https://t.co/VBKgTAPcrX

2022-06-02 23:40:27 @ambimorph @joelgrus Not Naive Bayes classification!!!

2022-06-02 14:38:08 @joelgrus Oh—wow—I had no idea! It by no means solves all problems in education, but the impact from good quality free educational resources on the Internet is heartwarming. I hope foundation knowledge isn’t actually “defunct” but for modern #NLProc material, see: https://t.co/j9rAkVxLwQ

2022-06-02 00:53:33 @csabaveres @rogerkmoore @GaryMarcus Symbolic grammars capture only a small part of human knowledge of language and they do it poorly. This isn’t an observation of the neural era. Sapir (1921) noted “All grammars leak.” This motivated probabilistic models of language before neural era—see https://t.co/n7jj5CSh6R 2/2

2022-06-02 00:49:59 @csabaveres @rogerkmoore @GaryMarcus Insofar as modern LLMs are universal associational learners, not attuned to the constraints of human language processing, we can agree, but I’m just not on board with the privileged position your paper gives to symbolic grammars. 1/2

2022-06-01 17:03:14 RT @raohackr: @GaryMarcus Except the essay mischaracterizes the NRC proposal: make or buy compute, whatever is cheaper (making). Lots of…

2022-06-01 15:58:21 RT @StanfordHAI: Highlights from last week: @stanford and HAI-affiliated faculty met with members of the European Parliament to discuss the…

2022-05-31 21:02:50 @LingMuelller @LanguageMIT @ryandcotterell All my recent UD papers use tikz-dependency. It’s serviceable but not awesome—I do a lot of hand-setting dependency arc heights since it won’t do them in tiers in the “obviously right” way for compact display. https://t.co/dfHGJOe9o5

2022-05-31 14:21:23 @rogerkmoore @GaryMarcus Maybe the name “language models” was prescient? When simply Markov models, yes, they were just models of word sequence probabilities. But now Neural LMs are models of language, which is why their distributed representations excel at machine translation, QA, anaphora, parsing, etc

2022-05-29 17:45:13 @wm @ibuildthecloud Let me know if you find a better solution! As organizations grow larger and I grow busier, Slack seems a less and less good solution. It’s making me think more favorably of email—it actually scales better in some respects.

2022-05-29 17:42:20 @wm @ibuildthecloud This shows Discord’s gaming roots—Nitro subscriptions are the main way Discord makes money, but if you don’t need animated gifs or other cosmetic perks, you can just ignore the occasional suggestions. They only turn up once per version. Better than paying monthly for every user.

2022-05-28 18:49:22 @wm @ibuildthecloud Seriously, Discord gets more normal every year and isn’t so different from Slack – the only thing that I still feel is fundamentally more right on Slack is the implementation of threads

2022-05-28 18:33:09 @wm @ibuildthecloud Try Harder ™

2022-05-26 15:05:16 RT @WomensAgenda: Dr @NeelaJan treats more avocado-related injuries in Australia than gunshot wounds. She first highlighted this in 2018,…

2022-05-26 14:28:35 RT @fchollet: Reminder that if you want access to more fine-grained political parties that better represent your views, you first need to s…

2022-05-23 15:37:53 @NickATomlin @roger_p_levy @juanbuis @Christophepas @callin_bull At any rate, it compares all the text in a light font weight vs. half the text in bold (and the rest in light font). A fairer comparison for traditional reading would put all of the original text on the left in a regular/medium weight font? Cf. https://t.co/K4hvn5pWAc

2022-05-23 14:42:28 I’m on board for the Ineffective Altruism movement! (HT @timnitGebru) https://t.co/OixfBTOW3t

2022-10-23 02:01:14 @ChrisGPotts @RishiBommasani @tallinzen @NYUDataScience @cocoweixu @david__jurgens @dirk_hovy @percyliang @jurafsky @clairecardie It feels a little unfair to be comparing a posed picture to an out-of-focus video capture, but there’s no denying @RishiBommasani’s shirt is a bold color!

2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…

2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…

2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.

2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear

2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.

2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD

2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs

2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of original IBM group, Bengio, @kchonyc , @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me

2022-11-16 21:27:44 @FelixHill84 @complingy @gchrupala Yeah, I’ve got some thoughts. First off, there are some emergentists in #NLProc, e.g., my colleague Dan @jurafsky would identify as one. But, I agree that there aren’t many. I think there are two main reasons: pedigree and predilections…. 1/3

2022-11-16 05:29:34 RT @k_mcelheran: This is SOOOO good! Hats off to the organizers for great streaming and accessibility. Delighted to be watching this from s…

2022-11-17 14:38:32 RT @StanfordHAI: Artist and computer scientist @laurenleemack has spent days working virtually as a human Alexa, created a 24-hour machine-…

2022-11-17 04:24:40 RT @robreich: The @voxdotcom interview by @KelseyTuoc with SBF reveals a depressing moral rot. Makes Elizabeth Holmes look positively ang…

2022-11-17 02:32:16 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns It includes at least 3 major linguistic groups, functional linguists (eg Van Valin or Croft), cognitive science-oriented cognitive linguists (eg Tomasello or MacWhinney) and sociolinguists (eg Labov or Eckert). I just don’t think the 3 groups have that much common ground.

2022-11-17 02:29:07 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Externalists (empiricists) and essentialists (rationalists) are fairly clear

2022-11-17 02:26:10 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns However, I think there are limits in aligning the topic of emergence in neural networks with the “emergentist” category of that SEP article. I’m not that happy with Scholz et al.’s classification in that article. Really, I think they use “emergentist” as a grab-bag category.

2022-11-17 02:23:24 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns The emergence we see in recent ML/NLP models is super-exciting! It’s definitely worth exploring and developing this viewpoint and I’ve been excited by it. For instance, I talked about this at the end of the CUNY 2021 talk that I gave to (psycho-)linguists: https://t.co/MJHlyHRPiD

2022-11-16 21:41:47 @FelixHill84 @complingy @gchrupala @jurafsky @kchonyc @DBahdanau @kchurch4 @LukeZettlemoyer @earnmyturns Predilections: Most of #NLProc prefers work with a math/formal character, which is largely absent in emergentist work—with a few exceptions like sign-based construction grammar. So people traditionally saw more appeal in formal models like Categorial Grammar, HPSG, LFG, even CFGs

2022-11-16 21:36:18 @FelixHill84 @complingy @gchrupala @jurafsky Pedigree: many have no linguistics linkage—some of original IBM group, Bengio, @kchonyc , @DBahdanau, Mikolov—but for those that do it was mainly strongly Chomskyan—@kchurch4, people from UPenn like Marcus, @LukeZettlemoyer—or at least formal/generative—@earnmyturns, Shieber, me

2022-11-20 19:16:54 @BlancheMinerva In the deadline-driven world of current NLP/ML, I’m not sure it’s reasonable to demand software at submission time. However, it does seem like conferences and journals could refuse to accept/publish final copies of any paper that claims code/data is available but it still isn’t.

2022-11-22 00:50:12 RT @landay: Please retweet: Didn't have a chance to catch our @StanfordHAI Fall Conference on "AI in the Loop: Humans in Charge?" Don't wan…

2022-11-23 21:08:49 @OfirPress @qi2peng2

2022-11-23 21:03:29 @deliprao @JeffDean @huggingface Well, none of them got that one right!

2022-11-23 16:25:36 @zehavoc @OfirPress @stanfordnlp *Bresnan*

2022-11-23 16:21:55 RT @russellwald: Our tech policy fellowship for Stanford students is live!! There are so many opportunities for Stanford students w/this am…

2022-11-23 16:10:04 @OfirPress Great progress with this exciting new prompting approach! Hey, we were ahead of the game in proposing the importance of multi-step question answering: Answering Complex Open-domain Questions Through Iterative Query Generation by @qi2peng2 et al. 2019. https://t.co/5Rr1twDTpg

2022-12-08 22:41:41 @emilymbender Anyone taking things out of context⁈ Looking at the talk—from an industry venue: https://t.co/UMlczZZJR3 after a detour on what self-supervised learning is, exactly where it goes is that big models give large GLUE task gains but at the cost of hugely more compute/electricity… https://t.co/kWESAvHXkl

2022-12-09 03:50:09 @KordingLab That I really talked about the right topic at that last CIFAR LMB meeting?!?

2022-03-16 21:22:19 RT @TFNBreakingNews: New Yorks’s Bloom grabs $1.1M to help e-commerce brands generate more sales https://t.co/AqnpONhUzf #tech #fundi…

2022-03-16 16:10:21 I agree with the first half of this. Recently https://t.co/sf3xlH7NRx has changed significantly every 2 years. But I suspect the death of university research is exaggerated: That aero/astro students can’t build a Boeing jet in classes doesn’t mean there is no more research to do. https://t.co/J8kTgQauYD

2022-01-17 20:54:45 RT @wisprAI: 1/ We have HUGE NEWS One of the top systems neuroscientists in the world – Anthony Leonardo – is joining Wispr AI full-t…

2022-01-06 01:05:26 @OriolVinyalsML Thanks again for the great talk at the @stanfordnlp Seminar! Overall, you sure had an amazing year!

2022-01-04 04:35:13 RT @robreich: Elizabeth Holmes found guilty on four charges. It's common to see the verdict as a conviction not only of Holmes but of Sili…

2022-01-04 04:32:09 RT @AndyPerfors: It's easy here in Australia as we watch covid running wild to ask what the point of all of those lockdowns was. But look a…

2022-01-04 04:21:39 RT @histoftech: https://t.co/MBhr3CBfO2

2022-01-03 16:24:33 @rbhar90 Try @tengyuma’s recent papers? Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning https://t.co/1cwBtasMx8 Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data https://t.co/QOBLeBJKpX etc. https://t.co/wXIn4QlzFG

2022-01-03 16:14:09 @prasanth_lade No, it’s already done. India is out and Philippines seems unlikely to join but it started 1 Jan 2022 with 10 countries including Australia, Japan, Vietnam, & https://t.co/EpaFhXhWmC

2022-01-02 23:34:56 I know there’s a bad omicron surge but I’m surprised how little the media is noting the start of the RCEP. Surely the world’s largest free trade area now being in the Asia-Pacific and including China while the US is still on a tariff binge is significant? https://t.co/aPU3AsuQFE

2022-01-01 14:36:21 Amazing New Year’s fireworks display in Sydney https://t.co/39fwzY3uYS https://t.co/NYxqE91kBG

2021-12-29 02:23:21 RT @learnkullilli: Calls for funding boost to preserve Indigenous dialects as languages rapidly disappear https://t.co/g4JmfsQ4ry via @SBS…

2021-12-29 02:03:38 RT @l2k: When I was starting my first company I was a little offended by random outreach from tech recruiters about entry level software de…

2021-12-19 23:08:32 @TaliaRinger @rasbt @andrewthesmart @story645 In my experience, it’s mainly that *organizations* push nominations for their employees. I’m pretty sure it was happening all around you while a Ph.D. student at UW, and you just don’t see it as a student. But there are some people genuinely nominated by their nice colleagues!

2021-12-19 15:31:39 It’s great to see Bean Machine, a new Probabilistic Programming Language (a bit like @mcmc_stan) built on @PyTorch. But how much impact will this have? Somehow Bayesian modeling has gone from the center of AI in the 2000s decade to the margins since 2015. https://t.co/4PhlCTAkJC

2021-12-18 22:58:57 @alexandersclark 16x9. The world has changed.

2021-12-16 18:05:29 RT @sethlazar: Political Philosopher interested in new technologies? Come join me for up to 4 years on my ARC project on the political phil…

2021-12-16 18:03:25 CC: @rajiinio, @sleepinyourhat: Came to mind reading Deb’s thread on recent NeurIPS panel on The Role of Benchmarks. Related: A key IR vs. ML/NLP eval difference is that IR believes in human variability not gold answers, perhaps due to stronger HCI links. https://t.co/RykslVhBLW

2021-12-16 18:03:24 A fondly remembered paper from the vault (with @MarieMarneffe &

2021-12-15 17:21:27 RT @russellwald: .@StanfordHAI released 2 must read policy briefs. One on #AI race detection in medical systems and another on the use of M…

2021-12-15 17:19:34 RT @dojiboy9: @forethought_ai $65M to transform CX w AI! @kmehandru @NEA @aplusk @K9Ventures @collabfund @Frontlinevc @cleocapital…

2021-12-14 18:37:44 RT @VentureBeat: Open source MLOps framework ZenML raises $2.7M https://t.co/KdZPORVRlx by @shubham_719

2021-12-13 17:16:25 I love the “intimate” iPad experience but being productive can require arcane workarounds. Opener (@OpenerApp) is one key bit of connective tissue that helps: https://t.co/wQpNlyGNlK

2021-12-13 16:41:04 An earlier, unrelated petition about the proposed California Mathematics Framework had ugly overtones that I didn’t support but this one hits the right notes. @boazbaraktcs: “The status quo of math education isn’t good. But no matter how bad things are, you can make them worse.” https://t.co/tuukWHSEHp

2021-12-10 20:48:33 I’m “amicable”, apparently…. https://t.co/Z08bdKcvMJ

2021-12-01 17:29:02 I agree that, in many cases, reviewability of AI systems is a much more achievable and better goal than explainability https://t.co/GBLZDPP8t4

2021-11-25 05:04:36 RT @ngaralk: Two more happy customers! Kera Galaminda and Tamia Manmurulu who are studying in Melbourne now have their copies. Get your cop…

2021-11-25 02:44:29 @Judah_Grunstein @stanfordnlp @esalen Yes, was meant to be an Esalen hot tub in the Redwoods. (And while they do have much larger baths, they do literally have single person tubs there.)

2021-11-24 19:24:34 @brendan642 @stanfordnlp @esalen How the years pass by!

2021-11-24 19:23:19 @Judah_Grunstein @stanfordnlp @esalen Yeah, totally! But the commonest ambiguity online in my field is whether “NLP” is referring to Natural Language Processing or Neuro-Linguistic Programming—hence the former group often using #NLProc as a twitter hashtag—and Esalen is a fun place to hang out.
2021-11-24 18:03:05 Interesting thread and linked-to blog post on Silicon Valley, liberalism, conservatism, and libertarianism. But for the @stanfordnlp group, I’m just wondering why we haven’t been taking advantage of the posited link between NLP and @esalen…. https://t.co/wM4cQXX1MF

2021-11-23 23:19:59 We seem to be seeing a new split on immigration, with the US and UK seemingly determined to keep most immigrants out, but Germany and Canada, and, very soon, Australia again, throwing down big welcome mats (via @NYTimes). https://t.co/lRA0Bw7alh

2021-11-23 03:33:04 RT @DingemanseMark: Dying Words: What Endangered Languages Have to Tell Us (by Nick Evans) is beautiful — a joyful and exhilarating tour of…

2021-11-23 03:24:44 @vasudev_sharma_ @yuhaozhangx @hjian42 @curtlanglotz @stanfordnlp @StanfordAIMI Yeah, that’s the problem of students’ last project for their dissertation … nudge, @yuhaozhangx. But it looks like there’s a replication at https://t.co/hgitR8nJDr (which I haven’t tested, so can’t vouch for).

2021-11-22 21:17:32 RT @bread_fixer: You ever think about how California increased its housing stock less than Iowa, Louisiana and Alabama the past decade

2021-11-22 21:12:17 RT @annargrs: @tpimentelms @KarimiRabeeh @ReviewAcl @chrmanning @aclmeeting There are studies on ICLR data that show that reviewer scores d…

2021-11-20 20:57:52 @datamize @stanfordnlp Hi Mike, I’ve posted how to fix it!

2021-11-19 04:00:26 RT @StanfordHAI: Post-doc opportunity: Interested in embedding ethics into computer science education? @StanfordEthics and @StanfordHAI inv…

2021-11-16 00:50:15 RT @robreich: Applications are now open for our 3rd evening course for tech professionals on ethics, policy &

2021-11-12 15:35:28 RT @_youdotcom_: There's no better time to start a new search engine, writes https://t.co/QRGeptn4bk CEO and founder @RichardSocher in this…

2021-11-11 20:02:29 RT @StanfordHAI: Want to be a HAI Faculty Fellow? We are still accepting applications for the Associate Professor/Junior Fellow position un…

2021-11-11 15:43:25 RT @Dutchcowboy: What a line up @markoff prolific writer, @Jerry_Kaplan founder Go and so much more, #jerryyang founder Yahoo, @chrmanning…

2021-11-10 15:43:07 I prefer to think of it as @stanfordnlp grads at work. Congratulations, @RichardSocher — it’s looking good! https://t.co/A5qMrqXHny

2021-11-10 15:38:19 RT @fchollet: Given its economic and cultural impact, Silicon Valley should look like Singapore or central Tokyo, with comfortable highrise…

2021-11-09 03:39:29 RT @russellwald: Our latest op-ed addresses how a National Research Cloud has the potential to challenge tech power concentrations and give…

2021-11-08 00:52:05 A really smart perspective from @thao_pow, @jakusg, @DrMoniqueMann &

2021-08-20 23:02:46 @moinnadeem No reason. In fact a few students are already doing just that (well, the training, not the Pareto optimality so far). We hope that you’ll join in!

2021-08-20 15:26:39 *typologically*

2021-08-20 14:41:27 It has been great to see Universal Dependencies helping to enable new work in quantitative typological syntax. (And we would love to see more topologically diverse languages represented in UD – if you could help, do get in touch!) https://t.co/jjEJJKKQoa https://t.co/kVg4wrVcrg

2021-08-20 14:26:49 RT @russellwald: Is #AI altering how espionage is conducted? Quite significantly in fact. @AmyZegart explains in this @StanfordHAI Q&

2021-08-19 16:58:16 Yes a big win for @Microsoft @mstranslator over Google Translate https://t.co/xBdVIvGxoM

2021-08-18 21:52:27 RT @UpFromTheCracks: “I don’t think anyone ever thought about data privacy or what to do in the event the [HIIDE] system fell into the wron…

2021-08-18 21:51:19 RT @robreich: There's no more important voice on the need for transatlantic cooperation around tech policy and governance than @MarietjeSch…

2021-08-18 20:05:13 @BlancheMinerva @GoogleAI @YejinChoinka Good data is very important – and I work towards its creation – but those kinds of repeat loops are mainly something other than poor data

2021-08-18 19:51:41 @gelly_patrick @GoogleAI @YejinChoinka No, it was tested – on average it is much better than the statistical phrase-based MT that they used for the previous decade – but it has its own pathologies. Cf. https://t.co/eRkAV6185O

2021-08-18 03:09:55 RT @dsoltesz: @chrmanning @timnitGebru @GoogleAI @YejinChoinka I saw that tweet earlier and REALLY enjoyed the translation.

2021-08-18 03:07:28 @timnitGebru @GoogleAI @YejinChoinka It’s Tigrinya, right? It certainly doesn’t help that Google Translate doesn’t support Tigrinya, but the language model just went on a tear anyway….

2021-08-18 00:57:21 @GoogleAI’s neural machine translation isn’t yet perfect. This is a good example of how neural language models still go haywire, especially when training data is sparse. See the discussion in @YejinChoinka’s https://t.co/za7NnnBB52 . https://t.co/qyGrG7mAJK

2021-08-18 00:01:24 @mmitchell_ai @mer__edith It will be great to have you there, and I’m looking forward to your talk, Meg!
2021-08-11 14:41:35 RT @AveryAndrews: @wm @chrmanning For Oz, in most environments, these animals are nothingburgers

2021-08-09 22:26:31 RT @IJCAIconf: #VideoMessage In 2021, the conference chair of the oldest AI conference, started in 1969, is Maria Gini @gini50883560 @UMNC…

2021-08-09 18:07:31 RT @wm: Definitely want to see the Australian #'s https://t.co/qCZGSWPfK2

2021-08-04 23:48:40 RT @StanfordHAI: Seeking an opportunity to help build the future of AI? Join our rapidly growing team. We are looking to fill multiple rese…

2021-08-03 20:27:15 @LucianaBenotti @ChrisGPotts @StanfordHAI Practitioners: Blog posts, tutorial videos, open source code are key ways to communicate. Again, additional staff really help but also a culture of valuing real world impact beyond just writing research papers. In neither case do I think scientific papers the right vehicle.

2021-08-03 20:25:58 @LucianaBenotti @ChrisGPotts Good question! I think you need different answers for each group. Leaders: At @StanfordHAI we’ve started a series of white papers. Having staff to help produce them really helps

2021-08-01 02:46:17 Amazing achievements by the Australian women’s swimming team in Tokyo! Emma McKeon got gold in the 50m freestyle and then was part of the gold-winning 4x100m medley relay team. So remarkable that even American media has felt moved to start discussing her. https://t.co/fahXtisWo1 https://t.co/RPKyAeoA1A

2021-07-31 20:49:57 The published version of Marie-Catherine, me, @JoakimNivre and Dan’s UD article is now out: July 13 2021 Universal Dependencies Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, Daniel Zeman Computational Linguistics (2021) 47 (2): 255–308 https://t.co/DNugfBrkEo

2021-07-31 19:38:36 @sarves @ezhillang @stanfordnlp Great to hear!

2021-07-31 16:07:50 @sarves @ezhillang @stanfordnlp *experiencer

2021-07-31 03:26:01 Wow! Kaylee McKeown gets a second gold medal in the 200m backstroke (but Titmus’s personal best time in the 800m freestyle isn’t enough to beat Ledecky at that distance). https://t.co/aqFrmfB2ri https://t.co/w0kOxRSZqj

2021-07-31 00:55:28 @sarves @ezhillang @stanfordnlp It would indeed be great to do more to improve Tamil-PDT, like recognizing dative experiences subjects, and also to have more Tamil UD data overall!

2021-07-30 03:03:59 And Australia's McKeon wins Olympic gold in women's 100m freestyle, bronze for Cate Campbell. That might be the last gold medal for Australian women swimmers this time round, though there’s still the 800m freestyle tomorrow…. https://t.co/TYnQwyO9vJ https://t.co/RKXbPs3ZbK

2021-07-28 03:04:48 Titmus’s ability to come from behind to win is jaw-dropping. Ariarne Titmus wins Olympic gold again with victory in 200m freestyle final. https://t.co/H7A3uikStT https://t.co/9Axk1RTCjh

2021-07-28 00:50:07 @landay Well, we are – some years

2021-07-27 03:23:57 @buirachel Yes. Funny!

2021-07-27 03:19:55 So much winning from the Australian women’s swimming team! Kaylee McKeown breaks the Olympic record to win 100m backstroke gold. https://t.co/csNqFKPosB https://t.co/hdVDVEBC06

2021-07-26 03:08:06 And Australia's Ariarne Titmus sprinted home to win gold ahead of her inspiration, American Katie Ledecky – though I should also have a soft spot for Team Stanford https://t.co/3YvPe5aVC1 https://t.co/2iXlBtnVXm

2021-07-25 04:06:38 Australia's women win third straight 4x100m freestyle gold https://t.co/W2ybcIVp1Q https://t.co/BSMrEFwJxs

2021-07-22 22:51:35 @ariannabetti @rajiinio That’s fair—and I confess to being a typical social media user who only skimmed your paper—but the question remains of for what tasks “a ground truth” is a good goal. It’s practical for low-level tasks like PoS tagging but even there, where annotators disagree tells you something

2021-07-22 22:42:26 @RishiBommasani @rajiinio @ariannabetti Yes, that’s a great paper!
2021-07-22 01:20:49 @ariannabetti @rajiinio An alternative—which hasn’t been widely picked up but I still think has much going for it—is to model the full distribution of annotator judgments since whether people naturally do or do not agree on an item is itself information

2021-07-22 01:18:38 @ariannabetti @rajiinio For semantic and pragmatic tasks, people may differ in their interpretations for all sorts of reasons reflecting their life experiences and trying to force consensus seems questionable. We see the same IRL, where people sometimes disagree with each other on interpretations.

2021-07-15 00:01:23 @faezehzps @qi2peng2

2021-07-13 18:15:28 @atakanince Yes, we’re totally aware of upward &

2021-07-08 21:54:28 @heidi_harley Many schools use CLEP but Stanford gives credits only for courses (AP and IB) and adds SAT-II and own testing for placing out. Not perfect, but seems fairer, and they will test any language being taught so you can test out in Navajo, Uighur, Tagalog, Punjabi, Quechua, ….

2021-06-25 19:31:29 RT @cynthiablee: Fascinating and important leadership move by Stanford HAI to add a sort of parallel to the IRB process that does vetting o…

2021-05-22 01:50:09 @Twitterrific Oh! Great to know, thanks!!!

2021-05-21 19:58:33 .@Twitterrific It’s a shame that the “Share discussion” option has disappeared in the last “upgrade” — it was one of my favorites and really rather useful.

2021-05-21 03:26:29 RT @rogerjfrank: Fantastic talk to close the conference! Thank you @ruha9 for sharing your incredible insights and research. The dialogue a…

2021-05-18 06:00:30 @forloop @RehanSaeedUK @gep13 @stevejgordon Read books either published by Manning or written by Manning

2021-05-16 21:01:36 @VeredShwartz @successar_nlp Yes, I think that is the first paper to use the term, thanks @VeredShwartz. (It was an attempt to replace the term Recognizing Textual Entailment, which we weren’t very happy with [“entailment” has a narrower precise logical defn]. In retrospect, we seem to have succeeded!)

2021-05-16 19:47:52 RT @amarpreetkalkat: What if a significant part of that AI, maybe even a larger part, is focused on augmenting human capability rather than…

2021-05-16 17:22:11 .@kahneman_daniel calls it: “Clearly AI is going to win” [against human intelligence], and lots of other interesting thoughts on system noise, exponentials, and human judgments. The big remaining question is how to use AI advances to augment human lives. https://t.co/jwe38oRAV8

2021-05-13 20:21:36 Full schedule and registration: 2021 Tech and Racial Equity Conference: Anti-Racist Technologies for a Just Future | Center for Comparative Studies in Race and Ethnicity https://t.co/mKTTVD8ERW

2021-05-13 20:21:03 RT @ruthstarkman: 2021 Tech and Racial Equity Conference: Anti-Racist Technologies for a Just Future This conference is May 19 https://t.co…

2021-05-08 22:33:59 RT @DigEconLab: There are only a few days left to submit your AI policy proposal. We're looking for radical ideas that will shape our AI-po…

2021-05-08 16:53:27 https://t.co/cFIMzDUU9D

2021-05-06 15:14:07 RT @chrmanning: This NYT article shows the importance of pursuing a broad cross-disciplinary approach to studying AI embedded in society, a…

2021-05-06 05:36:54 RT @StanfordHAI: “Policymakers in democratic nations need to play catch-up,” @MarietjeSchaake says about the state of play for digital tech…

2021-05-06 05:12:32 @David_desJ @akorinek @JosephEStiglitz And nor will AI have replaced highly paid Silicon Valley programmers or product designers by 2050. But that’s not to say that there won’t be severe and painful economic dislocation for many people from automation &

2021-05-06 05:09:17 @David_desJ @akorinek @JosephEStiglitz I will more precisely predict that in 2050, any robot gardener will still be at least an order of magnitude more expensive than a human gardener and still not as good and so human gardeners will not be competing with robots on price.
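The 2021-07-22 tweets above argue for modeling the full distribution of annotator judgments rather than forcing a single gold answer. A minimal sketch of that representation in Python, assuming a simple vote-counting scheme (the labels and vote counts below are hypothetical, not from any dataset mentioned in the tweets):

```python
from collections import Counter

def soft_label(annotations):
    """Turn one item's annotator judgments into a probability
    distribution over labels, instead of collapsing them to a
    single majority-vote "gold" answer. Disagreement is kept
    as information rather than discarded."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical NLI-style item where 5 annotators split 3-2:
judgments = ["entailment", "entailment", "neutral", "entailment", "neutral"]
print(soft_label(judgments))  # {'entailment': 0.6, 'neutral': 0.4}
```

A model trained against such soft labels (e.g., with a cross-entropy loss on the distribution) can then predict that humans will split on an item, which a single gold label cannot express.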
2021-05-06 04:10:39 It cites @akorinek &

2021-05-06 04:10:38 This NYT article shows the importance of pursuing a broad cross-disciplinary approach to studying AI embedded in society, as we’re doing at @StanfordHAI. First, it shows the increasing reach of AI: I thought I was getting an article on politics but … 1/4 https://t.co/xsg1FW4EAr

2021-05-04 00:19:31 RT @persily: Thursday, 2 PM Pacific @StanfordCyber to host a panel discussion on the @OversightBoard decision in the Trump case. Joining Bo…

2021-04-27 05:09:44 RT @victorstorchan: Really stunning to see how an idea that emerged from an academic institute @StanfordHAI, was able to quickly gain momen…

2021-04-27 03:35:22 RT @MarietjeSchaake: “There’s a need to articulate a much more coherent policy vision. To say, this is what a democratic model of tech gove…

2021-04-23 01:48:00 RT @xamat: The founder of Palantir, a private company that was built mostly on public money contracts, is against taxes. The irony https:…

2021-04-21 14:24:25 @siddkaramcheti The first time I saw “hegemony” was in 7th grade history doing Ancient Greece! This tidbit might partly explain the increase in the 1980s but there are other big factors, namely the rise in influence of Gender &

2021-04-20 15:04:55 (Showing my age) Rogue and then Hack were the only “video” games that I ever became seriously addicted to. I’d played the games of that time, Asteroids, Space Invaders, Pac-Man, etc., but something about the way Rogue was untimed but with permadeath spoke to me. https://t.co/QwLFS2rJxq

2021-04-19 02:44:54 @tdietterich @mmitchell_ai @chelseabfinn And, earlier, @danqi_chen in 2014, and even earlier @RaiaHadsell.

2021-04-19 02:35:25 @chris_brockett @mmitchell_ai Yes, she is: @adjiboussodieng

2021-04-19 00:04:55 @xamat @yoavgo @ylecun @christopher The issue of optimizing for engagement was actually a side-issue to the original thread, maybe best forgotten. The more interesting original issue is whether organizations like @PartnershipAI are mainly serving to improve AI or to deflect people from considering regulating AI.

2021-04-18 21:25:04 @yoavgo @xamat @ylecun @christopher “Nefarious” is a strong word. I don’t accuse Facebook of nefarious intent. (Given his comments, @yoavgo can speak for himself.) But, embedded in a large organization, people optimize for many things

2021-04-18 15:49:46 Interesting thread on mitigating harm from AI/ML, started by @yoavgo. (@ylecun doth protest too much, methinks.) https://t.co/yXVUPQ0ihd

2021-04-17 18:05:52 The incredible rise in use of the word “hegemonic”. It’s hard to see the blue just next to the x-axis, but until 1960 the word “hegemonic” was less frequent than “misogyny”. No more…. Even though use of “misogyny” is up 30x from 1960, “hegemonic” is now 6x more frequent than it. https://t.co/XGz3K0KU4r

2021-04-16 17:53:51 @mjpost @ChrisGPotts I’m not convinced that it’s only funding. I would suspect a big part of it is that supporting Unicode isn’t a big priority for physicists. But volunteering has to be productive!

2021-04-14 21:19:16 Excited to be publishing a paper that not only satisfies the #BenderRule but also the #CotterellTestForInterdisciplinarity with Marie-Catherine de Marneffe, @JoakimNivre, and Dan Zeman: Universal Dependencies https://t.co/iehyEX0LtL https://t.co/d385wlXnuf https://t.co/dMpWjTRYgy https://t.co/KX60rHwPYz

2021-04-14 20:13:34 Pithy comments from Mark Díaz @blahtino—loose quotes: Corporate DEI: They give us stuff we never asked for

2021-04-14 14:09:02 RT @russellwald: This could be a significant case in the use of FRT. @StanfordHAI noted the high error rates when FRT is used in different…

2021-04-13 19:53:02 RT @StanfordAILab: The Stanford AI Lab supports and deeply appreciates the talented Iranian members of our community. We strive for equitab…

2021-04-08 14:53:34 @SeeTedTalk @successar_nlp @yoavgo It’s now many years since I was NAACL Treasurer, but at least at that time, for chapters like NAACL/EACL, our ONLY source of funds was money from conferences. None of the ACL membership fee went down to chapters. That can justify maybe about $30 of conference registration fee.

2021-04-07 22:49:26 @evanmiltenburg @tpilehvar English sentence boundary detection....

2021-04-07 03:22:21 RT @landay: Please Retweet. Giving this talk tomorrow morning. Join us.

2021-04-02 23:38:48 RT @jackclarkSF: Universities need access to computational infrastructure at equivalent scale as private sector, or I worry about long-term…

2021-04-01 04:05:54 RT @kchonyc: element ai survives as the Element AI Research Group within ServiceNow: https://t.co/2RdOr7Bc7b a chance to work with the emin…

2021-03-31 14:24:15 RT @indexingai: TOMORROW at 10 am PT: Join the AI Index Steering Committee Co-Chair @jackclarkSF for a discussion on findings of the 2021 A…

2021-03-26 01:27:04 RT @matei_zaharia: Thanks @pbailis, hope you let me graduate soon! https://t.co/ZNCD7jYGqw

2021-03-25 17:37:09 Next session of @StanfordHAI conf—11:05 Pacific—looks awesome: @ken_goldberg—link between Exoticism, persistent colonialist attitudes &

2021-03-25 16:03:11 The AI revolution—@StanfordHAI spring conferences on Human Intelligence Augmentation—will be televised: https://t.co/yJWIVff1ne . Starting now. #AugmentHAI https://t.co/NhBZmFpLny

2021-03-24 15:29:24 @polm23 But I think credit for the idea of algorithmically producing word shapes—even though we didn’t cite him, oops—belongs to Michael Collins in https://t.co/jFqS0RHfJn who also calls them “types”—not Ratnaparkhi 1996, who only has prefixes, suffixes, &

2021-03-24 15:17:38 @polm23 I.e., https://t.co/bcaW1mmmJZ . I suggested using “shapes” since “types” is very generic and unclear. But there were antecedents. A closed set of “shape” classes appears in earlier work. E.g., see the list in Nymble, a well-known NER system at that time https://t.co/Qn7QeASL1s 2/

2021-03-24 15:12:35 @polm23 I can answer this one! The term “word shape” for these features definitely comes from papers by me and students around 2004–5. See, e.g., https://t.co/Yw2wd3HD3t which roughly defines them. We had actually first used them in a 2003 paper where they were referred to as types 1/

2021-03-24 14:38:29 “The US should establish a standalone, open-source intelligence agency, because existing agencies will never give open-source the attention it needs to succeed”—@AmyZegart. You’d think the US has enough intelligence agencies but this could just be right https://t.co/8Snja8uLuo

2021-03-23 02:43:43 RT @gregggonsalves: I am sympathetic to this case &

2021-03-20 23:56:22 RT @mattbeane: It's normal to use AI to get productivity at the expense of our humanity. But it's not inevitable, and it's not a pipe dream…

2021-03-20 19:53:12 @mgubrud @StanfordHAI Human-augmenting AI can provide opportunities to many people at the lower end of the power hierarchy: E.g., gardeners could have an AI chatbot—on their phone—manage appointments, changes &

2021-03-20 19:47:49 @mgubrud @StanfordHAI Society has many choice points in its evolution. We are indeed now in a 2nd Gilded Age and almost all the benefits go to the already rich &

2021-03-20 17:28:05 Augmenting human capabilities is how we bring home the promise of AI helping people. It’s the topic of @StanfordHAI’s Spring Conference online, 9am Pacific, Mar 25: caregiving, artistic expression, and education. https://t.co/dBfALXFspb

2021-03-18 17:47:02 RT @StanfordHAI: In honor of our second anniversary, hear from @StanfordGSB Dean Jon Levin, an early supporter of HAI, about his vision fo…

2021-03-17 15:37:26 I really appreciated this perceptive take on the overall space of social media harms, policies, and regulation by @rajiinio. https://t.co/agsdybmf8A

2021-03-17 15:20:35 The America’s Cup doesn’t get much attention in the U.S. without an American team in the finals (or @larryellison)—but congratulations to New Zealand on winning again. A fantastic achievement! https://t.co/K8SFAoniAz

2021-03-15 20:59:31 Some advice from @kaylburns on choosing a grad school—I’ve certainly enjoyed working with Kaylee as a non-PI mentor! https://t.co/OP6a7et9Eh

2021-03-08 16:04:30 @RoshnaOmer @haleyhaala @alienelf Glad that the talk went well!

2021-03-07 02:16:14 @peteskomoroch Universal Dependencies is an #NLProc project that has very successfully done continuous data revision on github for 6 years now—the benefits vs @LDCupenn or static download URL datasets where bugs don’t get fixed even if reported have been transformative! https://t.co/jjEJJKKQoa

2021-03-05 18:41:59 If a welterweight country like Australia can successfully redress the power of Big Tech, I wonder what heavier weight countries could do https://t.co/tmb9fNtuaP

2021-03-05 05:58:49 @LChoshen You should ask an MLsys expert not me! But this is new investments

2021-03-05 02:59:37 Even though research on Ethical AI is exploding, companies show no increased awareness that AI equity and fairness is a business risk … #AIIndex2021 by @indexingai https://t.co/HIbKEmAqwD https://t.co/xshSbaqyH5

2021-03-04 17:44:41 The new money seems to be going into applications, not core technology … and applications other than autonomous driving. https://t.co/ZiM1X98C7h

2021-02-25 03:28:07 Australia’s News Media law is a complex issue—but can you expect balanced views from @Facebook’s VP of Global Affairs? Maybe instead try @BBCNews, @kwingerei, an independent Australian journalist, @WIRED? https://t.co/XlQaUFTt7y https://t.co/JpQXEwZWZ3 https://t.co/0ZAbcD1jCC https://t.co/F3AOKI0bcI

2021-02-24 22:23:50 @michael_guihot @math_rachel In particular, I was born in Bundaberg — which has the odd property that every Australian knows about it, but almost no one outside Australia. I owe my life to Bundaberg Hospital. So definitely read that article!

2021-02-24 15:38:08 @math_rachel Well, Sydney and Melbourne are not only much bigger and more culturally diverse in general, but they have more tech – but I was born in Queensland, so I should stick up for the place!

2021-02-24 05:45:50 @math_rachel I'm sure you'll find it great in Oz - even though Brisbane is a bit of an “interesting” choice

2021-02-24 04:24:22 @suzan Origin of IOB tagging: Ramshaw and Marcus (1995). But I doubt that it is a good first thing for the person to read.... https://t.co/7BHGC9qt7q

2021-02-23 04:44:22 Some useful reflections on AI-generated art https://t.co/sVinsFiyIw

2021-02-22 03:23:02 RT @iclr_conf: Important #ICLR2021 conference date update: @iclr_conf will take place on May 3-7, with the main conference on May 3-6, and…

2021-02-21 01:23:14 @RoshnaOmer This The Gradient article provides a lot of info: https://t.co/iehyEX0LtL . But you might also note what’s been achieved with Universal Dependencies: https://t.co/jjEJJKKQoa .

2021-02-21 00:57:52 The Lily this week has an example of the meteoric rise in use of “Even still”: “Even still, it wasn’t enough for Harper, who lost to the establishment-backed Beatty.” https://t.co/iLL8YyFw5j #2 in an occasional series on linguistic change #1: https://t.co/RVa3uumj3v HT @jinpa1345 https://t.co/8ol84IVfT6

2021-02-20 19:08:47 Thanks so much to Michelle and Andrey (@michellearning &

2021-02-18 17:06:13 @anders_gustavo I don’t. But I’m told that a very large number of people do—in Australia, in the U.S., and in fact world-wide. But I’m sitting here with my Guardian app, my Atlantic app, and my Washington Post app at the ready. Crikey. TheConversation.

2021-02-18 16:13:30 RT @MarietjeSchaake: How it started How it’s going↘ https://t.co/cG8ZtYhpGk

2021-02-18 01:30:13 Big things happening in Australia, but you’ll have to go somewhere else to read about them…. https://t.co/jMpNJrAuCS

2021-02-18 01:24:44 @galaxygarden23 @thegautamkamath @EmtiyazKhan There was a long complex tweet tree, which as usual is hard to reconstruct in retrospect

2021-02-18 01:20:41 @galaxygarden23 @thegautamkamath @EmtiyazKhan Well, it’s complicated—there are some people on twitter that loudly proclaim it is broken, but it’s not at all clear that the majority aren’t where the clear majority were when the policy was started: more strongly wanting strengthened blind review than unlimited dissemination

2021-02-15 19:58:23 The @C3DTI call for proposals for research on Digital Transformation and AI for Energy and Climate Security is open for researchers at participating institutions (including @Stanford). Proposals due: Mar 29. CC: @StanfordWoods @StanfordEnergy @StanfordHAI https://t.co/MmN21mg3aJ

2021-02-15 01:51:50 @tensorflowastf @og_giesecke @kaggle Sure: https://t.co/iPwXeMDXxV

2021-02-10 16:07:19 I guess the days of carefree pip install and npm are over? Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies | by Alex Birsan | Feb, 2021 | Medium https://t.co/9n7lUEb3Ja

2021-02-10 07:09:30 @2plus2make5 I’d agree with others: 10GB is probably fine for vision, but for modern NLP or genomics you’ll suffer if you have less than 24GB of GRAM. Get a mix but some 3090s

2021-02-07 21:26:04 RT @StanfordAILab: AI courses at @Stanford: “It’s all student demand-driven, which just reflects the huge breakthroughs that have been made…

2021-02-07 18:17:01 @AlejandroPiad @Isinlor @stanfordnlp @DreamOnShadows @fchollet I agree. There’s no doubting the enormous power and success of simple, highly scalable neural models trained on huge data, e.g., T5, GPT-3. But there’s also no doubting the far superior learning—and meta-learning—ability of a human child working from orders of magnitude less data

2021-01-29 06:38:11 RT @CoEDLang: Congratulations to our CoEDL Director Nick Evans on his election as an honorary member of @LingSocAm (https://t.co/Qsjx7o5nE…

2021-01-14 01:32:33 @kchonyc I think there might still be things to do for ICLR: e.g., based on my most recent experiences AC-ing for each, NeurIPS did a better job at cross-AC calibration, imho

2021-01-13 16:19:06 RT @StanfordHAI: Welcome to our newest HAI fellow, @kingjen! Learn more about her work at the intersection of AI, law, and social sciences:…

2021-01-12 02:18:36 @stanfordnlp Oops, one bad mistake above: That should have been “Cherokee” not “Choctaw”

2021-01-11 08:02:22 RT @og_giesecke: @chrmanning @kaggle cs224n was fantastic. I learned so much. Thanks for making it publicly available

2021-01-11 07:21:36 @David_desJ I’m really not sure who he’s visiting…

2021-01-11 07:12:13 RT @SuryaGanguli: 100% agree with @chrmanning . And when the physical science and math faculty talk - we wring our hands about how little s…

2021-01-10 22:43:19 RT @marian_nmt: @chrmanning @kaggle I would argue that "being good at kaggling" is by itself another meta-skill that is way more valuable t…

2021-01-10 22:31:07 What parts are right? Yes, all technical professionals today need to engage in continuous education to keep their skills up-to-date. The meta-learning will come in useful. And, yes, experience on @kaggle is a really useful way to build and refresh your skills—recommended!
7/7 2021-01-10 22:28:07 Finally, universities do not only educate you, they also provide you with an extensive network of people, who will be of enormous value to you in your future life in many ways, one of which is that they make it easier for you to hear about and learn new things as time passes 6/7 2021-01-10 22:27:04 And many might argue that it’s just a development of the machine learning approach to AI that replaced knowledge-based approaches beginning in the late 1980s. We want students to be able to absorb and be productive beyond paradigm shifts, but they don’t happen that often. 5/7 2021-01-10 22:25:16 I think your sense of a “paradigm” is off—scientists immediately think of Kuhn’s paradigms: https://t.co/lrtEPYSSC5. At most they’d count modern deep learning as a paradigm shift. It’s now 15 years from the initial deep learning breakthrough. So seeing some old ideas helps. 4/7 2021-01-10 22:21:58 Yes, we at @Stanford do try to ensure we also teach some up-to-date approaches—and we have a great up-to-date course on neural meta-learning too https://t.co/jd3TmEEgv4 —but when AI faculty talk, they mainly wring their hands about how little students learn from before 2010 3/7 2021-01-10 22:17:57 To use the currently trendy terminology, what we’re teaching students is meta-learning—a strong foundation of approaches, ideas, understanding, and tools so that they will be able to quickly learn and evolve over the following decades, as science and engineering changes 2/7 2021-01-10 22:16:53 Honestly, this thread is 80% wrong. This is treating science as like front-end frameworks. Yes, if you’re a front-end developer who only knows 3-years-old JavaScript frameworks, then you’ll have trouble getting a gig. 
But that’s not what we’re teaching students 1/7 https://t.co/ZphYmyBw0g 2021-01-05 21:42:08 @yoavgo @rajiinio @ducha_aiki @annargrs @ani_nenkova Here, a reviewer doesn’t say anything clearly biased 2021-01-05 21:36:04 @yoavgo @rajiinio @ducha_aiki @annargrs @ani_nenkova The problem is that many biases are subtle 2021-01-02 22:57:30 @roydanroy @mraginsky @yoavgo @boazbaraktcs @ylecun @SeeTedTalk @ChrisGPotts @aclmeeting But there is not enough space in a tweet to explain them?!? 2021-01-02 22:49:28 @themadrasboy @aminamouhadi @pmarca @peterthiel Well, really, all of the above. Based on geography you can try to reclassify Australia/NZ with Asia or the Global South—and East Asia and Africa are the other places successful in dealing with covid-19—but for all the aspects above, that classification isn’t very compelling. 2021-01-02 18:20:55 @annargrs @mariusmosbach @yoavgo @jasmijnbastings We could have author reveal parties! 2021-01-02 18:12:07 @yoavgo @annargrs One can nitpick the wordings a lot, but, honestly, I disagree: Being unconsciously biased against people from underrepresented groups is just the other side of the coin of being unconsciously biased in favor of people from overrepresented groups 2021-01-02 16:58:50 .@pmarca: “the harsh reality is that it all failed—no Western country, state, or city was prepared—and despite hard work and often extraordinary sacrifice” Huh, I guess @pmarca doesn’t count NZ or Australia as Western countries. @peterthiel knew to get his NZ citizenship early! https://t.co/xUKnEUn7vs https://t.co/24GqpNn0Cw 2021-01-02 02:04:51 RT @russellwald: The Senate just overrode the veto on #NDAA putting in place numerous federal #AI provisions. 
The impact on AI going forwar… 2021-01-02 02:02:13 @SeeTedTalk @yoavgo @ChrisGPotts @aclmeeting @openreview @iclr_conf @ylecun Perhaps the interesting next experiment for @aclmeeting is to use @openreview with all the gains of archived public reviews, updatable submissions & 2021-01-02 01:54:48 @SeeTedTalk @yoavgo @ChrisGPotts @aclmeeting I’m a fan of @openreview & 2021-01-02 01:28:32 @annargrs @yoavgo We should note the deviation between the broader ACL membership vs. the people we are hearing on Twitter! Already in 2017, the survey has 39% favour banning preprints before acceptance and 65% say double blind is more important than pre-prints. https://t.co/dLiyHMjM89 2021-01-01 06:52:44 @martinpotthast @yoavgo @ChrisGPotts Using an anonymous preprint server was suggested and allowed by the rules—and the OpenReview people built one (thx!)—but most people wanted to be on arXiv, with their names attached to their papers (partly for valid reasons) 2020-12-31 23:22:26 @yoavgo @ani_nenkova @ryandcotterell @ChrisGPotts No. We discussed this 2020-12-31 23:14:24 @yoavgo @ChrisGPotts @yuvalpi My most recent ACing has been for NeurIPS and ICLR 2020-12-31 23:09:11 @yoavgo @ChrisGPotts Of course, how people felt varied—I had an American, white, cis, male student who fervently believed that preprinting/promoting papers prior to peer review was just wrong—but overall this group was more European, less male, more minority, older, less working at companies 2020-12-31 23:02:29 @yoavgo @ChrisGPotts This group thought that having genuinely blind reviewing was the most important value 2020-12-31 19:28:21 @ryandcotterell @ChrisGPotts @ani_nenkova @yoavgo Precisely! That was exactly the rapidly expanding practice which made it seem unviable to take no actions but to pretend that we still had double blind conference reviewing. 
2020-12-31 18:25:53 @ani_nenkova @ryandcotterell @yoavgo @ChrisGPotts Where, precisely, I think the question is that the reviewer doesn’t know (i.e., remember) who or what institution wrote the paper while reviewing (so it is “anonymous”) not whether they have seen the manuscript. It would be good to do this experiment!

2020-12-31 18:14:37 @ani_nenkova @yoavgo @ChrisGPotts I think the goals were very clear: How might we be able to get most of the gains of speeding science and increasing NLP visibility by use of preprints while not too much undermining the benefits of anonymous peer review?

2020-12-31 18:12:06 @yoavgo @ChrisGPotts @aclmeeting Something that was never a goal but might be a side effect: Probably we now get lots of better ACL papers because they were drafted a month before the deadline rather than in the last two days

2020-12-31 18:10:03 @b_niranjan @yoavgo @ChrisGPotts Yeah, that does at least partially address the problem of papers stuck in multiple cycles of reviewing if they’re a little below the acceptance threshold but basically fine

2020-12-31 18:01:38 @yoavgo @ChrisGPotts I don’t think so. But I’d certainly be in favor of any new ideas that might be better while recognizing the diversity of people and opinions in @aclmeeting. Any ideas? I think ACL is trying to work on that now. Older version of this reasoning: https://t.co/CUqYx7gnff

2020-12-31 17:58:45 @yoavgo @ChrisGPotts which seemed to center on white, male, Americans were all for preprints (mostly killing anonymous reviewing) while another large group (mainly people outside the above group) believed strongly in preserving anonymous reviewing. Should we just fully go with one group? ...

2020-12-31 17:55:59 @yoavgo @ChrisGPotts Your suggestion, @yoavgo, is to move the dial all the way to the left or to the right, but such extremist positions seldom are optimal in a complex and varied world. We could survey again, but I suspect the situation is similar to where it was 3 years ago: one large group, ...

2020-12-31 17:52:51 @yoavgo @ChrisGPotts I take primary blame for advocating the anonymity period. It was an honest attempt at a compromise middle ground. With the passage of time, I admit that it seems a bit flawed, as more people aim for “the anonymity period deadline” but the real question is what would be better?

2020-12-30 19:30:18 RT @jbhay: Our paper on non-Māori speakers’ remarkable implicit knowledge of te reo Māori sound patterns and word-forms has just been publi…

2020-12-26 21:49:08 RT @andreagiraldez: "In the midst of a massive social justice movement, multidisciplinary artist Rashaad Newsome envisions human-centered A…

2020-12-23 00:44:50 RT @KyleCranmer: New Years resolution: close all my browser tabs

2020-12-23 00:43:50 @cmarschner @HinrichSchuetze Thx!

2020-12-23 00:39:37 RT @RBReich: Millennials own less than 5% of all US wealth. In 1989, when baby boomers were the same age, they owned 21% of all US wealth.…

2020-12-22 14:56:51 RT @StanfordAILab: We support our @StanfordAILab alum @timnitGebru, her critical research on fairness &

2020-12-21 22:11:24 RT @robreich: The stunning announcement of > https…

2020-12-17 21:33:24 The first version of this was awesome. The new version with more focus on contextualized discussion leading to action should be even better. https://t.co/gT6BONxQ7H

2020-12-17 01:49:22 A linguistic change in progress used by 78 year old Joe Biden: “Biden told reporters as he departed … ‘We agreed to get together sooner than later.’” https://t.co/b7679C6Zyo Usage growing (also with “rather” ). 2013 StackExchange thread: https://t.co/BaLKc8HJbs HT @jinpa1345 https://t.co/2HyaWcwTYZ

2020-12-17 01:38:15 @ani_nenkova If it’s not just random, it’s actually really linguistically and culturally astute, given the rise of “speak your truth”

2020-12-16 16:37:51 RT @StanfordHAI: Many proposed solutions to fix algorithmic bias are on a collision course with Supreme Court rulings on equal protection.…

2020-12-16 05:06:07 .@adriandaub speaks truth to Silicon Valley’s prophets: “There are all these tech CEOs who believe they are the victims of random meanies on Twitter. And you think, how can you be so blind so as to not understand what the power differentials are here?” https://t.co/8OZXCxyhld

2020-12-15 18:58:36 Running out of things to watch on TV? My recommendation for the end of the year break is Mystery Road (from Australia). (U.S.: available on Acorn TV via Amazon Prime or on Apple TV.) https://t.co/RZr3nrSh1U https://t.co/GVYvPmCxKt

2020-12-13 17:52:09 @hannawallach @brendan642 @Ted_Underwood @yoavgo @HAndySchwartz @dmimno I’ve always assumed that the reason that variational inference “won” was mainly due to the fact that Michael Jordan loved variational inference

2020-12-13 17:50:15 @hannawallach @brendan642 @Ted_Underwood @yoavgo @HAndySchwartz @dmimno Doesn’t it make sense that sampling from the right model should work better than exact inference in the wrong model? — modulated only by the fact that there tends to be high variance and a failure to mix well in the full model.

2020-12-12 05:38:17 RT @StanfordHAI: ICYMI: Today Congress passed the National Defense Authorization Act for 2021, which has broad implications for #AI. Here w…

2020-12-11 02:36:49 RT @russellwald: Job alert!! At the @StanfordHAI we have numerous new full time positions! Do you have a can do attitude with the goal of…

2020-12-09 16:25:25 RT @kareem_carr: the covid trolley problem https://t.co/3zC91cJGzi

2020-12-09 15:30:24 This talk by @drfeifei draws the big picture between the human condition, human freedom of expression, and the need to focus on human-centered artificial intelligence https://t.co/rmSDm128v8

2020-12-08 02:07:39 Looking back on the hype, VC funding, and huge genuine progress in AI, ML, and autonomous vehicles in the 2010s, I think this will come to be seen as an inflection point: Uber, After Years of Trying, Is Handing off Its Self-Driving Car Project https://t.co/XePvjbFPD7

2020-12-07 18:25:55 RT @Benazir_Shah: Sindhi becomes the first language from Pakistan to be digitized by Universal Dependencies - a project of Stanford Univers…

2020-12-06 15:31:10 RT @jackclarkSF: @chad_oda @timnitGebru There are some pretty good proposals out there. I like the 'National Research Cloud' idea from @Sta…

2020-12-04 18:18:22 @newplatonism @fwolf_mergeflow @stanfordnlp @facebookai There’s been a bit of work on GPT-2 but not as much. That model also learns some syntax, but, yes, it does seem like a simple language model is emphasizing more putting attention on topically predictive words. E.g.: Da Costa https://t.co/CfK7yHVNRk Vig https://t.co/jj8bHSPpPg

2020-12-02 17:32:28 RT @codeorg: Deon Nicholas, co-founder and CEO of @forethought_ai, takes us through a deep dive on how neural networks work. Learn how neur…

2020-12-01 15:10:25 @kmlawson @lucy3_li Where R greatly wins is precisely for statistics. Really no choice for some things like mixed effects models. However, although little advertised, Python’s statsmodels is surprisingly good, as soon as you just accept that your stats package was written by a bunch of economists

2020-12-01 15:07:36 @kmlawson @lucy3_li Yeah, there are packages (often using external APIs) that do quite a lot of text/NLP in R, but I think Python just wins these days for text/NLP/ML/GIS, and is a better general programming foundation (and a better job market skill)

2020-11-30 20:32:20 @Hey_tati Well, anyway, lovely to e-meet you – and I hope you’re also applying to Stanford for grad school!

2020-11-30 19:20:33 @Hey_tati The error is in the original article—shame on them!—but that should be 1.25 *million* not billion! If one in six people on the planet died in road traffic deaths each year, the human population would be pretty sparse by now! Nevertheless, a very good role for robotics/AI here!

2020-11-30 15:57:22 @Abel_TorresM @StanfordHAI E.g., I’d like to reflect the view of Nils Nilsson, who, in his (2nd) AI textbook, very clearly saw “reactive machines” including simple stimulus-response agents as part of AI, but they precisely lacked world models. https://t.co/YyAu3oNis1

2020-11-30 15:54:44 @Abel_TorresM @StanfordHAI If by a “representation”/“model” you mean some kind of world model, which can be used for various questions, then I suspect that this definition is too narrow. For instance, it would exclude most work in machine learning (building supervised, discriminative classifiers).

2020-11-30 03:18:25 @reynoldsnlp Thanks!

2020-11-29 00:41:31 I succumbed to threats and wrote my 2019–20 faculty report. Hot papers last year: Electra: Pre-training text encoders as discriminators https://t.co/UXrVN5dDEx, Stanza: A Python toolkit for many languages https://t.co/ZEHuBQPQcp &

2020-11-28 16:53:53 RT @roger_p_levy: This is a terrific paper! Among other contributions, @MKeshev &

2020-11-28 16:40:14 RT @ducha_aiki: Done with #ICLR2021 rebuttals as a reviewer. 2 score increases, 2 no changes. Overall, I am happy with @iclr_conf unlimited…

2020-11-28 16:39:50 The amazing rise of reinforcement learning! (With graph neural networks and meta-learning in hot pursuit. ConvNets? Tired.) Based on #ICLR2021 keywords HT @PetarV_93 https://t.co/ozKpNUVH1i

2020-11-28 16:34:02 RT @SBSNews: New Zealand's first African MP gave a powerful and emotional maiden speech in Parliament as he recounted his escape from Eritr…

2020-11-27 23:09:19 Note that this version has been updated with a November 2020 version at this URL: https://t.co/0HqODjaHm9 (Modern CMSs, what can you do?)

2020-11-27 23:05:24 RT @vrandezo: Not the worst set of definitions I have seen.

2020-11-25 21:09:47 RT @StanfordHAI: Bringing together the brightest minds across disciplines, the @StanfordHAI Junior Faculty Fellows program is creating oppo…

2020-11-24 22:10:19 RT @StanfordHAI: AI novice? Lost in the alphabet soup of AI, ML, AGI? Learn the key terms in artificial intelligence in this one-page read,…

2020-11-23 16:43:25 @michellearning @hoeferse Really, being a PhD student more qualifies you to be a company founder. That’s another job where you have to do everything: research, development, hiring, marketing, fund-raising, management—and learn the things you don’t know as you go

2020-11-21 20:47:06 @tuomo_h @Arash_Hajikhani @spacy_io @helsinkiuni @HYhumtdk Great to hear!

2020-11-21 18:20:24 @jrfinkel @eternalmagpie Thanks, Jenny!

2020-11-21 18:09:29 @tuomo_h @Arash_Hajikhani @spacy_io @helsinkiuni @HYhumtdk A couple of links: https://t.co/CII6NGL0Wn https://t.co/HyeIqIuH1n https://t.co/ETEVFVAMrX HT @TurkuNLP

2020-11-21 18:02:17 @tuomo_h @Arash_Hajikhani @spacy_io @helsinkiuni @HYhumtdk Fewer than English, but, for a language with ~6 million speakers, Finnish has _awesome_ language resources thanks to the great work of several groups: POS &

2020-11-17 21:57:17 @BDelipetrev Definitions get tricky! But then I’d say it’s not “fully pre-programmed”....

2020-11-16 20:55:56 RT @salem_alelyani: Prof. @chrmanning, this is your published AI definitions sheet translated to Arabic.

2020-11-15 15:44:49 @embee0 @StanfordHAI Thanks - probably there should be a bit more of that in the AI definition but on the other hand (i) the “other half” is way less than half these days and (ii) I think there is a correct recognition that smart behavior without the ability to learn and adapt is insufficient

2020-11-14 20:05:49 @DrDeclanORegan @curtlanglotz @StanfordHAI The definition definitely mentions “learning”! I suspect your definition sets the bar too high—and would exclude most things that people currently regard as (narrow) AI systems—and not just “media calls it AI” things like logistic regression, which can be reasonably excluded

2020-11-14 01:08:35 RT @russellwald: This policy brief on facial recognition demonstrates what @StanfordHAI does best: bring together the best minds, @drfeifei…

2020-11-13 20:31:40 @cmuell89 @StanfordHAI Yes, “intelligence” is a very hard one, which many people just punt on (“what people recognize as intelligence”), but that seems unhelpful. This is an instrumentalist defn, but “might” be okay for technical contexts. At any rate, I do think it is right to mention learning.

2020-11-13 16:28:45 My attempt at understandable but technically correct definitions for key terms in Artificial Intelligence in one page for @StanfordHAI. With thanks for helpful feedback from people on Twitter, I’ve revised and hopefully improved a few of the definitions: https://t.co/0HqODjaHm9 https://t.co/SHexPWj1xr

2020-11-13 14:27:08 RT @prfsanjeevarora: Our InstaHide allows users and IoT devices to "encrypt" data yet allowing deep learning on it. Minor efficiency and ac…

2020-11-13 00:44:28 RT @StanfordHAI: Join HAI as a Junior Fellow! In this five-year program, you can leverage the strength of Stanford's top colleges and facul…

2020-11-13 00:41:03 RT @russellwald: Original @StanfordHAI research on facial recognition distilled down into a brief. Highlights: an evaluative framework to b…

2020-11-12 04:38:10 RT @StanfordHAI: HAI and @StanfordCISAC are seeking applicants for our 2021-2022 fellowship program! Pursue research on policy issues relat…

2020-11-10 21:54:55 RT @EBKania: Don't miss this @StanfordHAI seminar tomorrow if you can make it. Shazeda Ahmed's research is always incredibly insightful, an…

2020-11-04 22:26:58 RT @brcblog: #NovemberWish POPE FRANCIS PRAYER INTENTION FOR NOVEMBER Universal Prayer Intention - Artificial Intelligence "We pray the th…

2020-11-04 22:17:38 RT @NicolasPapernot: Check out our recent piece on "Preparing for the Age of Deepfakes and Disinformation" for @StanfordHAI 's policy brief…

2020-11-04 22:16:28 RT @plefoll: .@magrawala "As the technology to manipulate video gets better and better, the capability of technology to detect manipulation…

2020-11-04 22:14:40 @jochenleidner Wow! Thank you, Jochen! (But it was 100% John that wrote the blog post.)

2020-11-02 14:17:28 American Exceptionalism at work! Opinion: ‘It’s Like You Want to Stop People From Voting’: How U.S. Elections Look Abroad via @NYTOpinion https://t.co/cRGbXWWTHY

2020-11-02 05:29:07 @lpachter Hanna Neumann, chair of pure mathematics at the Australian National University from 1964 (!). https://t.co/bccOIVOd0D

2020-11-01 22:19:43 @messy_meha_monk Definitely not in person at present. For remote, it’s up to individual faculty, some have worked with visiting students, but it’s hard to line something up, and certainly I myself think that I do enough hours on Zoom already

2020-11-01 22:08:56 A roundtable of ⁦@StanfordHAI⁩ leaders and friends provide a great concise summary of major AI and tech issues that will face the next U.S. administration—and in general countries around the world in the 2020s—from disinformation to healthcare https://t.co/cmECpv2wpm

2020-10-31 23:04:37 The world has changed so dramatically in the last year that it’s hard to recall now that exactly one year ago today, I was in Beijing (with ⁦@xiaohe_seattle⁩ &

2020-10-31 05:45:54 @panabee Yes!

2020-10-31 00:57:09 @panabee Space. Many other terms could also be included and I was very selective. Self-supervised learning is currently in vogue but just not nearly as general or common as the other terms included. But I deliberately hinted at it as a major kind of unsupervised learning.

2020-10-31 00:50:29 @fsids Or perhaps even just “computer systems” since while it’s always software in practice, that’s maybe an unnecessary restriction?

2020-10-31 00:42:49 @fsids Decisions and actions still do seem different to me. I accept the point on agents in ML. Really the “agents” language is canonical in AI, as in Russell &

2020-10-31 00:37:13 @vijayganti Yeah, I think I could try to word that phrase a bit better. My point there was only meant to be that humans learn to perform suitable techniques to solve problems and achieve goals, appropriate to context in an uncertain, ever-varying world” not the details of mechanism

2020-10-31 00:30:13 @_chapter10_ Sure, as long as I’m credited

2020-10-30 15:19:48 RT @StanfordHAI: “What if, instead of thinking of automation as the removal of human involvement from a task, we imagined it as the selecti…

2020-10-29 08:24:56 @msweeny @MDoornenbal The status and role of consciousness is a complex one—perhaps see https://t.co/3Dellqdg3P—but more than I intended to address here

2020-10-29 08:21:49 @fsids Maybe a fair point. Would changing “thinking” to “decisions” help a little bit?

2020-10-29 08:12:17 RT @StanfordHAI: When the arts and technology coalesce, the whole experience becomes that much more human. Discover how these two worlds ar…

2020-10-29 08:05:48 @burantiar At present, by URL, I guess. Maybe I should put it on arXiv?

2020-10-29 08:01:03 @yanezlang @fchollet Definitely

2020-10-29 07:58:38 @_ankurgupta_ I don’t think other forms of intelligence are excluded here. Art is a way of achieving certain goals.

2020-10-29 07:45:46 @ETN_Jan You would minimally need to show that the entity can also behave intelligently in very different environments by adapting and learning

2020-10-29 07:43:21 @stanfordnlp @msweeny @MDoornenbal As someone notes in a later comment, even though I wasn’t thinking of it at the time I wrote my definitions, the position I adopt is similar to the one @fchollet argues for in much greater detail in his paper https://t.co/2K3C7W8W2I

2020-10-29 07:28:02 @patrickruch Thanks—I agree with both these points: 1) yes, but the AI defn contrasts with HAI at the end

2020-10-28 21:45:22 @ak_panda @mte2o @mtsmiru Sure, translations with acknowledgment are most welcome!

2020-10-28 20:47:36 RT @MarietjeSchaake: A year ago, my journey at @Stanford started with this case for regulating the digital world to preserve democracy. It…

2020-10-27 14:59:48 RT @tylerraye: two monuments to scientific racism were removed from @Stanford's campus this weekend symbolic gesture…

2020-10-26 18:28:45 Artificial Intelligence Definitions: This (northern) summer, I spent more time than I’d like to admit coming up with a handout defining key terms in AI in 1 page, trying to be informative and suitable for non-specialists – let me know if you like them! https://t.co/baISijEzGK https://t.co/UAkeCT4Ha3

2020-10-26 18:16:12 RT @erikbryn: This is what’s possible when leaders don’t give up

2020-10-26 18:12:31 RT @erikbryn: Join me for the AI & This free virtual event will bring together visionar…

2020-10-23 00:06:23 RT @IPsoft: You ever wonder how AI understands and responds to the questions you ask it? Watch this video about our question generation wor…

2020-10-22 22:32:41 @ani_nenkova That’s fair, though it’s hard to come up with a better general measure. Chris Manning’s Judgment of Worth™?!? But, for this, there are enough last authors with both Findings and regular papers that I’m sure one could do something cleverer with paired statistics

2020-10-22 22:27:46 RT @curtlanglotz: Helpful AI-related definitions from @chrmanning and @StanfordHAI (PDF) https://t.co/XeXuRPqsSz @StanfordAIMI @Radiology…

2020-10-22 22:21:16 RT @seth_stafford: The redoubtable @johnhewitt - whom I first noticed for his 'probing for syntax' work w/ @chrmanning - is back leading a…

2020-10-20 15:49:16 It’ll be interesting to see in a year’s time how the distribution of number of citations varies between Findings of EMNLP 2020 and regular EMNLP 2020 papers. Just randomly skimming papers as they appear on social media, a lot of the time they look equally interesting to me.

2020-10-15 15:23:25 RT @curtlanglotz: "If only we could do self-supervised learning, but with a more structured latent space." Joshua Tenenbaum from @mitbraina…

2020-10-15 00:52:04 RT @noUpside: Anyway, one takeaway from today's news (and takedown-related meta-news) is that we just got a preview of what'll happen if th…

2020-10-15 00:03:05 RT @DeepLearningAI_: We’re thrilled to present "Heroes of NLP," our latest interview series. @AndrewYNg interviews @chrmanning, Kathleen…

2020-10-14 23:48:07 Update: There are more than twice as many new cases per day in Santa Clara County—a county of ~2 million people in a state easily in the quartile of US states with least #COVID19—than there are for Australia, a country of ~25 million. Data dashboard: https://t.co/zE7dh3z5hu https://t.co/M7zbnKxtu9

2020-10-14 23:24:02 This is the kind of place where @Twitter excels—you _can_ find an article on the topic in the @nytimes or @washingtonpost, but you’re very unlikely to unless you already know about it. For both, it doesn’t appear on the World News page—you have to specifically look in Africa News

2020-10-14 23:20:04 I’ve been horrified but inspired reading about the protests against SARS—the Special Anti-Robbery Squad—in Nigeria. Partly the same issue of police violence but completely different—you might be shaken down or bashed just for carrying a laptop bag! #SARSMUSTEND HT @tejuafonja

2020-10-12 21:52:21 RT @StanfordHAI: Our conference last week focused on the latest research on cognitive science, neuroscience, vision, language, and thought…

2020-10-11 23:26:41 @srush_nlp @yoavgo Ranking lists are certainly a big factor in many countries. But I think the other part is that NeurIPS—despite the name—is now perceived as very broad, whereas ICLR is understood as mainly neural networks still

2020-10-07 23:15:26 Re: Facebook will ban political ads after polls close Nov 3 “The moves come after executives, including Mark Zuckerberg, became increasingly alarmed by the presidential race”—The real problem is that this solution may work for the US, but not for every other country on Earth…

2020-10-07 20:37:35 RT @JesParent: Shout out to Josh Tenenbaum for using the oldschool classic NBA Jam (and minecraft, atari etc) in his set of screenshots for…

2020-10-07 20:37:15 RT @JesParent: wow @YejinChoinka's talk on common sense in AI was awesome. Really appreciate the revisiting common sense attitude and all o…

2020-10-07 20:07:22 RT @tobigerstenberg: Thought-provoking and funny talk by @YejinChoinka on modeling common sense, arguing for the importance of language…

2020-10-07 20:07:01 RT @DrJimFan: Prof. Aude Oliva's talk: integrating insights from cognitive science into AI. Now discussing the Moment-in-Time large-scale v…

2020-10-07 20:06:34 RT @ShikharMurty: Really interesting presentation by Yejin Choi, on using language as a substrate for producing intuitive commonsense infer…

2020-10-07 19:36:30 RT @DrJimFan: Prof. Yejin Choi: common sense intelligence. Many AI algorithms these days beat the benchmark without actually solving the un…

2020-10-07 19:36:23 RT @DrJimFan: Crazy results grounded in theory: you can exponentially *increase* learning rate and neural networks will still train success…

2020-10-07 19:36:10 RT @ShikharMurty: Sanjeev Arora starts off the afternoon session by motivating the need for mathematical understanding to help answer impor…

2020-10-07 19:35:36 RT @akjags: One highlight (among many) from Matt Botvinick's @StanfordHAI talk earlier today: unsupervised models (beta VAEs) trained to re…

2020-10-07 19:11:43 RT @StanfordPsych: Check out the @StanfordHAI conference on "triangulating intelligence: melding psychology, neuroscience, and AI" -current…

2020-10-07 18:10:07 RT @akjags: Great panel discussion on what's required for building human-like AI systems happening now at @StanfordHAI #NeuroHAI https://t.…

2020-10-07 18:09:45 RT @DrJimFan: Key takeaways from Chelsea's talk: data is the key ingredient for robot generalization. There are many ways to collect "inter…

Discover the AI Experts

Nando de Freitas Researcher at DeepMind
Nige Willson Speaker
Ria Pratyusha Kalluri Researcher, MIT
Ifeoma Ozoma Director, Earthseed
Will Knight Journalist, Wired