Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Not Available
The Expert's latest posts:
2023-05-22 20:49:59 A thread on MMS by author @MichaelAuli https://t.co/P6HGbLSqfh
2023-05-22 20:31:46 Correct link to paper: https://t.co/yFrcnSiqft
2023-05-22 19:42:56 MMS: Massively Multilingual Speech. - Can do speech2text and text2speech in 1100 languages. - Can recognize 4000 spoken languages. - Code and models available under the CC-BY-NC 4.0 license. - Half the word error rate of Whisper. Code+Models: https://t.co/zQ9lWms5TQ Paper:… https://t.co/SWSbEJrQUh
2023-05-22 15:45:40 @nareshshah139 @MetaAI They are a component of the future. They are the short-term future. They are not the medium-term future. At least not without some major changes.
2023-05-22 13:05:23 LIMA : LLaMA 65B + 1000 supervised samples = {GPT4, Bard} level performance. From @MetaAI https://t.co/FIuIo6agXa
2023-05-22 12:42:12 RT @hardmaru: LIMA, a 65B LLaMa fine-tuned only with supervised learning on 1000 curated examples, without any RLHF, demonstrates remarkab…
2023-05-22 12:28:09 @Newsalertswork @jiwa986 @erikbryn @GoldmanSachs @geoffreyhinton Studying the past to predict the future may be fraught with uncertainty, but it sure beats the alternative which is ignoring the past and pulling crap out of your backend. And no, this is not a false comparison.
2023-05-22 12:24:40 @Newsalertswork @jiwa986 @erikbryn @GoldmanSachs @geoffreyhinton I didn't know I agreed with them until I talked to them. In fact, they changed my mind.
2023-05-22 12:22:18 @muddletoes I sure hope that my writings are more comprehensible than Lacan's! "the impenetrability of Lacan's prose... [is] too often regarded as profundity precisely because it cannot be understood."
2023-05-22 12:01:11 @alain_co Couldn't that sometimes be beetroot? There's some of that too.
2023-05-22 11:53:25 @BretagnePoint Debonair or debonnaire means "having a sophisticated charm". I would totally go for that! Yec'hed mat <
2023-05-22 05:40:19 RT @munkdebate: Join us on June 22 for our public debate on #ArtificialIntelligence: Be it Resolved, AI research and development poses an e…
2023-05-22 05:39:47 @mgubrud @ESYudkowsky No AI will, either.
2023-05-22 05:38:53 @davidarredondo @ESYudkowsky That movie is beautifully filmed but gets just about everything wrong about AI.
2023-05-22 03:08:26 @truesteel23 @davidsancar The richest person in the world is French and lives in France.
2023-05-22 03:01:56 RT @VPSHendriks: An artificial intelligence system trained on words and sentences alone will never approximate human understanding. | @ylec…
2023-05-22 01:06:01 RT @nntaleb: ChatGPT is the modern version of Flaubert's "Dictionary of Received Ideas" (Dictionnaire des idées reçues), that is, a powerfu…
2023-05-21 23:34:54 @kourouklides @davidsancar Post-tax numbers would show even *lower* inequalities for France (not for the US).
2023-05-21 23:32:34 @g_fariello @davidsancar Simple: - not electing Republicans. - reducing the influence of money in political campaigns. - keeping the FCC "fairness clause" which would have prevented propaganda outlets like Fox News.
2023-05-21 23:26:30 @jeremyphoward @adamdangelo @robinhanson @SchmidhuberAI I sure did.
2023-05-21 23:24:34 @_jameshatfield_ My question was specific (with particular instantiations of A,X,Y, and Z). I just replaced them by symbols in the post to highlight the fact that LLM answers follow templates that can be applied unmodified to many instantiations of A,X,Y, and Z.
2023-05-21 23:20:03 @ESYudkowsky I can give you 100 reasons why turbojets would fail. Yet modern turbojets are insanely reliable. The proof is in the pudding. If we can make AI systems subservient and safe, they will be deployed. If we can't, they won't. And they won't kill us if we can't.
2023-05-21 23:11:00 @Newsalertswork @jiwa986 @erikbryn @GoldmanSachs @geoffreyhinton I picked a bunch of economists who I know for a fact have studied the effect of technology on labor markets and published peer-reviewed articles about it. It's not an appeal to authority, it's a reference to actual professional expertise. I refer to experts precisely because I am… https://t.co/e8K8Wns4uq
2023-05-21 20:19:14 @ESYudkowsky Objectives that make AI systems subservient to human bosses. Those objectives would be optimized at run time to plan any action taken by the system.
2023-05-21 19:56:43 @meat_computer @elonmuskewl Herrm, so there are over 2 billion people who are 55+ or FB employees? https://t.co/edRm8vUrgM "Number of daily active Facebook users worldwide as of 1st quarter 2023" (in millions) https://t.co/RfAENc17mx
2023-05-21 19:45:25 @jiwa986 @erikbryn @GoldmanSachs @geoffreyhinton Both Geoff Hinton and Yoshua Bengio are good friends for whom I have a lot of respect. But neither of them is an economist. They have not studied the impact of technological progress on the labor market, unlike Erik Brynjolfsson, Andrew McAfee, David Autor, Daron Acemoglu, ...
2023-05-21 19:29:10 @jenningsgreg Yes, and they are authoritarian governments.
2023-05-21 19:21:33 Wrinkled so much, I am not. But when 900 years old I reach, look as good I will not. https://t.co/ZJsJy1DDk3
2023-05-21 19:18:26 @s_batzoglou Wrinkled so much, I am not. But 900 years old, I am not either.
2023-05-21 19:13:55 @davidsancar Technology increases productivity, i.e. the amount of wealth produced per hour worked. That is intrinsically a Good Thing. The question is how societies organize themselves so that the benefits of increased productivity are shared equitably. This is a political question. The… https://t.co/u0iiwSmmjs https://t.co/JH3NMFPY9Q
2023-05-21 19:00:32 AI won't take your job. But it will transform it and create new ones. This NYT article has quotes from economists who specialize in the effect of technology on labor markets, such as David Autor, Daron Acemoglu, and @erikbryn : "Everybody I talk to, supersmart people, doctors,… https://t.co/ozqyJ1Gpss
2023-05-21 18:54:14 RT @erikbryn: There are lots of potential harms from AI to be concerned about. But if you want a brief respite, read "The Optimist’s Gui…
2023-05-21 17:00:21 Learn to write like an LLM! Question: what do you think of person A's declaration that X is caused by Y ? LLM Answer: As an AI language model, I don't have personal opinions. However, I can provide information on the topic. Person A is a renowned scientist known for his work… https://t.co/R6TGRZ35q9
2023-05-21 15:38:50 Many people are more capable than their boss. AI systems may become more capable than you, but you'll still be their boss. If you feel threatened by having a staff -- of humans or machines -- that is smarter than you, you are not a good boss.
2023-05-21 14:59:55 Wondering if people who are afraid of open-source AI infrastructure have, in fact, a deep distrust of human intelligence. Do they doubt that the benefits of AI will be overwhelming compared to the risks of misuse? Would they have had similar fears about the open internet?
2023-05-21 14:25:39 @StevenLevy Haha!
2023-05-21 14:16:57 @Eriler It looks like I need to make the underlying logic of this joke explicit. This is not an insult towards French people (I am French, too!). It is a joke directed at a category of philosophers that much of the world associates primarily with French schools of thought: philosophers… https://t.co/GXTbtjYga1
2023-05-21 13:56:36 @hifichris @Eriler I am French and I met Macron a few times. Although he has a background in philosophy, I find him neither angry nor arrogant. I find him smart, pragmatic, and interested in intellectual debates. Some may interpret this as arrogance, but that would be a mistake, independently of… https://t.co/M3Jvpb2PIn
2023-05-21 13:48:51 @idriss_neumann @Eriler My supposed generalization is that angry philosophers with a superiority complex are largely French. That is a reputation that certain currents of French philosophy have abroad (a consequence of post-modernism, I suppose). I'm sure that… https://t.co/sHAE5hnaG9
2023-05-21 13:34:24 @_fyr @chloratine @MaisOuVaLeWeb @Eriler Ok, let's review the logic here. I stated: "angry AND philosopher AND superiority complex IMPLIES French." I did not state "French IMPLIES angry OR philosopher OR superiority complex." Hence, French people (like myself) should not feel insulted UNLESS they are angry… https://t.co/HIaiMZmZlp
2023-05-21 12:52:30 @fattyfatman I'm not sure the name-calling played any role, but I'm a humanist. What did you go through to misread people's motivations and philosophies so badly?
2023-05-21 12:32:53 @JeromeColombain I don't know about your writing habits, but I would not recommend it.
2023-05-21 04:33:12 @chloratine @MaisOuVaLeWeb @Eriler If you are not a disgruntled philosopher with a superiority complex, you should not feel insulted.
2023-05-21 04:26:09 No need to find the ones that work in French. I've heard them all since elementary school.
2023-05-21 04:22:37 @0karma108 I like that one!
2023-05-21 04:22:05 @youcantcallmeal I've been cited as "Cun, Yann L." Which is one reason I changed the spelling from "Le Cun" to "LeCun"
2023-05-21 04:17:15 @clemmihai I heard those in first grade.
2023-05-20 22:30:21 @MaisOuVaLeWeb @Eriler I think you have not grasped how narrow the category of people targeted by my insult is, which was merely a response to a prior insult.
2023-05-20 22:23:26 RT @twentyminutevc: Are there any countries that rival the US in terms of scientific research? VP &
2023-05-20 22:19:12 Since we are talking about insults, This is one of the two rarest, most original, witty, and intelligent ones involving mutations of my name. I'll let you guess what the other one is. https://t.co/0ACOvNhDvm
2023-05-20 22:12:54 @OmarSaydThat @elonmuskewl Many of my friends and colleagues are not on Twitter. A friend once told me "Twitter gives me seizures." One can have intellectual discussions on FB. Twitter is for announcements, quick assertions, and jokes. But be prepared for insults and invective.
2023-05-20 22:07:19 RT @scienceisstrat1: Research interest in AI has soared Cc: @ylecun @Scobleizer @erikbryn @amcafee https://t.co/u3pa9XVihJ
2023-05-20 21:29:11 @elonmuskewl 2 minutes ago
2023-05-20 21:27:58 RT @scienceisstrat1: India will soon produce more CO2 emissions than the EU https://t.co/xIPqz1EBVq
2023-05-20 21:14:38 @seanmcbride The best protection we have against the misuse of technology (AI or otherwise) is the strength of our democratic institutions.
2023-05-20 21:13:06 @SamirKhazaka Interesting question. One can't do much about 3: what people pay attention to is a consequence of culture and human nature. For 2, the problem is difficult. Search engines and social networks must be somewhat centralized to protect user data privacy. Revealing the code of ranking… https://t.co/9lwT5xDUMG
2023-05-20 20:58:40 @Eric_Sadin @Eriler I would argue that technology liberates us from the "algorithmic organization of life" by taking care of repetitive tasks. But perhaps I don't understand what you mean by "algorithmic organization of life".
2023-05-20 20:56:08 @jcunniet No. That's what Facebook is for. If you want opinions whose authors have the courage to express them under their own name, you need to use a platform where pseudonyms are discouraged.
2023-05-20 19:13:22 @Eriler I was thinking of that one https://t.co/rikwLFC7KV
2023-05-20 19:10:07 @tprstly @kortizart For parody, you can try @boredyannlecun I don't know who is behind it, but it used to be funny.
2023-05-20 19:05:10 @QuantumG @the_boring_dad Me too. https://t.co/lpi7mMJi97
2023-05-20 18:38:55 The insult was: "Turing Award laureate, I doubt this guy can even pass the test" But the author seems spooked by my retweet.
2023-05-20 18:35:11 @jcunniet The author first made his tweet private, no doubt intimidated by my retweet, then eventually made it reappear.
2023-05-20 18:30:47 @Eriler Eric Sadin is an angry philosopher with a huge superiority complex embattled against everything and anything technological. He is also French, but that's somewhat redundant with "philosopher", "angry", and "superiority complex".
2023-05-20 18:18:09 @mulmbot Anything worth tweeting is worth tweeting loudly.
2023-05-20 16:07:22 AI has actually played a hugely *positive* role in 2 and 3: Content moderation on social networks makes massive use of AI to take down or down-rank objectionable content, including dangerous misinformation. This has made huge progress in recent years because of transformers and… https://t.co/oT2FvN2HUx
2023-05-20 16:03:46 3 obstacles for a piece of content to have an impact on people. 1. Production 2. Dissemination 3. Attention Computers &
2023-05-20 13:52:25 @trunghlt @Miles_Brundage Cool video generation is nice, but it doesn't take us any closer to machines that can learn how the world works by watching videos.
2023-05-20 13:39:37 RT @AJamesMcCarthy: Look in the upper arm of this galaxy- you'll see a star appear to blink in and out of existence. That's a supernova! Ve…
2023-05-20 13:37:15 RT @whereisyvette: SUPERNOVA ALERT : SN2023ixf was just discovered a few hours ago in the Pinwheel Galaxy, M101! At 21 million light year…
2023-05-20 04:49:40 @technomancers @Miles_Brundage I'm not saying we won't. I'm saying we don't. The exceptions today are managed fleets of highly-instrumented cars by Cruise and Waymo in small domains. And despite all the depth sensors and the detailed maps, it doesn't train itself in a few hours like a 17 year old with eyes.
2023-05-20 02:20:28 RT @PessimistsArc: Why's no one talking about the last time an emerging technology was rapidly slowed down due to perceived risk? - Block…
2023-05-20 02:07:57 @Miles_Brundage Sorry, but the absence of such robots has nothing to do with the speed of edge GPUs (or lack thereof). We just don't know how to do it.
2023-05-20 01:50:04 RT @HarryStebbings: You have to admit, it would be the most amazing show to hear @ylecun and @elonmusk in discussion on the future of AI on…
2023-05-19 22:34:26 @Miles_Brundage Ok then, 1. why don't we have level-5 autonomous driving, which any 17 year old can learn in 20 hours of training? 2. Why don't we have domestic robots that can clear the dinner table and fill the dishwasher, which any 10 year old can do? Predicting what happens in a fully… https://t.co/hriMupYQKt
2023-05-19 22:27:11 @clmt Congrats Clément &
2023-05-19 21:47:00 @SpacemanTheDJen There is much more to intelligence than language. All of animal intelligence, which includes what we would call common sense, has nothing to do with language. Everything infants learn is entirely non linguistic. https://t.co/XK6SdxRGjy
2023-05-19 21:43:52 @chris8279 No. https://t.co/XK6SdxRGjy
2023-05-19 20:43:24 1. Make nearby neurons have correlated outputs. 2. .... 3. Explain perceptual organization. https://t.co/1mINgnRSd7
2023-05-19 19:41:41 @misterbipster @stewartschley AR-15 aren't designed for hunting.
2023-05-19 19:39:29 I have to admit, this insult is actually funny I'm certainly less fluent in English than a 13B-parameter auto-regressive LLM. (I wanted to say "a two-bit 13B LLM", but that might have been misinterpreted). I'm hoping I don't confabulate as much though. https://t.co/UJH6dtjGGS
2023-05-19 19:31:07 Good interview of Rodney Brooks in IEEE Spectrum about AI in general, and the LLM craze in particular. Favorite quote: - It sounds like you don’t think GPT-5 or GPT-6 is going to make a lot of progress on these issues. - Brooks: No, because it doesn’t have any underlying model… https://t.co/XRASVbJI7D
2023-04-15 05:04:17 RT @AlainGoudey: The native #GPT4All app is now available on your computer (so no internet connection needed): Windows :…
2023-04-15 04:58:59 @JurgisBekepuris Ok then: Toxoplasma gondii dominates humanity. Which *clearly* shows that intelligence is *absolutely not* necessary for domination.
2023-04-15 04:55:25 @awadallah @mpshanahan But a dog has a much deeper understanding of the physical world than any LLM. This says nothing as to whether LLMs are useful or not. They are. Regular computer programs can beat you at chess, computing integrals, and planning a route. That doesn't make them smarter than dogs.
2023-04-15 04:48:30 @0xBurhanW @elonmusk Indeed. But I'm less fat than this.
2023-04-15 04:46:57 @vagrantcow @elonmusk Same story. I know many people with similar traits.
2023-04-15 04:37:02 RT @mpshanahan: A dog has the ability to negotiate the everyday physical world, something that no language model, and no robot, currently c…
2023-04-15 01:53:40 @RobertS32915096 @elonmusk Not enough.
2023-04-15 01:44:21 @elonmusk I'm not speaking about myself. I'm neither recluse, nor introvert, nor super-smart. I'm thinking of, I dunno, Alexander Grothendieck ? https://t.co/P1KtTbPFrf
2023-04-15 00:36:03 @QuintenFrancois Not really. More like shaped prose.
2023-04-14 23:25:30 Cats dominate humanity Super-smart scientists are often recluse and introvert. Which goes to show that intelligence is neither necessary nor sufficient for world domination.
2023-04-14 23:17:36 @sumdepony How do we get self-driving cars and domestic robots? How do we get disembodied virtual assistants to understand the real world as well as we do and have common sense?
2023-04-14 23:04:05 @anatelorenzen Even human intelligence is very specialized.
2023-04-14 15:24:51 @emerywells You are confusing intelligence and knowledge.
2023-04-14 15:10:01 @far__el No
2023-04-14 15:09:50 @emerywells But a GPT4-powered robot couldn't clear up the dinner table and fill up the dishwasher, which any 10 year old can do. And it couldn't drive a car, which any 18 year old can learn to do in 20h of practice. We're still missing something big for human-level AI.
2023-04-14 14:26:09 Before we can get to "God-like AI" we'll need to get through "Dog-like AI".
2023-04-14 14:22:47 @jeandpardaillan @TrubadurAta @nntaleb @cingiler_ This says pretty explicitly that there was indeed a ban by sultan Bayazid II in 1485, which was renewed by his successor Selim I in 1515. The ban was lifted in 1716, but no one dared print books before 1727. https://t.co/IwuvOo5HRd https://t.co/VcGQoGAx1E
2023-04-14 14:17:57 @nntaleb @cingiler_ This talks about the Qozhaya press, but also says that it was unique and isolated, had very limited impact, and only printed a few psalm books (apparently). The article mentions that Arabic typography didn't take off until Napoleon's Egypt campaign around 1798.
2023-04-14 14:09:28 @nntaleb @cingiler_ Also this (in French): Source: https://t.co/IwuvOo5HRd https://t.co/XdXctHMdVy
2023-04-14 14:05:19 @petadactyl @Levi7hart False. Pretty much every scientist I've hired is smarter than me. That's why I hired them.
2023-04-14 01:05:05 @logopetria @LisyMarek I'm never happy when people get hurt. But everything is a risk-benefit tradeoff. Cars &
2023-04-13 23:57:22 RT @perplexity_ai: We are excited to launch the next version of https://t.co/ut3wdOwUEd! Introducing login, threads, focus search, improved…
2023-04-13 23:40:32 @RichardSocher Above the wing spar is probably the safest place in such scenarios.
2023-04-13 12:23:23 An interview with Barron about AI, LLMs, the moratorium call, etc. https://t.co/CIU4E5NLC8
2023-04-13 12:20:51 @zeroXmusashi @ubiq1er @ezraklein Population is leveling. https://t.co/9lEvbrX3sy
2023-04-13 12:06:57 @LisyMarek The adult response would be: let's see if the availability of this type of information *actually* causes people to hurt themselves and others. But then, it could be like alcohol and cannabis: yes, people can hurt themselves, but prohibition causes more problems than it solves.
2023-04-13 12:00:03 @mireillemoret This obviously had very little effect. Convenience largely won.
2023-04-13 05:54:34 @GregAttilaKiss @ezraklein Governments, courts, and regulatory agencies do it all the time with corporations.
2023-04-13 05:50:22 @md_rumpf @ezraklein You design the machines so that *by construction* they can only produce outputs that optimize the objectives. The objectives define the "laws". Enforcement is unnecessary when laws are respected by design.
2023-04-13 05:44:27 @The24HourCCNA @ezraklein No. That idea is based on the false premise of the "hard take-off." It just won't happen that way.
2023-04-13 05:32:59 @nntaleb @cingiler_ Hmm, this scholarly paper claims that the circulation of printed books was very limited before restrictions were lifted in the 18th century. https://t.co/Fe8BLCteF3
2023-04-13 04:56:33 Let's see, Typing "how to synthesize codeine?" on Google gives you links to articles with detailed answers. Nobody has ever worried about that. But somehow, people are now demanding safety guardrails to stop LLMs from answering such questions. What? Why? https://t.co/BpzpKsQYuG
2023-04-12 23:44:26 @ubiq1er @ezraklein Why? Because every seemingly-exponential process turns out to be the beginning of a sigmoidal process. Examples? Airplane speed. Moore's law ...
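The claim above, that every seemingly exponential process turns out to be the early phase of a sigmoid, can be illustrated numerically. A minimal sketch (the parameters K, r, and t0 are invented for illustration):

```python
import math

# Logistic (sigmoidal) growth: looks exponential early on, then saturates.
# K (carrying capacity), r (growth rate), and t0 (inflection point) are
# illustrative values, not from the source.
def logistic(t, K=1000.0, r=0.5, t0=20.0):
    return K / (1.0 + math.exp(-r * (t - t0)))

# Early on, successive values grow by an almost constant factor e^r ≈ 1.649...
early_ratio = logistic(3) / logistic(2)
# ...but past the inflection point, growth nearly stops.
late_ratio = logistic(31) / logistic(30)

print(round(early_ratio, 3), round(late_ratio, 3))  # ≈ 1.649 vs ≈ 1.003
```

From the early samples alone, the curve is indistinguishable from a pure exponential, which is why the saturation (as with airplane speed or Moore's law) surprises people.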
2023-04-12 21:09:51 RT @NikolausWest: We got so excited by the release of @MetaAI’s Segment Anything Model (SAM) that we had to follow the hint on the blog pos…
2023-04-12 20:12:01 @GarrittyOf @ezraklein No.
2023-04-12 20:11:30 @ubiq1er @ezraklein That's the thing: the "hard take off" story is complete BS.
2023-04-12 13:42:40 I agree with @ezraklein : humanity has been dealing with the "alignment problem" for millennia by educating children and designing laws for individuals &
2023-04-12 13:32:23 The audio version is 14 minutes, the video version 30 minutes.
2023-04-12 12:33:02 @guysnovelutumba The link includes a text transcript, which you can run through a translator.
2023-04-12 12:25:17 With the video: https://t.co/i545bIuA5S
2023-04-12 12:21:37 My interview on France Inter this morning. https://t.co/dwmAGf6Sdl
2023-04-11 21:49:50 RT @mattturck: Being “good at prompt engineering” in 2023 is like being “good at Googling” in 2003
2023-04-11 21:13:34 RT @NYUDataScience: This week, the MaD Seminar Series collaborates with Courant Math Colloquium to present a research talk by Lenka Zdebovo…
2023-04-11 21:07:09 RT @DrJimFan: There're 3 major bottlenecks for robotics: data, data, data. Amazon ARMBench completely flew under the radar, but I believe…
2023-04-11 21:06:04 RT @NablaTech: Coming in ! This week's edition of The Healthcare Hoagie: Mental health is the fastest-growing marketplace for startu…
2023-04-11 21:05:55 RT @MetaAI: The Segment Anything Model (SAM) by Meta AI is a step toward the first foundation model for image segmentation. SAM is capable…
2023-04-11 18:46:59 RT @gabrielpeyre: Polyak heavy ball method speeds up gradient descent by introducing momentum. Corresponding to second order ODEs. https://…
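The retweet above refers to Polyak's heavy-ball method. A minimal sketch of the update rule v ← βv − η∇f(x), x ← x + v on a toy quadratic (the learning rate and momentum values are illustrative, not from the source):

```python
# A minimal sketch of Polyak's heavy-ball method on f(x) = x^2.
# The velocity v carries momentum; the update is a discretization of a
# second-order ODE (a damped oscillator). lr and beta are illustrative values.
def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=500):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # accumulate momentum, push downhill
        x = x + v                    # take the accelerated step
    return x

grad = lambda x: 2.0 * x             # gradient of f(x) = x^2
x_min = heavy_ball(grad, x0=5.0)
print(abs(x_min) < 1e-6)             # converged to the minimum at 0
```

With β = 0 this reduces to plain gradient descent; the momentum term is what gives the acceleration.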
2023-04-11 18:18:32 @mercurialsolo There will be a set of immutable objectives to ensure safety.
2023-04-11 18:16:24 An interview on France Inter tomorrow morning at 7:50. https://t.co/QZNHFE9N7K
2023-04-11 18:13:36 @akidapart Despite my blindness, I can see that you are staggered. My apologies.
2023-04-11 18:12:44 @olinsaul ChatGPT doesn't disagree with me. People who wrote the texts that ChatGPT was trained on disagree with me. Why should I care?
2023-04-11 18:11:20 @LoreMenace What he calls "active inference" is a form of what I'm talking about here: find values of latent or action variable that minimize an objective (the energy). The "free energy" trades off minimizing energy with maximizing entropy of the distribution over variables being inferred.
2023-04-11 18:06:39 @BAPearlmutter [citation needed] I'm certainly not claiming priority on inference-time objective minimization. That's what Model Predictive Control is (essentially) going back to the 1960s. But current Auto-Regressive LLM *do not* do that at all. That's why they are not controlable.
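The inference-time objective minimization discussed above (in the spirit of Model Predictive Control) can be sketched in a few lines. The dynamics, cost function, and discrete action set below are invented for illustration; real MPC uses a learned or physical world model and a continuous optimizer:

```python
# Toy sketch of inference-time objective minimization (MPC-style):
# instead of emitting outputs auto-regressively, pick the action whose
# *predicted* outcome minimizes an objective. Dynamics, cost, and the
# action set are hypothetical stand-ins.
def predict(state, action):
    return state + action            # trivial stand-in for a world model

def cost(state, target=10.0):
    return (state - target) ** 2     # objective: reach the target

def plan(state, actions=(-1.0, 0.0, 1.0)):
    # the "inference-time optimization": search the action set for the
    # candidate with the lowest predicted cost
    return min(actions, key=lambda a: cost(predict(state, a)))

state = 0.0
for _ in range(15):
    state = predict(state, plan(state))
print(state)  # settles at the target, 10.0
```

The point of the contrast with auto-regressive LLMs is that here the objective constrains every output by construction, which is what makes the behavior controllable.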
2023-04-11 17:37:02 Top companies by R&D
2023-04-11 12:20:03 @QRDL This would be "laws" (in the form of objective functions) governing robot behavior. They would be hardwired into the systems, so enforcement would be automatic.
2023-04-11 12:15:51 @wschroll Yes, hardwiring "laws" is the idea. I call it "Aligned Machine Intelligence" At inference time, AMI produces its output so that this output minimizes a set of objective functions. Some objectives are hardwired and immutable (for safety). Others are trained through human feedback
2023-04-11 12:10:56 @wschroll Laws become inadequate and need to be fine-tuned when society or technology evolves. When new practices appear, &
2023-04-11 12:00:36 @alsaai_eth They can't generate revenue for very long if their products are dangerous. They would be sued into oblivion, regulated to death, or both.
2023-04-11 11:23:36 Me: 1. Design AI systems whose behavior optimizes objectives at *inference* time. 2. Design/fine-tune objectives that align their behavior to human values. It's like lawmaking. AI gadfly with acute strawmanitis: "LeCun: if we can solve AGI, alignment will come along for free."
2023-04-09 23:16:56 RT @gordic_aleksa: The rate of progress in LLM optimization is just mind-blowing! Running 13B LLMs like LLaMA on edge devices (e.g. Mac…
2023-04-09 21:10:55 @plattttttttt @elonmusk Yes.
2023-04-09 21:10:43 @mihaisafta_ @elonmusk @ESYudkowsky How would you know that it's "a big if" ? Do you have a design for a superhuman AI? Can you show it can't be made safe? Could people in the 1920s imagine that giant jet-powered flying machines would transport people across the globe with incredible levels of safety?
2023-04-09 21:06:48 @accretionist @elonmusk The solution to this particular problem is to limit the power of plutocrats. Not to limit the research and development of AI. Otherwise, you would have to ban *every* single technology that can possibly be used by plutocrats to do bad things. That's pretty much everything.
2023-04-09 21:01:10 @4PFinance @heydave7 @elonmusk Unless you know how to build a superhuman AI system, claiming that they can't be made safe is complete speculation. Just like claiming in the 1920s that air travel could never be made safe would have been speculative and turned out to be wrong.
2023-04-09 20:49:18 A complete misrepresentation of my position. There are risks &
2023-04-09 20:46:28 @mattyglesias Iterative design &
2023-04-09 20:39:53 @VerdySylvain @EmmanuelMacron Of the 14 authors of LLaMA, the open-source LLM distributed for free by Meta to researchers, 11 are at FAIR-Paris. The talent is there.
2023-04-09 20:34:54 @heydave7 @elonmusk A complete misrepresentation of my position. There are risks &
2023-04-08 20:13:55 Not really surprising. https://t.co/cir2aScC4y
2023-04-08 20:03:58 Interesting graph. https://t.co/5ShBdEN4nr
2023-04-08 16:21:20 @yzingher @erikbryn @elonmusk @Tesla How?
2023-04-08 16:20:49 @angeloki @erikbryn @elonmusk @Tesla Hydro is limited by geography. It works wonders in Costa Rica and Québec. Elsewhere....
2023-04-08 16:02:02 RT @Nicolas_Colin: OK this is going too far. Here's a short thread about understanding the French pension debate (1/7)
2023-04-08 13:27:13 Survey by country: "Products and services using AI have more benefits than drawbacks" China: yeah, AI good! South Korea, Turkey, Brazil: Meh. Europe: AI bad. France, Canada, Netherlands, US: OMG, we are doomed! From the Stanford AI Index: https://t.co/qzGXd5oApw https://t.co/P7e2vEQys7
2023-04-08 12:15:56 @elonmusk @yoavgo No surprise there.
2023-04-08 12:14:05 @ftuuky @ID_AA_Carmack Except that all of this "training" must fit in 800 MB (size of the genome). And whatever part of this "training" distinguishes us from chimpanzees fits in 8 MB (1% difference between human and chimp DNA). This is really not much. GPT-3 needs 350 GB (2 bytes/weight).
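The arithmetic in the tweet above can be checked back-of-the-envelope style (the ~3.2 billion base-pair genome size is an assumed round figure, not stated in the tweet):

```python
# Back-of-the-envelope check of the sizes quoted above. The ~3.2 billion
# base-pair figure for the human genome is an assumption.
genome_bases = 3.2e9
genome_bytes = genome_bases * 2 / 8       # 2 bits per base -> ~800 MB
human_chimp_diff = 0.01 * genome_bytes    # ~1% difference  -> ~8 MB

gpt3_params = 175e9
gpt3_bytes = gpt3_params * 2              # 2 bytes per weight -> 350 GB

print(genome_bytes / 1e6, human_chimp_diff / 1e6, gpt3_bytes / 1e9)
# 800.0 MB, 8.0 MB, 350.0 GB
```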
2023-04-08 12:07:58 The gist of our arguments against the 6-month AI moratorium at @VentureBeat. With @AndrewYNg. https://t.co/TWKgPNmoGN
2023-04-08 01:00:18 @elonmusk Not mentioning *stochastic* cockatoos.
2023-04-08 00:53:41 RT @DrJimFan: Why does generative AI struggle with hands? It is not a mystical Bermuda Triangle in the latent space. There're compelling r…
2023-04-08 00:51:39 @artificialguybr It's not entirely up to me.
2023-04-07 23:35:15 Does it run LLaMA 7B? is the new Does it run Doom? https://t.co/QyLyE55Kou
2023-04-07 22:19:12 @jeremyphoward Indeed. But I got a few nice pictures. Good deal. https://t.co/NjfbN91CnR
2023-04-07 22:12:23 @jeremyphoward Well, while we are on the topic of nice-looking-but-nasty Aussie birds, this kookaburra stole a piece of steak from our barbecue. https://t.co/yVg5ynItLG
2023-04-07 21:20:31 @bitcloud @ID_AA_Carmack No, it would not. https://t.co/XK6SdxRGjy
2023-04-07 21:19:38 @ID_AA_Carmack more like a thousand times more. Between 1 and 2 trillion tokens. It would take a person 22,000 years to read through 1 trillion words at normal speed for 8 hours a day.
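The 22,000-year figure above can be sanity-checked with quick arithmetic (the ~250 words-per-minute reading speed is an assumed typical value, not stated in the tweet):

```python
# Sanity check of the ~22,000-year reading-time claim. The 250 wpm reading
# speed is an assumed typical value.
words = 1e12                                  # 1 trillion words
words_per_day = 250 * 60 * 8                  # 120,000 words in 8 hours
years = words / (words_per_day * 365)
print(round(years))  # roughly 22,800 years
```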
2023-04-07 21:10:56 @erikbryn @elonmusk @Tesla The only problems to solve now are: (1) large scale energy storage. (2) long-distance distribution. Storage is not solved. Batteries are not scalable. Hydrogen (by breaking up H2O) could work but existing methods are either inefficient or not scalable, requiring exotic catalysts
2023-04-07 21:06:48 And cockatoos are worse. They'll gouge your eyes out if they don't like you. I'm calling for a 6 month moratorium on cockatoos. https://t.co/mI2Pq9gS83
2023-04-07 20:39:35 @yoavgo The fact that it needs to be discussed is sad.
2023-04-07 20:30:19 RT @jdboachie: AI is going to be an amplification of human intelligence. Why would we want to stop that? Amazing thirty minutes with @ylecu…
2023-04-07 20:29:27 RT @alfcnz: As per your request, the latest updates on the book. https://t.co/DCIUyIPTOM https://t.co/2ZcHr8suJ2
2023-04-06 21:16:09 @ElieMesso Not only AI systems will have "laws" (or rather, objective functions they must optimize) they will have to obey them by construction, unlike humans.
2023-04-06 17:34:02 @olivierzongo9 Why don't we have Level-5 autonomous cars? Or domestic robots that can clear a table and fill the dishwasher? Current AI is missing fundamental things.
2023-04-06 17:32:02 @MarcioK People said that about compilers. Before that, they said that about computers. About calculators. About electric and steam power. About horses. About iron. About agriculture. About bronze. About cut stones.
2023-04-06 17:24:11 @pfreeideas Attempting to predict the future without looking at the past is the surest way to make stupid decisions. [incidentally, predicting by getting information from the past is the essence of learning. Including machine learning]
2023-04-06 17:05:28 @JimDMiller @alexandrosM @npabon15 This argument sounds awfully similar to the one used by the Chinese government to block access to Wikipedia, Google, Facebook, and the New York Times.
2023-04-06 16:51:56 @officialKrishD That's my point.
2023-04-06 16:42:16 Inference-time objectives. Planning &
2023-04-06 16:24:27 RT @cosmo_shirley: We are organizing "Cosmic Connections": An AI X Astro Symposium in @SimonsFdn at @FlatironCCA! If you are a student/r…
2023-04-06 14:05:49 Machine intelligence is a way to amplify human intelligence, just as mechanical tools amplify human physical capabilities.
2023-04-06 12:57:36 @recurrentluigi No
2023-04-06 12:53:57 @TonyZador Konrad Zuse built a 22-bit Turing-complete, programmable, electro-mechanical computer in 1941, the Z3. So the idea of using bits for computation is older than McCulloch & Pitts.
2023-04-06 12:14:53 History shows over and over that society and people's well-being makes progress with more intelligence: better skills, literacy, education, creativity, culture, communication, &
2023-04-06 12:01:51 @npabon15 The Catholic clergy worried very much about the "safety problems" of the printing press. They were right: it reduced their grip on European society. It caused a bunch of religious rifts and conflicts. But society made progress because of it.
2023-04-06 11:58:22 @you2045 We're talking about the 15th century here.
2023-04-06 11:57:26 @MuhsinAli99 Do you, ever?
2023-04-06 11:56:22 @max_paperclips I don't disagree.
2023-04-06 11:55:55 @cingiler_ That only came in the 18th century. 300 years of lost literacy is hard to catch up with.
2023-04-06 11:50:37 @asnar002 Yes, but that requires hard-core research, not just more data, more GPUs, and more RLHF duct tape.
2023-04-06 11:42:47 Repeat after me: 1. Current Auto-Regressive LLMs are *very* useful as writing aids (yes, even for medical reports). 2. They are not reliable as factual information sources. 3. Writing assistance is like driving assistance: your hands must remain on the keyboard/wheel at all times https://t.co/uF0zjO6thE
2023-04-06 11:36:02 RT @rao2z: Aw C'mon! Who are we going to believe? A 20 billion $ "Search by Imagination" service or some @washingtonpost reporter? (My hea…
2023-04-06 11:35:10 Missing something important? Oh, you think? https://t.co/cem83lHNR4
2023-04-06 11:33:46 RT @jpineau1: The notion of "object" (or "concept") is contextual and controllable. Segment Anything allows a greater degree of expressivit…
2023-04-06 11:32:48 RT @DeepLearningAI_: Join a special event happening this Friday, April 7 at 9:30 AM Pacific Time with @YLeCun and @AndrewYNg as they discus…
2023-04-06 11:09:25 Yes, regulate applications, not R&D.
2023-04-06 11:07:12 More info https://t.co/fiMS2VFnqB
2023-04-06 11:06:53 The Ottoman Empire limited the dissemination of printed books, fearing religious &
2023-04-06 10:51:24 Some good remarks about the mood of many AI researchers and engineers at the moment. It's easy to make two mistakes and get depressed or feel burned out: 1. Thinking that AI is "solved" or will soon be. 2. Thinking that one can not contribute. Both are false. https://t.co/YBfi5BAQl2
2023-04-06 05:31:58 RT @raphaelmilliere: New preprint! What does it take for AI models to have grounded representations of lexical items? There is a lot of di…
2023-04-06 05:28:50 RT @Jake_Browning00: Very exciting and provocative new work by @raphaelmilliere on a topic recently taken up by @ylecun @davidchalmers42 @B…
2023-04-05 18:54:28 RT @jcjohnss: I'm excited about Segment Anything released from FAIR today. It tackles an old problem (find objects in images) at large scal…
2023-04-05 18:54:16 RT @AndrewYNg: Yann LeCun and I have thought a lot about the proposed 6 month AI pause, and plan to chat about it on Friday - the questions…
2023-04-05 18:37:48 SAM: Segment Anything Model from FAIR. Foundation model for image segmentation. Demo: https://t.co/Ai29kp5dfs Blog: https://t.co/TiORmyDIeM Paper: https://t.co/Qppcl9mIKU Code: https://t.co/4yOXB4WniI Dataset: SA-1B , 11 million image, 1 billion masks https://t.co/b1fBRPMnmm https://t.co/3sL8wvIIMz
2023-04-05 17:35:51 This Friday, 12:30-13:00 EST. https://t.co/oCNyguNxpO
2023-04-05 16:15:20 @MahdiA_IO Yes
2023-04-05 16:14:28 RT @tobias_rees: got many interview requests from journalists interested in AI &
2023-04-05 12:25:10 @Maizek_ They might be OK for image generation. But they are really not good to learn features for recognition.
2023-04-05 12:23:51 @k_saifullaah @cmseibold MLMs (BERT style) are most definitely generative models, trained contrastively (a special form of denoising auto-encoder). They are *not* auto-regressive (unlike GPTs), but they are generative.
2023-04-05 12:21:00 @adfiniteai I don't hate them. Making codes fuzzy is a good way to regularize AE to avoid a collapse to the identity function. But the features they produce are just not very good for image recognition.
2023-04-05 12:18:24 @danilodjekic Indeed. VICReg and I-JEPA. https://t.co/fBG8xxDXlt
2023-04-05 12:17:10 @cmseibold They are a kind of denoising auto-encoder, and hence generative models trained with a contrastive method. But their features are not good unless you fine-tune the entire network. They depend on using transformer architectures, which don't scale well for large images and video.
2023-04-05 12:12:04 A fireside chat about AI at the NYU Paris campus. Hosted by CNRS philosopher &
2023-04-05 02:29:25 RT @svlevine: We've released the Koala into the wild The Koala is a chatbot finetuned from LLaMA that is specifically optimized for high-…
2023-04-04 23:05:20 RT @erichorvitz: At the White House today for discussions with the President on the risks and opportunities of #AI. https://t.co/ukWTV6LGsn…
2023-04-04 23:01:26 RT @peteskomoroch: The transcript of this debate is really interesting and worth reading. @ylecun makes good points:
2023-04-04 22:57:11 @literallydenis @primalpoly @ESYudkowsky You are right.
2023-04-04 22:46:21 @OneDeanBocobo I don't think I'm on the euphoric end of the spectrum. Perhaps you're thinking of Ray Kurzweil. I think building human-level AI that is safe is hard work, but doable. And failure is not human extinction but stupid AI.
2023-04-04 22:43:47 @JediStartup We can design AI systems to be all that (empathetic, seeking approval from humans, etc), but unlike humans, we can explicitly design their intrinsic objectives to be non aggressive, submissive, etc. We can't do that with humans (at least not ethically so).
2023-04-04 22:30:10 @grbradsk Evolutionarily, the smart ones can survive on their own, while the less smart ones cannot survive without influencing others to help them.
2023-04-04 22:27:01 @Vert_Noel Uh, yes, but what's the connection?
2023-04-04 14:58:37 @syhw @togelius I've made that point numerous times in my talks of the last 7 or 8 years!
2023-04-04 13:26:31 @iruletheworldmo @ESYudkowsky Not nearly as interesting phonetically.
2023-04-04 13:23:14 My claim is that AI alignment will be manageable &
2023-04-04 13:10:43 RT @togelius: I agree with @lemire: Pausing AI research is neither practically doable nor desirable. Fortunately, as a society we have plen…
2023-04-04 13:10:28 RT @michaelshermer: I’ve been asking this same question @leecronin &
2023-04-04 12:45:08 4. Intelligence does not immediately cause an entity to want to "take over" 5. A very dumb but specialized entity can kill a smarter one, e.g. virus vs human. Julian argues that MS Excel can do intelligent tasks and, in some ways, has already "taken over" our lives
2023-04-04 12:39:36 Julian reminds us: 1. all intelligence is specialized, including human intelligence. 2. being smart in some domains makes you strong in some environments but weak in others. 3. Intelligence does not immediately cause a thing to be able to "take over" 2/
2023-04-04 12:36:19 Excellent essay by my dear NYU colleague Julian @togelius about the existential threats from rogue AI, or rather, about the insanely-overstated likelihood of such threats. 1/ https://t.co/DOOgmGHIhr
2023-04-03 21:50:18 @faroukianoxide @balazskegl Elon revoked OpenAI's access to Twitter data. So the next GPT won't be trained on recent tweets.
2023-04-03 21:18:00 @balazskegl Which shows that not all human knowledge is expressible through language. Arguably *most* of human knowledge is completely non linguistic. Which is why LLMs trained solely from text will never come close to Human-Level intelligence.
2023-04-03 20:02:04 @MaxiCaveat Not particularly, unless you were Herbert Simon (Turing Award and Nobel in Economics). But perhaps more than psychologists talking about AI ?
2023-04-03 19:52:53 RT @andriy_mulyar: GPT4All and LLaMa.cpp Python Bindings Are Here Over the weekend, an elite team of hackers in the gpt4all community…
2023-04-03 19:47:37 RT @michaelshermer: Dear @ESYudkowsky You stand a far greater chance of dying from lighting strikes, collisions with deer, peanut allergies…
2023-04-03 19:38:00 Well, but learning image representations with generative models never actually worked. When it comes to SSL for images, GAN, VAE, denoising AE and other gen models have been a bust. What has worked is Joint Embedding (non generative) Architectures like Siamese nets, JEPA &
2023-04-03 19:26:18 RT @_rockt: I'm not scared of some AGI taking over the world anytime soon. I'm scared of society and in particular our education and wealth…
2023-04-03 19:08:29 @gershbrain ConvNets were never meant to be accurate models of the whole visual cortex, but simplified/abstract models of the foveal area of the ventral pathway. Some choices were due to convenience, simplicity, or efficiency. E.g. the 1st versions had strided conv but no separate pooling.
2023-04-03 16:47:28 @iruletheworldmo @sleepinyourhat No.
2023-04-03 16:47:20 @LeszBuk @sleepinyourhat Animals, including humans, are steered by the intrinsic objectives built into them by evolution, by their life experience, and by their education.
2023-04-03 14:19:24 @claudes24060374 Well, you have the choice between listening to the scaremongers who tell you things like "if the academics are wrong, your kids die", and scientists like me who tell you they will not die because of AI. In fact, their lives will be enriched by AI. Your choice.
2023-04-03 14:12:05 @TransitoryInfl Here is the thing: like many people who bloviate on Twitter, that person is a self-appointed "expert".
2023-04-03 07:16:49 @Marco_Pinnisi @sleepinyourhat One has to make it work first.
2023-04-03 06:50:17 A useful feature on the Twitter app would be a quick way to say "you don't know what you are talking about." Perhaps an acronym "YDKWYATA", or an emoji
2023-04-03 06:25:50 RT @nsaphra: The thing about climate science is that their doomsday forecasting is based on actual physics models and simulations instead o…
2023-04-03 06:22:25 Paradoxical that normally pro-business American conservatives are positioning themselves against tech companies and in favor of regulating AI. https://t.co/5R78d4Jlye
2023-04-03 06:17:30 This. The blue area also includes "people that feel qualified to tell people who do research on AI that they don't know what they're doing" https://t.co/46h0n91bvk
2023-04-03 06:15:22 Everything is a remix. Filmmaker Kirby Ferguson added an interesting 5th part about AI &
2023-04-03 06:14:58 @balazskegl Most people do this subconsciously and do not realize what they're actually doing.
2023-04-03 06:14:09 @balazskegl For starters, explain that the main item in GPT's answer is wrong. You don't actually pull the handlebar in the direction of the turn. You first pull it in the *opposite* direction so as to tilt the bike into the turn. Only then do you pull it in the direction of the turn.
2023-04-03 06:07:14 Serious knee-jerking at play here. https://t.co/bRSUZaWFku
2023-04-03 06:02:12 RT @rao2z: I am both bemused and confused by the "disaster backchaining" mindset: Start with something serious--climate, asteroids, stem ce…
2023-04-03 05:58:27 "There is no reliable technique for steering the behavior of LLMs" Is one of the 8 things @sleepinyourhat wants us to know about LLMs. This is a flaw of *auto-regressive* LLMs. https://t.co/ktGSfGmJyK
2023-04-03 03:25:34 @Marc_Compere @rao2z It's fundamentally different from producing tokens one by one auto-regressively. It's not different from latent-variable inference in graphical models or from model-predictive control in control engineering.
2023-04-03 03:19:32 @ElieMesso We are building propeller airplanes, and you are asking for regulations of jet engines.
2023-04-03 02:49:33 @XiaohuiChen18 That's just BS.
2023-04-03 02:49:05 @ElieMesso Because you can't regulate what nobody knows how to build.
2023-04-03 02:46:55 @elonmusk My point exactly. It took 50 years between the first flights and the creation of the FAA. Why be scared of AI when we don't even have a blueprint (let alone a demo) of a system capable of human-level intelligence ? It's like Otto Lilienthal being scared of engine failures https://t.co/5mhiL93Gxi
2023-04-03 02:24:44 @ckartik_ As someone who teaches a course on deep learning, I beg to differ.
2023-04-03 02:09:49 @rao2z Well, that's why I'm advocating that AI systems should produce outputs by optimizing objectives at *inference* time. Training them merely makes that process more efficient.
2023-04-03 00:40:37 Some folks say "I'm scared of AGI" Are they scared of flying? No! Not because airplanes can't crash. But because engineers have made airliners very safe. Why would AI be any different? Why should AI engineers be more scared of AI than aircraft engineers were scared of flying?
2023-04-03 00:15:37 RT @ReligionProf: Lots of people don't understand what ChatGPT is and what it's designed to do. They're surprised it makes things up and sa…
2023-04-02 21:29:15 @karlrgibson1 Don't you think worrying about the proper design of parachutes before the invention of the airplane is a little too early?
2023-04-02 21:20:27 RT @nsjersey: This is basically a 20-year gap. Wow. https://t.co/4L37FdeVkj
2023-04-02 21:17:30 @pmarca How about signaling a sufficient condition for mere competence? Merit doesn't matter nearly as much.
2023-04-02 20:58:36 @beadle1989 No, it's not.
2023-04-02 20:57:19 @ansgarjohn .@geoffreyhinton and I have been friends for 37 years. We don't disagree on many things. He says that an AI takeover is "not inconceivable" and I can't disagree. But I also believe it's very, very low probability and preventable rather easily.
2023-04-02 20:47:27 Many AI safety discussions today seem as speculative as discussions about airliner safety in 1890. Before we have a basic design &
2023-04-02 20:34:18 An oversimplification, but a rather accurate one. Not clear to me that @geoffreyhinton is that far to the left of me. https://t.co/y6Cv0OZWq1
2023-04-02 19:53:16 @maniaciaciec @ESYudkowsky If you claim that there is a teapot floating between the orbits of Jupiter and Saturn, the burden is on you to prove it, not on me to refute it. [With thanks to @RichardDawkins for this example]
2023-04-02 19:41:06 @rich_gast Scientific debates are good. Here is an example below. But debating AI ethics and safety with extreme AI doomers is like debating evolution with creationists. Pointless. https://t.co/egIJr3U4cz
2023-04-02 19:32:21 @ChrisStoecker @balazskegl When it comes to content moderation, such as the detection and take-down of hate speech and calls to violence, AI is not the problem. AI is part of the solution. https://t.co/YkPvJH6P9x
2023-04-02 18:50:13 Teratogenerative AI : producing monsters with AI. https://t.co/iAqyoYbXNe
2023-04-02 15:09:03 @benedictevans @elonmusk @erikbryn The US regulating agency certified Tesla's "FSD" as Level-2 (out of 5). The only manufacturer to have obtained Level-3 certification in the US is not Tesla but Mercedes (using technology from Nvidia).
2023-04-02 15:03:13 @ESYudkowsky I can propose an infinite number of improbable doomsday scenarios. They might make for fun Sci-Fi. But they are not worth anyone's time to refute one by one. They would have to be realistic to be worth refuting.
2023-04-02 14:55:10 @FrogChowder I don't engage with arguments from creationists and flat-earthers either.
2023-04-02 14:53:32 @MichaelOumano This is a complete misrepresentation. I made a historical point about the car and aircraft industries. Every technology has benefits AND risks. Some risks can be anticipated and mitigated in advance. Others are hard to predict and must be corrected as they emerge.
2023-04-02 14:40:34 @KnutarMike MS has several AI ethics efforts. They only got rid of one. I assume because it wasn't particularly effective.
2023-04-02 14:34:55 @baturinsky Regulate *new applications* wherever there are *real* public safety concerns. Existing application areas are already regulated, whether they use AI or not (e.g. in transportation, health care, etc).
2023-04-02 14:25:48 @NicolasMauduit Works pretty well for the aircraft industry. Also, for all kinds of interoperability standards: telephone, internet, encryption, video compression, banking, payment cards, USB..... Ok, scratch banking
2023-04-02 14:19:23 @MarethBrian Exactly. It makes sense to regulate applications (most of them already are, e.g. in transportation and health care). Regulating R&D does not.
2023-04-02 14:15:19 @bitcloud If you believe in the myth of the "hard take-off", *and* you hold the ridiculous belief that AI alignment is impossible to achieve *before* turning on an all-powerful system, then you might freak out and be subject to a nuke-data-centers-style hysterical meltdown.
2023-04-02 14:09:36 @MichaelOumano Think about the early days of automobiles: weak brakes, no seatbelts, no bumpers, no traffic signs. Yes, people died. Then, we had disk brakes, belts, ABS, airbags, driving assistance, speed limits, traffic signals.... Same story for pretty much every new tech ever deployed.
2023-04-02 14:05:37 @ndiakopoulos Are you talking about LLMs? Are they useful? Are they dangerous? Is their usefulness overwhelmingly larger than the dangers? (You know, like cars, airplanes, kitchen knives, gas stoves, smartphones....)
2023-04-02 13:49:22 The *only* reason people are hyperventilating about AI risk is the myth of the "hard take-off": the idea that the minute you turn on a super-intelligent system, humanity is doomed. This is preposterously stupid and based on a *complete* misunderstanding of how everything works.
2023-04-02 13:45:54 Every new technology is developed and deployed the same way: You make a prototype, try it at a small scale, make limited deployment, fix the problems, make it safer, and then deploy it more widely. At that point, governments regulate it and establish safety standards. 1/
2023-04-02 13:39:44 @JimPhos @ChadBowman0 @elonmusk We == society as a whole.
2023-04-02 13:38:54 @meamZ_MZ @ChadBowman0 @elonmusk If it's unsafe, people won't buy it, and regulations are likely to make it illegal. So, I fail to see the motivation that "someone" may have to build it.
2023-04-02 13:06:04 @kourouklides In the Middle Ages, Radical Catholic Europe was way more obscurantist than the Muslim world. Ask yourself why algebra &
2023-04-02 12:59:05 @DrorBenNaim Throughout history, (older) people have been scared of new technology, particularly new communication technology that can affect society. Those techs *empower* people &
2023-04-02 04:27:30 @anthrupad I'd submit that most of these people are not "terrified of AI." Except Stuart Russell, who is just wrong. Working on AI safety and ethics doesn't automatically make you terrified of it. I think AI safety is an important topic. But I'm not terrified.
2023-04-02 04:05:29 @DevDminGod Those are both extreme sides. They are both wrong.
2023-04-02 04:03:29 @Jeff_Aronson @elonmusk No. This is science. People should try to prove me wrong. As a scientist, I will change my mind in front of credible evidence.
2023-04-02 03:59:01 @_5ingularity @elonmusk You are extrapolating what you think you know about me. Which apparently is not very much.
2023-04-02 03:57:41 @ChadBowman0 @elonmusk It doesn't exist. So yes, right now, it's benign. Once we have at least *some* idea of how this could work, we'll be able to discuss how to make it safe. If it turns out we can't make it safe, then we won't build it. Until then, it's like we're worrying about the sex of angels.
2023-04-02 03:42:02 @rineez @elonmusk @erikbryn Where are those "generalized AI"? They don't exist. They will, but right now, they don't. And what harm might they cause once they exist? How could you tell, since you have no idea how they would work?
2023-04-02 02:30:52 @elonmusk @erikbryn There are regulations and regulating agencies for *applications* of AI, e.g. for driving assistance, medical image analysis, etc. Are you suggesting that R&D itself should be regulated?
2023-04-01 22:57:34 RT @TonyZador: These transcripts of a discussion about AI alignment, modified from a freewheeling discussion on FB 4 yrs ago w/me, Stuart R…
2023-04-01 22:55:19 @JohnnyRivers33 @LeoKanaF The wealth gap has increased in the US. But it has not significantly increased in the European Union (in fact, it has decreased in France and in a few other countries). So, it's purely a fiscal policy issue. The US political system sucks. What else is new?
2023-04-01 22:47:27 @claudes24060374 @elonmusk Elon &
2023-04-01 21:36:58 @bluehorizons290 @erikbryn Safety is very much embedded in the AI industry. All the big players have independent AI safety groups. The *usage* of AI is very much regulated. E.g. AI-based driving assistance and medical image analysis systems go through certification processes.
2023-04-01 21:17:23 Forever fall. https://t.co/EJVfF4bAhe
2023-04-01 21:14:18 RT @nntaleb: Let me be blunt. Those who are afraid of AI feel deep down that they are impostors &
2023-04-01 21:12:22 RT @tobias_rees: There is no point in being for or against AI. There is no point in opting in or out, like Italy. Soon there will be no asp…
2023-04-01 04:51:50 Okay doomer....
2023-04-01 04:50:16 RT @Spacecolonize: Come on your entire company is based on their paper.
2023-04-01 03:15:18 RT @AI4_kids: Jacob Browning and @ylecun provide a wonderful overview of the evolution of AI and the current debate around symbolic meaning…
2023-04-01 03:10:59 RT @soumithchintala: +1 to this. the amount of baseless hatred that Eliezer is spewing is toxic, and does disservice to the alignment and s…
2023-04-01 02:58:11 RT @_kainoa_: A visual cortex is the region of the brain that (together with the motor cortex) enables an organism to convert vision into m…
2023-04-01 01:55:54 @elonmusk Any technology has good sides and bad sides. It must be developed and deployed responsibly to minimize the bad side-effects. LLMs, even bad ones, are useful. Do you really think they constitute some sort of existential risk?
2023-04-01 01:15:13 Rhetoric from AI doomers is not just ridiculous. It's dangerous and unethical. https://t.co/YEiSdtOwxJ
2023-04-01 01:10:40 AI doomers reach the pinnacle of unethical behavior by calling for violent acts to prevent completely made-up risks. https://t.co/rvcTMdMts8
2023-04-01 01:07:40 You know what's unethical? Scaring people with made-up risks of a technology that is both useful and beneficial. https://t.co/BFV7pIM28f
2023-03-31 20:51:27 @scienceisstrat1 @Scobleizer @elonmusk @azeem @erikbryn No.
2023-03-31 20:49:49 @VeryBusinessPe1 Once they get older, Americans seem to be in similar health as their peers in other countries. Then again, older Americans have access to European-style "socialized medicine" in the form of Medicare.
2023-03-31 20:01:03 RT @juttaholstein1: Can deep learning systems learn to manipulate symbols? The answers might change our understanding of how intelligence w…
2023-03-31 18:54:07 Pretty much. https://t.co/52DXh6IkHA
2023-03-31 18:51:41 RT @michaelshermer: I declined to sign the letter when asked. Halting AI is ridiculous. I have read the AI doomsayer lit &
2023-03-31 18:45:19 The epitome of overreaction. https://t.co/5kW64sSkwy
2023-03-31 18:39:10 RT @MetaAI: Today, we're sharing two major advancements in our work toward general-purpose embodied AI agents: VC-1 &
2023-03-31 18:27:45 @dpkingma Startup idea: " The Actually Boring Company". We develop seamless and efficient technology that absolutely everyone finds mindnumbingly boring.
2023-03-31 18:23:03 Americans are dying at a much younger age than residents in peer countries. A fascinating set of charts demonstrating the abysmal state of affairs, derived from a Financial Times article. https://t.co/ZeVQXw3eoD
2023-03-31 15:59:11 "Robots that learn from videos of human activities and simulated interactions" A new blog post from the Embodied Intelligence group at Meta-FAIR. https://t.co/K3lbkYU8Xi
2023-03-31 15:56:34 @MelMitchell1 You and me both.
2023-03-31 12:50:48 @nazarre @GregAttilaKiss "almost custom designed to produce propaganda". The main obstacle to propaganda is not the difficulty of production but the difficulty of dissemination. Every single communication technology ever developed has, by definition, enabled the dissemination of "propaganda".
2023-03-31 12:25:31 Haha! https://t.co/IfVlOZSWE0
2023-03-31 01:48:13 RT @jkronand: An interesting new Nature paper compares fMRI recordings with activations across layers in a language model, and find evidenc…
2023-03-30 21:11:34 @differenzierend Sure. But isn't "managing such a process" what always happen in well-run democracies?
2023-03-30 20:57:51 @ID_AA_Carmack @woj_zaremba @geoffreyhinton Pretty much. You even get a hood with your PhD. Not just academia, but the whole research community. Some of my padawans run chunks of DeepMind: @koraykv @RaiaHadsell @clmt The Force is strong with them! Technically, I was already a young Jedi when I did my postdoc with Geoff
2023-03-30 18:38:37 RT @plevy: « The main debate: Does symbolic manipulation need to be hard-coded, or can it be learned? » by Jacob Browning and @ylecun ht…
2023-03-30 12:25:50 @GregAttilaKiss Nuclear warheads are designed to kill people. The New Testament tells people to stop killing each other, but has been used pretty effectively to brainwash people into killing each other. The purpose of AI is to help people become smarter. Perhaps even wiser.
2023-03-30 07:35:13 RT @random_walker: This open letter — ironically but unsurprisingly — further fuels AI hype and makes it harder to tackle real, already occ…
2023-03-30 07:31:01 RT @tdietterich: Important post from @Noahpinion. LLM-based tools have the potential to make all of us more efficient. Don't let the fearmo…
2023-03-30 07:25:02 @boazbaraktcs They banned Galileo.
2023-03-30 04:02:49 The Ottoman empire banned printed books until the 18th century, which greatly contributed to their decline from the pinnacle of science and mathematics in the Middle Ages to an intellectual backwater after the Renaissance.
2023-03-30 03:48:28 @2020science The knee-jerk reactions to new technologies, particularly new communication technologies, or new cultural movements are quite consistently present and very consistently misdirected. https://t.co/Q4LwhiSayI
2023-03-30 03:34:28 Society *was* destroyed... ...for the better. Printed books enabled the Protestant movement, and 200 years of religious conflicts in Europe. But printed books also enabled the Enlightenment: literacy, education, science, philosophy, secularism, and democracy.
2023-03-30 03:26:27 @rlacombe Also a key reason we have literacy, science, secularism, and democracy.
2023-03-30 03:24:55 RT @togelius: I don't think a six-month ban on developing models "more capable" than GPT-4 (whatever that means) would make much difference…
2023-03-30 03:20:48 Agreed. https://t.co/fRMGo8Mbem
2023-03-30 03:08:27 RT @Noahpinion: This post by @tylercowen is very good, and I also disagree strongly with one of its basic premises. We've been living in a…
2023-03-30 03:01:53 RT @perplexity_ai: Announcing Perplexity AI’s iPhone app and series A funding! Perplexity provides instant answers and cited sources on any…
2023-03-30 03:00:14 The year is 1440 and the Catholic Church has called for a 6 months moratorium on the use of the printing press and the movable type. Imagine what could happen if commoners get access to books! They could read the Bible for themselves and society would be destroyed.
2023-03-30 02:52:02 @tdietterich @yoavgo @boazbaraktcs Optimal Brain Damage?
2023-03-30 00:43:47 @ariwun @aaron_defazio Yup
2023-03-29 23:24:52 @ariwun @aaron_defazio I didn't. It's a server configuration issue.
2023-03-29 23:01:54 Hahaha! Good point, @aaron_defazio Moratorium on Development == Development in secret [which is the exact opposite of what some of the signatories are hoping for] https://t.co/X7iManBl3M
2023-03-29 02:55:20 Nope. I did not sign this letter. I disagree with its premise. https://t.co/DoXwIZDcOx
2023-03-29 02:52:37 @oising Perhaps I should tell you that almost all of my publications of the last several years have been on self-supervised learning for images and video.
2023-03-29 02:47:58 RT @randall_balestr: Supervised and self-supervised learning? Two separate methods for different cases... one might say! With @CabannesVivi…
2023-03-29 02:43:06 @conor_muldoon @TonyZador That difference only popped up in the last million years or so and is encoded in less than 8 MB of genetic information. That is awfully small.
2023-03-28 13:58:25 @HulsmanZacchary @TonyZador You can learn a hell of a lot in 20 minutes.
2023-03-28 13:56:44 @TomerLevinboim @TonyZador No. System 2 would require a bit of "innate machinery" perhaps along the lines of what I described in my position paper below. Vision/multimodal SSL would be a breakthrough, but it's way harder to do than from text. https://t.co/7ZgRtLJoMw
2023-03-28 13:53:27 @TonyZador They still have touch, which is very high bandwidth, and audio (not just for speech).
2023-03-28 13:51:01 @elonmusk Indeed.
2023-03-28 06:26:20 @Phoenix2574 The amount of information to transform chimpanzee DNA into human DNA is about 8 megabytes. It took about 5 million years. So we are talking 12 bits per year. Not much.
2023-03-28 06:22:43 @jponline77 Packed in a tiny amount of bits in the genome? In the 5 million years since humans and chimpanzees evolutionarily split, our genetic differences are a mere 8 MB (about 1% of our DNA, or 30 million base pairs).
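The back-of-envelope numbers in the two tweets above (about 8 MB of human-chimp genetic difference, roughly 12 bits of evolutionary "learning" per year) can be checked in a few lines. This is only a sanity check of the tweets' own arithmetic; the variable names are mine.

```python
# ~1% of the ~3 billion base-pair human genome differs between humans and chimps.
base_pairs = 30e6
# Each base pair carries 2 bits of information (4 possible bases).
bits = base_pairs * 2
megabytes = bits / 8 / 1e6      # ≈ 7.5 MB, i.e. "about 8 MB" as in the tweet
bits_per_year = bits / 5e6      # spread over ~5 million years ≈ 12 bits/year
```

At 7.5 MB and 12 bits/year, the code reproduces the tweet's "about 8 MB" and "about 12 bits per year" figures.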
2023-03-28 06:15:00 @davidwhogg In the first 3 or 4 months, babies have essentially no power of intervention on their surroundings. They do flail their limbs a lot. But they learn an enormous amount of background knowledge about the world from mere observation.
2023-03-28 06:11:20 @danijarh None. Animals learn world models from vision without text.
2023-03-28 06:09:38 @TonyZador As you know, you and I disagree on that. AI systems need a bit more "innate" machinery than today, but not much: something that allows them to reason and plan. What they need is the ability to perform self-supervised learning from high-bandwidth natural signals, like vision.
2023-03-28 06:02:27 Our highest-bandwidth information channel is not speech. It's vision and touch.
2023-03-28 04:35:32 RT @michaelshermer: Dear @harari_yuval @tristanharris &
2023-03-28 04:32:41 Is the US finally catching up with the rest of the developed world? https://t.co/LEHefJmKlr
2023-03-28 00:05:47 Humans don't need to learn from 1 trillion words to reach human intelligence. What are LLMs missing? https://t.co/JysSIvegX4
2023-03-27 23:55:03 RT @TobyWalsh: Can deep learning systems learn to manipulate symbols? @ylecun &
2023-03-27 23:48:06 Haha! https://t.co/h0lbD96hRQ
2023-03-27 23:43:15 @F_Sammarco Pretty good paraphrasing, with helpful details!
2023-03-27 20:08:02 RT @tonyzzhao: Introducing ALOHA: A Low-cost Open-source Hardware System for Bimanual Teleoperation After 8 months iterating @stanford a…
2023-03-27 19:34:25 RT @CrosslandTamsin: Fascinating article by @ylecun and Jacob Browning "Does symbolic manipulation need to be hard-coded, or can it be lear…
2023-03-27 19:28:00 Yup. AI is even hotter than hot. https://t.co/AimHM1MQ1Y
2023-03-27 19:08:48 @KordingLab Hierarchical planning and refinement is the best kind of planning.
2023-03-27 16:44:10 @KordingLab That's because Konrad's brain can *plan* long answers and thereby avoid the exponentially-decaying probability of not farting that would inevitably occur were he to pull words out of his backend one at a time auto-regressively
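The "exponentially-decaying probability" argument in the reply above can be made concrete with a toy calculation. Assuming (as the argument does) that each generated token independently stays on track with probability 1 − e and that an error, once made, cannot be corrected, the chance of a flawless n-token auto-regressive rollout shrinks geometrically. The function name and the 1% error rate are illustrative choices, not measured values.

```python
def p_flawless(n_tokens, per_token_error):
    """Probability that an n-token auto-regressive rollout contains no error,
    assuming independent, uncorrectable per-token errors."""
    return (1.0 - per_token_error) ** n_tokens

# Even a modest 1% per-token error rate compounds quickly with length:
p_short = p_flawless(10, 0.01)    # ≈ 0.904
p_long = p_flawless(500, 0.01)    # ≈ 0.0066
```

This is why planning a whole answer (rather than emitting tokens one at a time) changes the picture: a planner is not bound by this per-token compounding.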
2023-03-27 13:58:09 @RandomlyWalking @DuaneJRich I think it depends on what we mean by P(). P(y1,y2,y3) is the "real" joint distribution. You can choose to parameterize it as Q(y3|y1,y2,w)Q(y2|y1,w)Q(y1|w) where Q is some trainable function with parameter w. But you certainly don't have to use this factorization.
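The point in the reply above is just the probability chain rule, which holds for *any* ordering of the variables:

```latex
P(y_1, y_2, y_3) = P(y_1)\, P(y_2 \mid y_1)\, P(y_3 \mid y_1, y_2)
                 = P(y_3)\, P(y_2 \mid y_3)\, P(y_1 \mid y_2, y_3)
```

The left-to-right parameterization $Q(y_1 \mid w)\, Q(y_2 \mid y_1, w)\, Q(y_3 \mid y_1, y_2, w)$ used by auto-regressive models is therefore one modeling choice among many, not a property of the joint distribution itself.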
2023-03-27 13:51:58 @scottiev I'm certainly *not* underestimating the value of current systems for creators. These things are very useful as writing aids for poetry, prose, or code.
2023-03-27 13:49:44 @peiyong_wang @danijarh @dustinvtran @RandomlyWalking @DuaneJRich This trick was proposed years ago in the context of translation, back in the pre-transformer days when people were still using LSTM. This is a common problem of recurrent (or auto-regressive) nets where the information about the past is compressed and gets diluted over time.
2023-03-27 13:41:51 @DavesBasilisk @bleepbeepbzzz Wat?
2023-03-26 13:23:25 @Satwant_Kumar_ It's a picture I took while snorkeling in the Caribbean a couple of weeks ago.
2023-03-25 21:33:50 Colorfully stochastic parrotfish. https://t.co/apj9iFn0Gv
2023-03-25 21:30:46 @KyleCranmer New York City today: https://t.co/FFTWHXzxDE
2023-03-25 21:28:34 RT @matthieurouif: PhotoRoom has an unfair advantage to power AI commerce photography. It is having its HQ in Paris. @paulg has a great ess…
2023-03-25 20:54:49 The AI scene is a dispersive medium for people's perception of progress. Different people perceive progress with different velocities. https://t.co/FdOMmRp9rL
2023-03-25 20:51:59 @madsjw Enjoy!
2023-03-25 20:42:13 RT @scienceisstrat1: Immigrants in Canada have extraordinary levels of education. Between 53-62% of immigrants to Canada have higher ed…
2023-03-25 20:40:32 @Tahina_Spector @scienceisstrat1 @conorsen @erikbryn @amcafee @Scobleizer @pmarca This is a uniquely American phenomenon due to US fiscal policies (particularly tax cuts for higher incomes and cuts in social programs). There has been no such decoupling in continental Europe.
2023-03-25 20:36:40 A slightly new challenge to test the physical intuition of LLMs, with an ensuing discussion. https://t.co/S2bX7NW6LK
2023-03-25 18:55:00 @natfriedman Also, the question in my original tweet and its solution may have found their way into the training set that GPT-4 was fine-tuned on.
2023-03-25 18:52:08 @natfriedman GPT-4 gets it wrong at first, but then gets it right after being told it's wrong. https://t.co/D4526BCb8o
2023-03-25 18:51:51 @natfriedman Here is a slightly trickier problem. https://t.co/0jS15zMTWl
2023-03-25 18:39:11 @NandoDF Well, I was right about that one too. No one uses Pixel CNN for anything these days. Karol Gregor suggested the idea while he was a postdoc in my lab, and I talked him out of it! The idea is almost as bad as using HMMs on pixel sequences.
2023-03-25 18:29:55 @nisyron The gears are numbered 1 to 7 around the circle. If a torque in the clockwise direction were applied to gear 3, in which direction would gear 7 rotate?
2023-03-25 18:28:12 @nisyron Here is a slightly different formulation: 7 axles are equally spaced around a circle. A gear is placed on each axle such that each gear is engaged with the gear to its left and the gear to its right. ...
2023-03-25 18:26:41 @nisyron No, the answer is false. With an odd number of gears around a circle nothing can turn! It's locked in place. It was a trap.
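The parity argument behind the answer above can be written down in a few lines. A sketch, assuming ideal rigid gears:

```python
# Adjacent meshed gears must rotate in opposite directions, so directions
# alternate +1/-1 around the ring. Closing the loop requires the first and
# last gears to also be opposite, which is only possible for an even count.
def ring_of_gears_can_turn(n: int) -> bool:
    directions = [(-1) ** i for i in range(n)]  # alternate around the circle
    # the loop closes: gear n meshes back with gear 1
    return directions[-1] != directions[0]

assert not ring_of_gears_can_turn(7)  # odd ring: locked in place, as stated
assert ring_of_gears_can_turn(6)      # even ring: free to turn
```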
2023-03-25 18:21:23 @sean_vikoren @TonyZador I can't disagree.
2023-03-25 18:15:21 @CSProfKGD To clarify: there will be language models in 5 years, but they won't be auto-regressive. Because auto-regressive models are uncontrollable and suffer from exponential divergence as more tokens are produced.
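The exponential-divergence claim above has a simple back-of-the-envelope form. A sketch, assuming (unrealistically) that each generated token has an independent probability e of taking the output off track:

```python
# If each token independently stays "on track" with probability (1 - e),
# the chance of an entirely on-track sequence of n tokens is (1 - e)**n,
# which decays exponentially in the length of the generation.
def p_on_track(e: float, n: int) -> float:
    return (1 - e) ** n

assert p_on_track(0.01, 1) > 0.98
assert p_on_track(0.01, 500) < 0.01  # long generations drift almost surely
```

The independence assumption is a simplification, but it conveys why errors compound as more tokens are produced one at a time.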
2023-03-25 18:10:12 @sean_vikoren @TonyZador Rationality is all you need.
2023-03-25 18:07:56 @nisyron 7 axles are equally spaced around a circle. A gear is placed on each axle such that each gear is engaged with the gear to its left and the gear to its right. The gears are numbered 1 to 7 around the circle. If gear 3 were rotated clockwise, in which direction would gear 7 rotate?
2023-03-25 17:59:10 @HaminKeNist At least to test the claim that they possess some level of physical intuition despite being trained strictly on text.
2023-03-25 17:55:55 @rao2z They may convince themselves that they won but they would merely be fooled by testing on the training set.
2023-03-25 17:48:12 It is entirely possible that this very problem was entered in ChatGPT (perhaps because of my tweet) and subsequently made its way into the human-rated training set used to fine-tune GPT-4. https://t.co/YEHgPEquXp
2023-03-25 17:32:10 Assessments of the current crop of LLMs come in different flavors. Some are legitimate and many are totally unfair. The type of critique @TonyZador makes fun of here is unfair. Auto-regressive LLMs are very good writing aids even if they make sh*t up and can be toxic. https://t.co/IGZb3w4p1I
2023-03-25 15:45:09 Seriously? https://t.co/PshpqygDgc
2023-03-25 15:31:44 RT @KyleCranmer: The @MooreFound was very influential in shaping academic data science, which had a huge impact on my trajectory and many o…
2023-03-25 07:47:27 RT @raphaelmilliere: @ylecun closing his presentation with some conjectures #phildeeplearning https://t.co/K0biNIKY45
2023-03-25 07:34:57 RT @raphaelmilliere: Yann LeCun kicking off the debate with a bold prediction: nobody in their right mind will use autoregressive models 5…
2023-03-25 07:13:17 RT @JacksonKernion: NYU's Philosophy of Deep Learning conference starts! @ylecun arguing that AI needs sensory perception https://t.co/wGU…
2023-03-24 20:15:43 Relevant to tonight's debate. https://t.co/CoCsKDbdiA
2023-03-24 20:15:05 Debate at 5:30 EST today: “Do large language models need sensory grounding for meaning and understanding?” Debaters: Jacob Browning, David Chalmers, Brenden Lake, Yann LeCun, Gary Lupyan, Ellie Pavlick. Livestream: https://t.co/90ZCqIEEYk Event: https://t.co/zxDVr3m1Bc
2023-03-24 20:13:59 @davidchalmers42 Argh! fat fingers.
2023-03-24 20:09:34 Zoom video stream link: https://t.co/90ZCqIEEYk
2023-03-24 20:08:33 Debate at 5:50 EST today: “Do large language models need sensory grounding for meaning and understanding?” Debaters: Jacob Browning, David Chalmers, Brenden Lake, Yann LeCun, Gary Lupyan, Ellie Pavlick. https://t.co/zxDVr3m1Bc
2023-03-24 03:38:28 @X_Lord @MartyGargoyle @_i_am__AI Before WWII, America was isolationist and coming out of a major financial crisis. Its military technology sucked. But by the end of WWII, it was churning out P51, P38, B29, jets, good radars, nuclear bombs, and had built over 100 aircraft carriers.
2023-03-24 01:54:18 @yolaplace Of course, we must look at technological progress as a perpetual renaissance. But that can't happen if liberal democracies get overrun by authoritarianism, foreign or domestic.
2023-03-24 01:45:22 @hexian129 The proportion of people dying of malnutrition per decade has been going down. https://t.co/WZTIrN7JTd https://t.co/JzuQ3LjrXv
2023-03-24 01:27:19 @yolaplace Why?
2023-03-24 01:24:43 @_i_am__AI The "safety" against the Nazi invasion of Europe and the Japanese invasion of East Asia was superior military technology and industrial might. But in a well-run civilian society, there is no need for guns.
2023-03-24 01:11:22 RT @JitendraMalikCV: I delivered the 110th Annual Martin Meyerson UC Berkeley Faculty Research Lecture on March 20, 2023. https://t.co/xQKm…
2023-03-24 01:08:42 The WSJ talks about recent progress in applying AI to protein structure prediction, following the publication of FAIR's ESMFold paper in Science. https://t.co/rpBVVtN2qn
2023-03-24 00:30:18 @toarchkumar No.
2023-03-24 00:29:21 @BenOgorek I hadn't. But yes, I was thinking about that too, among others.
2023-03-24 00:27:15 @jeffrey_bowers Where did I mention the USA? The US democracy is deeply flawed. And the track record of Republican governments as defenders of democracy is pretty abysmal.
2023-03-24 00:19:47 @mraginsky @raphaelmilliere Not mentioning *actual* black holes. https://t.co/ghQypiu8C0
2023-03-24 00:18:02 @mraginsky @raphaelmilliere I'm not the only one to have pointed out the obvious connection between the insufferable density of bagels and black holes. See "everything, everywhere, all at once" https://t.co/QRQdJ4HNPG
2023-03-24 00:11:52 @james_douma Sorry to disappoint. But have you ever heard of WWII?
2023-03-24 00:07:19 @hexian129 Fertilizer is the main reason why very few people die from famines today.
2023-03-24 00:01:01 @abhishek_s_1 I was against the Patriot Act (and very much against the invasion of Iraq). The US Republican Party has not been known as a defender of democracy since WWII. Pretty much the opposite. Even domestically. Particularly since it has become the party of Trump.
2023-03-23 23:55:42 @d3vtoolsmith Yet, some have inexplicably characterized military applications of AI as "weapons of mass destruction"...
2023-03-23 23:53:16 @hexian129 Like many powerful technologies, AI can simultaneously be a threat to freedom &
2023-03-23 23:48:10 @justin_abrams1 Si vis pacem, para bellum.
2023-03-23 23:33:57 Quite a few AI folks used to be adamant that AI should never be used for military applications. Since the invasion of Ukraine by Putin, some have changed their tune. Yes, AI can be misused by authoritarian govts. But the defense of democracy against authoritarianism needs AI.
2023-03-23 23:25:24 @mraginsky @raphaelmilliere It's not like Kouign Amann can gravitationally collapse into black holes. Unlike, say, bagels.
2023-03-22 00:29:16 RT @gabrielpeyre: The heat can be applied to diffuse probability density (in particular, maintains positivity and unit mass). It correspond…
2023-03-22 00:19:06 @AVMiceliBarone @nobliver Sydney is an Auto-Regressive LLM and hence does not possess any objective to align.
2023-03-21 14:07:30 @abp4_ankit Then we asked how can we minimally perturb a 2 to produce a perfect 5. We got an image that looked more like a 2 than a 5. It's a big space out there, and the network is only trained on a small, low-dimensional sliver of it.
2023-03-21 14:05:42 @abp4_ankit Actually, we discovered adversarial samples within the 1st year of playing with ConvNets. We asked "what is the network's ideal idea of a 5?" Starting from a random input, we computed an input that produces a perfect "5" output through gradient descent. We got total garbage. ...
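The experiment described in this thread can be mimicked with a toy model. A hypothetical sketch (the linear "classifier" W stands in for the ConvNet; nothing here reproduces the original setup):

```python
import numpy as np

# Starting from a random input, follow the gradient of the classifier's "5"
# score with respect to the *input*. The result maximizes the score without
# looking anything like a digit -- the model is only constrained on the tiny
# sliver of input space it was trained on.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784)) * 0.01  # made-up linear "classifier"

x = rng.standard_normal(784) * 0.1         # random starting "image"
target = 5
for _ in range(100):
    grad = W[target]                       # d(score_5)/dx for a linear model
    x = x + 0.1 * grad                     # gradient ascent on the input

assert (W @ x).argmax() == target          # a "perfect 5"... to the model
```

With a real network the ascent direction changes at every step, but the conclusion the thread draws is the same: the input that maximizes the output is typically garbage.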
2023-03-21 13:55:04 @Felix72784942 @DrTonyRobinson @Plinz The Mongol invasions did not stop people from using horses.
2023-03-21 13:53:17 @charlieifrah @drmichaellevin @MatthewEGunter @davidchalmers42 It's funny when brilliant aeronautics people and philosophers compare billions of years of cellular evolution to their primitive flying contraptions.
2023-03-21 13:49:52 @dungeonsector I did not claim that LLM will vanish. I claimed *AUTO-REGRESSIVE* LLMs will vanish. The auto-regressivity is their Achilles heel.
2023-03-21 11:45:42 @DrTonyRobinson @Plinz Did the domestication of the horse and ox cause civil unrest because humans were used to being the strongest machines around? I work at a research lab with incredibly smart colleagues and I'm very much used to not being the smartest machine around.
2023-03-21 11:39:33 RT @scienceisstrat1: It’s just a start. But the world is beginning to learn how to decouple economic growth from CO2 emissions. https://t.c…
2023-03-21 11:24:36 "net adjusted household disposable income per hour worked" https://t.co/jeZcDilLvE
2023-03-21 04:47:23 @Jeff_Aronson Make it produce outputs that explicitly optimize a set of objectives, instead of reactively spewing one token at a time auto-regressively.
2023-03-21 04:42:11 @0xYomayo Apes don't get to design and hardwire humans' intrinsic objectives (they are the result of evolution). But we get to design and hardwire the intrinsic objectives of AI systems.
2023-03-21 04:38:42 @dungeonsector As I've said multiple times, Auto-Regressive LLMs are unalignable, and uncontrollable. They must (and will) disappear for that reason.
2023-03-21 04:36:52 @_colinricardo Why is Stuart unable to understand that whoever designs a robot with such a stupid objective could very easily add a term in the objective amounting to "don't run people over"?
2023-03-21 04:18:32 @ArsCrypta @moskov I'm a militant atheist.
2023-03-21 04:17:38 @Mario_Gibney @moskov We get to *design* and *hardwire* objective functions for AI. That makes the alignment problem a hell of a lot easier to solve than with people and corporations whose intrinsic objective functions are fixed (by evolution or by capitalism).
2023-03-21 00:45:47 @dallairedemers The printing press allowed people to read the bible, which enabled the protestant movement, which caused 2 centuries of religious conflicts in Europe. It also enabled the emergence of the Enlightenment, science, &
2023-03-20 20:32:09 @moskov For centuries, we've been designing objectives for superintelligent entities (laws for corporations). For millennia, we've shaped the objectives of our children so they behave in society. All of this without actually being able to hack the intrinsic objectives directly.
2023-03-20 20:28:50 @moskov There are many rational &
2023-03-20 18:29:01 I think that the magnitude of the AI alignment problem has been ridiculously overblown &
2023-03-20 18:22:58 @Sebasti40317138 Depends a hell of a lot on the goal(s).
2023-03-20 17:07:04 @Ifeoluwadavids @realDonaldTrump Probably because it is.
2023-03-20 17:05:58 @primalpoly I'm well aware of the literature on the AI alignment problem. I just think that the magnitude of the problem is ridiculously overblown, and our ability to solve it widely underestimated. For this, I've been called stupid before, very publicly so. That's OK, I'm used to it.
2023-03-20 16:58:48 @danbri @GaryMarcus The first version of our paper was written in December 2018.
2023-03-20 16:45:21 @nearcyan @nobliver I'm not talking about a learning objective. I'm talking about an *inference* objective. I.e. an objective that the system optimizes with respect to every output or action sequence it produces.
2023-03-20 16:43:41 @chriswaterguy @nobliver I don't know what you mean by "we can't understand..."
2023-03-20 16:41:01 @notbyintent @davidchalmers42 He is not the first one to have had the idea that learning to predict (or to fill in the blanks) is on the path to better AI. By a very long shot. @geoffreyhinton among others has been promoting this idea for over 40 years. This was the main motivation behind Boltzmann Machines.
2023-03-20 16:36:32 @MatthewEGunter @davidchalmers42 @drmichaellevin That's false. Tons of animal species are pretty smart, yet never meet their parents and rarely interact with other members of their species.
2023-03-20 16:34:51 @loopuleasa @davidchalmers42 You got it backwards. Humans and animals are extremely good at modeling natural percepts. Modeling text/language is quite easy in comparison because it's *designed* to be easily grokked by networks of neurons.
2023-03-20 16:25:40 RT @scienceisstrat1: Smoking causes cancer. Reducing cigarette consumption may be the greatest public health success of modern times. http…
2023-03-20 16:13:49 @baturinsky No. My benevolent defensive AI will be better at destroying your evil AI than your evil AI will be at hurting humans.
2023-03-20 16:02:29 @macaintsleeping Because they would have no desire to do anything else. Why? Because we will engineer their desires.
2023-03-20 15:33:20 (note, this article is from 2019)
2023-03-20 00:34:26 @JrKibs The hallucination problem is specific to Auto-Regressive LLM architectures. My proposal is to move away from AR-LLMs towards architectures that can reason and plan.
2023-03-20 00:32:15 @TheHeroShep You mean, like C3PO?
2023-03-20 00:30:40 @nobliver How could the aims possibly be "inscrutable" since *we* would be the ones who would design and hardwire those aims in the form of objectives.
2023-03-20 00:28:11 @entirelyuseles @profoundlyyyy If it does have desires, it will be through objectives that *we* hardwired into it or that *we* trained to do the Right Things.
2023-03-20 00:21:19 @davidchalmers42 As I'm fond of saying: prediction is the essence of intelligence. The very idea of Self-Supervised Learning is that intelligence emerges from learning to predict (or to fill in missing information). But predicting natural percepts is much more complicated than predicting words.
2023-03-19 23:54:16 Calm down. Human-level AI isn't here yet. And when it comes, it will not want to dominate humanity. Even among humans, it is not the smartest who want to dominate others and be the chief. We have countless examples on the international political scene. https://t.co/Eb6NiaRfzd
2023-03-19 23:40:33 @benedictevans Ultracrepidrian? https://t.co/TUqyLMmeBz
2023-03-19 12:37:06 RT @ProfNoahGian: In light of the many debates and discussions surrounding GPT4, I highly recommend taking a look at this @sciam article fr…
2023-03-19 11:53:50 RT @BaghliNacym: Nobody has done more for the history of Convolutional Neural Networks than @ylecun Yann LeCun! Here are his biggest contri…
2023-03-19 00:16:25 RT @scienceisstrat1: The last decade has been a pivotal one in the AI revolution Cc: @ylecun @Scobleizer @erikbryn @amcafee @paulg @Davi…
2023-03-18 02:53:19 @ProfNoahGian @alexrives @MetaAI Yes, that one.
2023-03-18 02:52:47 @Extended_Brain @ilyasut That's severely incomplete and low bandwidth.
2023-03-18 02:26:42 RT @MetaAI: New in @ScienceMagazine — Meta AI researchers developed a breakthrough model for protein folding by using a large language mode…
2023-03-17 15:53:20 RT @Experiential_AI: COMING SOON: Chief AI Scientist at @Meta @ylecun leads our May 24, 2023 Distinguished Lecturer seminar Don't wait to…
2023-03-17 15:29:03 RT @alexrives: Metagenomic proteins are some of the least understood proteins on earth. Now with AI it is becoming possible to see deep int…
2023-03-17 15:20:52 @grbradsk Haha!
2023-03-17 13:07:51 @artistexyz @ScienceMagazine @ebetica @alexrives @MetaAI Auto-regressive LLMs hallucinate. ESMFold is *not* an auto-regressive LLM. Hallucination is an intrinsic property of auto-regressive generation.
2023-03-17 12:18:41 PhotoRoom is a French startup that has been using deep learning for years to help vendors make product photos. They have now developed a fast generative model to produce nice backgrounds on demand without requiring text prompts. Congrats @matthieurouif and team! https://t.co/sD2Jf4OXZg
2023-03-17 00:46:05 RT @ebetica: Our paper on protein folding with a language model is out in Science!
2023-03-17 00:37:04 The WSJ interviews @alexrives and comments on the Science paper about ESMFold protein structure prediction system by the @MetaAI - FAIR Protein team. https://t.co/itwl4fugkZ
2023-03-17 00:31:47 @KirkGraff No, even with the rest of the paragraph, it's still ridiculous.
2023-03-16 23:13:22 The power of open research and open source. https://t.co/rOSGAL6zoc
2023-03-16 23:12:10 RT @_akhaliq: alpaca-lora: Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware github: https://t.co/NB6nrDX…
2023-03-16 20:08:01 Speedy protein structures from single sequences with ESMFold in @ScienceMagazine ! "Evolutionary-scale prediction of atomic-level protein structure with a language model" by @ebetica, @alexrives and collaborators from the protein group at @MetaAI - FAIR. https://t.co/QeIvHloc3U https://t.co/6pu7iwpppb
2023-03-16 19:13:48 RT @pabbeel: Super-excited to kick off S3 of @therobotbrains podcast with Yoshua Bengio. We discuss LLMs, Higher-Level Cognition, Causalit…
2023-03-16 19:08:40 This critique of AI gets absolutely everything wrong. Quote: "There have been no major breakthroughs in the academic discipline of artificial intelligence for a couple of decades" What ??? Seriously ??? https://t.co/H45Emp4aAl
2023-03-16 18:44:46 @TonyZador A terrible example would be Ex Machina. This movie gets absolutely everything wrong. I guess it would be a good movie to discuss how every single one of the biggest fears about AI is wrong.
2023-03-16 18:42:10 @TonyZador Her (by Spike Jonze)
2023-03-16 14:40:16 @mattturck @yoavgo @mmitchell_ai I'll confirm on your LinkedIn profile.
2023-03-16 12:31:14 @sudhirPyadav @ilyasut The knowledge contained in language is superficial. https://t.co/XK6SdxRGjy
2023-03-16 12:10:56 @cirrus_shakeri @ilyasut It could be a virtual body in a simulated environment.
2023-03-16 12:08:25 @DrYousefSharrab No.
2023-03-16 12:07:49 @BreezyBadger_ Yes, you missed all of it. 1. The LLaMA inference code is open source: https://t.co/fGCkA9Mol0 2. The pre-trained weights can be obtained by AI researchers upon request: https://t.co/YQxIVBzbHS 3. Someone obtained the models through (2) and posted them on 4chan.
2023-03-16 11:33:24 @andriy_mulyar @sleepinyourhat @srush_nlp @chrmanning @mdredze @ChrisGPotts 1. Get LLaMA 2. Future AI systems that are factual (do not hallucinate), can use tools, have physical intuition, can reason and plan, will have a very different architecture from the current crop of Auto-Regressive LLMs. Find out what it is.
2023-03-16 11:32:36 1. Get LLaMA 2. Future AI systems that are factual (do not hallucinate), can use tools, have physical intuition, can reason and plan, will have a very different architecture from the current crop of Auto-Regressive LLMs. Find out what it is. https://t.co/bzPjRHW3U2
2023-03-16 01:15:44 @grbradsk @kabirevoknow The latter.
2023-03-16 01:12:43 Very nice article by Craig Smith in IEEE Spectrum about the debate on the power and limitations of LLMs between (among others) @ilyasut and me. Do AI systems ultimately need to be grounded in reality, not merely learn from language? I say yes. https://t.co/Y8yWFYIyKG
2023-03-15 21:36:59 @CriticalAI Ou Bobby Lapointe.
2023-03-15 21:36:08 @gael_duval Le nôtre s'appelle LLaMA
2023-03-15 20:03:17 RT @MetaAI: The HM3D-Sem dataset is free and available to use with FAIR's Habitat simulator to train embodied agents at scale for semantic…
2023-03-15 19:47:57 The availability of #LLaMA evokes fond memories of Jeff Minter's 1991 psychedelic Amiga game Llamatron. LLaMA and Llamatron have something in common: they are fast. https://t.co/kDlxMVXAHN https://t.co/scKTZ2Jzot
2023-03-15 17:41:51 I don't mean to make a bad taste joke, but pronouncing GPT-4 in quasi-French ("gé pé té for") sounds *very* awkward.
2023-03-15 17:11:04 @visarga @kabirevoknow Bach cantatas is my go-to music when I'm on a plane. That and John Coltrane.
2023-03-15 10:37:18 @Gil_et_Jo @kabirevoknow I'm a jazz fan, and only wish I were an accomplished jazz musician. Unless you would consider whistling a solo as an accomplishment. I enjoy cooking, but pastries aren't my specialty.
2023-03-15 10:31:51 @fredodurand @kabirevoknow I seriously doubt long-distance running will be happening.
2023-03-15 10:29:39 @walter_h_g @kabirevoknow Perhaps for values of "long-distance" <
2023-03-15 10:28:37 @micheal_nyaga @kabirevoknow I can't imagine any parallel world in which I'm a fan of long-distance running. It would have to be an orthogonal world.
2023-03-15 10:27:15 @visarga @kabirevoknow I'm a fan of a whole lot of BWVs.
2023-03-15 10:25:40 @_rockt @kabirevoknow Watching me making an attempt at long-distance running would probably be hilarious, though not as unpleasant as hearing me play the accordion.
2023-03-14 23:37:28 @kabirevoknow Both of the "fun facts" are completely wrong
2023-03-14 22:16:25 RT @raphaelmilliere: We will host a pre-conference debate on Friday, March 24th on the question: "Do Language Models Need Sensory Grounding…
2023-03-14 21:53:47 RT @MelMitchell1: This is an insightful article about LLMs (by @Jake_Browning00 and @ylecun): https://t.co/2vJPqjLLCV
2023-03-14 18:14:47 RT @lxbrun: Today we are releasing an incredible product (yeah, I'm not biased): Nabla Copilot! Healthcare systems are collapsing around…
2023-03-14 18:03:42 RT @percyliang: Lack of transparency/full access to capable instruct models like GPT 3.5 has limited academic research in this important sp…
2023-03-14 18:01:58 RT @davidchalmers42: our long planned conference on the philosophy of deep learning is coming March 24-26 at NYU, starting with a debate on…
2023-03-14 17:57:05 RT @NablaTech: Today’s the big day! is live for all doctors. The first medical note-generation tool, powered by AI. Ba…
2023-03-14 16:08:31 @unsorsodicorda The EU still manages to build major infrastructure projects. You know, like fast trains.
2023-03-14 16:03:45 Next week. https://t.co/1FcUUrHoET
2023-03-14 16:01:18 Philosophy &
2023-03-14 15:47:34 The US is mired in legal molasses. This applies to government decision making, infrastructure projects, and even product rollouts by large companies. It's not a recent phenomenon. Decades ago, liability issues essentially killed the non-commercial aviation industry in the US. https://t.co/ca1RkQedUN
2023-03-14 15:42:08 An account of the current GenAI craze. https://t.co/rE3KovBNDV
2023-03-14 01:38:30 RT @yanndubs: Excited to share this demo of Alpaca Highlights: ~GPT3.5 performance for <
2023-03-14 01:38:23 RT @tatsu_hashimoto: Instruction-following models are now ubiquitous, but API-only access limits research. Today, we’re releasing info on A…
2023-03-13 21:48:46 @AlexanderFleiss @pmarca Thank you for the kind words.
2023-03-13 21:32:33 @pmarca Counterpart: Wheel: OMG this is going to destroy society. People will become weak. Book: OMG this is going to destroy society. People will be able to learn stuff and think for themselves. Gears: OMG, this is going to destroy society. Machines will take our jobs. Computer: OMG ...
2023-03-13 21:26:36 Interesting exercise. https://t.co/MGtiHxr6XN
2023-03-13 11:30:35 RT @gabrielpeyre: Parabolic PDEs (e.g. heat) smooth out singularities. Hyperbolic PDEs (e.g. wave) displace singularities. https://t.co/MD3…
2023-03-12 00:24:08 RT @GuillaumeLample: LLaMA 65B can run on a MacBook! With a different model architecture it could probably run quite faster (we didn't use…
2023-03-11 22:10:48 RT @ai__pub: // Toolformer Podcast: Preview // Today I'm interviewing the Toolformer authors! LLMs like Bing (and soon, ChatGPT) can use…
2023-03-11 12:29:54 @pmarca But yeah, LLMs will not destroy the academic publication system, contrary to what some folks have claimed. In fact, paper quality might improve because of it (I'm talking about style, not content).
2023-03-11 12:27:10 @pmarca And you rarely know right away which ones are going to spark a new avenue. It often takes several years before an idea comes to fruition in practice. A good recent example is diffusion models.
2023-03-11 12:24:49 @pmarca It's a bit like music. A small number of papers have a huge influence. A good number bring a significant stone to the edifice. A large number propose new applications. Most have *some* interesting nugget. Many papers don't amount to much, but not as many as you might think...
2023-03-10 22:18:30 RT @IACR_News: #ePrint SALSA PICANTE: a machine learning attack on LWE with binary secrets: C Li, J Sotáková, E Wenger, M Malhou, E Garcelo…
2023-03-10 22:18:16 RT @KristinLauter: Very excited about our new #SALSA paper! big improvements for using Machine Learning to attack Post-Quantum Crypto (latt…
2023-03-10 22:16:42 RT @Jake_Browning00: A piece by @ylecun and I. We argue conversation is more than just words
2023-03-09 19:54:07 RT @raphaelmilliere: Another day, another opinion essay about ChatGPT in the @nytimes. This time, Noam Chomsky and colleagues weigh in on t…
2023-03-09 19:28:14 RT @scienceisstrat1: The geography of innovation Silicon Valley is still dominant, but the rest of the world is rising, especially Ch…
2023-03-09 17:48:17 RT @astro_wassim: After 1.5 years of hard work, I am thrilled to share with you Φ-SO - a Physical Symbolic Optimization package that uses d…
2023-03-09 05:10:21 RT @tdietterich: Nice discussion in this article. The whole concept of a chat bot seems broken. We expect a "bot" to be an agent with the k…
2023-03-08 13:26:51 A new paper in @NoemaMag by @Jake_Browning00 and me (mostly Jake) on chatbots, social norms, and human expectations. https://t.co/MrJYVNOAVU
2023-03-07 00:30:27 RT @patchurchland: Jumping spiders have also been shown to learn, yes, learn. No cortex, but..... Spider With Three Super Powers | The Hunt…
2023-03-07 00:20:17 RT @NYUDataScience: Join us in congratulating Assistant Professor of #datascience and #computerscience at CDS and @NYU_Courant Rajesh Ranga…
2023-03-07 00:10:23 RT @c_caucheteux: Our paper is out in Nature Human Behaviour ‘Evidence of a predictive coding hierarchy in the human brain listening to…
2023-03-07 00:06:52 Training robots to imitate behaviors with 1-minute demonstrations. From @LerrelPinto 's group at @nyuniversity https://t.co/0yh54rK05L
2023-03-06 18:57:08 @augustwester Nice.
2023-03-06 15:42:32 @lizstocks @chris_jwala @pmarca I studied EE, specializing in VLSI design and control. I took a lot of math and physics. My PhD is in "AI" but did not involve studying what North-American universities consider the "core" of computer science (systems, algorithms, complexity theory, etc).
2023-03-06 15:35:55 New paper on VICReg-style self-supervised learning using information theory machinery. Main tricks: network is deterministic locally linear &
2023-03-06 01:33:41 @chris_jwala @pmarca My degree was in Electrical Engineering. I never actually studied computer science. Software technology changed radically in the 40 years since I graduated. Machine learning did not exist as a field when I did my PhD.
2023-03-05 22:18:53 @fauxdinger I have a NeurIPS paper with Seth Lloyd. Does that count? https://t.co/AaozOZsRhm
2023-03-05 22:13:09 That invited talk at COLT 2013 indirectly caused MobilEye to start using ConvNets for its driving assistance system. After hearing the talk, Shai Shalev-Schwartz started a sabbatical at MobilEye and convinced them to use ConvNets. Slides: https://t.co/LDiFQwPFu1 https://t.co/6kfr8dhdCS
2023-03-05 21:53:44 @Sulla2389 @pmarca Not nearly as much as in the US. US-made drugs are N times less expensive in Europe than in the US. Why? European single-payer systems negotiate drug prices. The US has a law that *specifically* forbids Medicare from negotiating drug prices. That's corruption, plain &
2023-03-05 20:39:31 @pmarca . @erikbryn says the effect of a technological (r)evolution on productivity takes 15 to 20 years. But for AI, I'm not sure when to start counting.
2023-03-05 20:25:16 @ArthurB @andrewgwils Marcello.
2023-03-05 20:22:07 RT @gabrielpeyre: Reproducing Kernel Hilbert spaces define norms on functions so that solutions of regularized fitting problems are linear…
2023-03-05 16:05:16 @andrewgwils I get chills with a lot of Bach pieces. Oboe almost always does it for me.
2023-03-05 16:00:24 Cute. https://t.co/68VWESSE4a
2023-03-05 15:50:21 @pmarca Your red/blue analysis is very US centric. Arguably, European social democracies have not seen such increases in education, healthcare, and childcare because *they regulate more* (not less). The US has done a *terrible* job at regulating these things in a half-ass way. 2/2
2023-03-05 15:47:02 @pmarca AI certainly won't cause lasting unemployment. But technological evolutions displace jobs: the faster they take place, the more people are (temporarily) left behind because their skills are outdated for the new economy. Workforce retraining is of the essence. 1/2
2023-03-05 15:34:32 @balazskegl Religions, like all superstitions, are a case of causal inference going haywire. Our desire to find causal explanations for everything drives us to invent causes for unexplained or unpredictable phenomena. But inventing all-powerful deities as causes violates Ockham's Razor.
2023-03-05 15:27:27 @balazskegl Hahaha, that is ironic indeed.
2023-03-05 15:24:00 RT @msalbergo: Our paper on a general framework for efficiently building continuous normalizing flows between any distributions has been ac…
2023-03-05 15:23:47 @msalbergo @DaniloJRezende @KyleCranmer @FrankNoeBerlin @wgrathwohl Very cool.
2023-03-05 15:15:21 @Roozbeh_Sanaei2 Which is why auto-regressive LLMs are a terrible model of thought. They do seem to be a good model of language fluency. AR-LLMs are like this tiny piece of the brain that controls speech production called the Broca area. What's missing is the entire prefrontal cortex.
2023-03-05 15:06:06 @drorhilman No.
2023-02-27 13:44:09 Entities that throw our deepest thoughts back at us: a common theme in fiction, from Shakespeare's The Tempest, to the 1950's space opera Forbidden Planet, Tarkovsky's 1972 film Solaris, and several others after that. https://t.co/4zCysBvOqh
2023-02-27 13:42:01 @CadeMetz Entities that throw our deepest thoughts back at us: a common theme in fiction, from Shakespeare's The Tempest, to the 1950's space opera Forbidden Planet, and Tarkovsky's 1972 film Solaris.
2023-02-27 12:12:40 RT @DGLGraph: DGL 1.0 has arrived! Huge milestone of the past 3+ years of development. Check out the blog for the release summary and the…
2023-02-20 17:08:10 @togelius Clearly, there are generative models that are not used to "generate" anything complicated. For example, any classifier that uses Bayes rule P(y|x) = P(x|y)P(y) / P(x). The P(x|y) model is generative.
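The Bayes-rule classifier described above can be made concrete with a toy Gaussian naive Bayes sketch (all function names here are hypothetical, not from any tweet): the per-class model p(x|y) is the generative component, and classification just applies Bayes rule, dropping the shared constant P(x).

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Fit per-class Gaussians p(x|y) and priors p(y): the generative part."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, x):
    """Classify via Bayes rule: argmax_y p(x|y) p(y). P(x) is the same for all y."""
    best, best_score = None, -np.inf
    for c, (mu, var, prior) in params.items():
        # log p(x|y=c) under an axis-aligned Gaussian, plus log prior p(y=c)
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if log_lik + np.log(prior) > best_score:
            best, best_score = c, log_lik + np.log(prior)
    return best

# Toy data: two well-separated 2D clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_gaussian_nb(X, y)
print(predict(params, np.array([5.0, 5.0])))  # falls in the class-1 cluster
```

Nothing here "generates" images or text, yet the p(x|y) model is generative in the technical sense: it models the distribution of the observed variable x.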
2023-02-20 17:01:31 The NYU Center for Data Science was announced 10 years ago today. In 10 years, NYU CDS has grown tremendously and now essentially operates as a department, with PhD &
2023-02-20 15:09:28 OK, I lied. #1 was not assembly but 6502 hexadecimal machine code.
2023-02-20 15:03:27 @amaralibey @togelius No. If the model doesn't produce *observed* variables, it's not generative. Embeddings are hidden.
2023-02-20 15:02:07 @togelius You could. In that case, "generative" would not just characterize the architecture but the combination of the architecture and the inference procedure. It's like a graphical model / factor graphs: you infer the unknown variables by finding the value that maximizes the likelihood.
2023-02-20 14:48:10 @Nicolas99848452 @wangilisasi Caffe was written at UC Berkeley in C++, inspired from Lush (our DL framework with Lisp front-end). Caffe2 was written in C++ to run neural nets in production at FB. But most people at FAIR were using Torch (with the Lua front-end). Eventually, we all standardized on PyTorch.
2023-02-20 14:28:05 @the_dmoti No, thank you.
2023-02-20 14:27:50 @taneemishere Good point.
2023-02-20 14:22:43 @McAllesterDavid @nlpnoah Our hippocampus can store a lot more than 4096 tokens, though.
2023-02-20 14:06:08 @togelius It has already happened with cartoons. Try getting any of the original Tex Avery cartoons from the 40s and 50s.
2023-02-20 14:04:19 @togelius Joint Embedding Architectures (e.g. Siamese nets) are non generative. They can capture dependencies between x and y but cannot "generate" y from x, and certainly cannot provide an estimate p(y|x). Almost all successful SSL methods in image recognition use JEA or JEPA.
2023-02-20 04:52:16 @Arian_Khorasani We were using a Lisp interpreter/compiler that we wrote as an interactive front-end language to our neural net simulator. https://t.co/3yRKkuK4kI
2023-02-20 04:48:51 @wangilisasi Never had any use for it.
2023-02-20 04:45:03 @jasonfi There was nothing else I could afford in 1977.
2023-02-20 04:42:17 @MrSteph8 No, not really.
2023-02-20 04:41:24 @tweet_prat Nope.
2023-02-20 04:41:11 @TonyZador @patrickmineault I don't think it's worth storing in the genome. Any unsupervised learning procedure can learn V1-style oriented edge detectors within minutes.
2023-02-20 04:39:11 @KnutarMike Oh yeah, I taught myself to design CMOS digital circuits in high school, before programming.
2023-02-20 04:36:58 I would have expected a kink in the log plot about 10 years ago. But the log plot is pretty linear, with a very mild kink in the mid-2000s. This means that the growth is exponential. https://t.co/97q0pIkk8K
2023-02-20 04:28:00 Oh, I forgot Prolog, somewhere between Pascal and Forth.
2023-02-20 04:25:58 My favorite: Lisp.
2023-02-20 04:21:58 @MrSteph8 During the DjVu project, in the late 1990s, we had scanned, compressed, &
2023-02-20 04:14:05 @gemhodlr Oh, I don't have a crypto bro circle, thankfully. Just random people making comments on my tweets.
2023-02-20 04:05:51 1. Assembly 2. Basic 3. Fortran 4. Pascal 5. Forth 6. C 7. Lisp 8. C++ 9. Javascript 10. Lua 11. Python https://t.co/5Q5W4ppv58
2023-02-20 03:58:07 RT @cdf1530: At the Collège de France, #courses are #free &
2023-02-20 03:55:57 @McAllesterDavid @nlpnoah LLMs in their current form are stateless. Their "state" is entirely determined by the prompt, hence immaterial.
2023-02-19 22:29:44 Govt: I can assure you, we have top men working on it. Scientists: who? Govt: top .... men. https://t.co/yBjY6IdVVR
2023-02-19 22:27:30 RT @zicokolter: Generative models and P vs. NP: A clickbaity thread An important point that seems missing (as far as I've seen) in the d…
2023-02-19 22:26:18 RT @tdietterich: I am very grateful to FAIR for leading the way and supporting the world-wide deep learning research community.
2023-02-19 22:26:15 @tdietterich You are most welcome, Tom. There is self-interest in this openness: a rising tide of AI progress lifts all boats.
2023-02-19 22:24:23 RT @rao2z: Agreed. Between late 90's and 2013, when researchers joined Google, they would disappear a bit behind iron curtain. (Apple, kn…
2023-02-19 17:26:29 @rubenxela The worst are: - the Cassandras promising apocalypse, - the zealots promising the moon - those who criticize, rant on about limitations everyone already knows, and claim to have "the solution" despite never having contributed anything at all.
2023-02-19 16:23:42 RT @robertarail: Struggling to keep up with all the recent papers on Augmented Language Models? Check out our new survey on augmenting LLM…
2023-02-19 16:17:24 @Gregdt1 No need to go to China. The French company Haffner Energy turns used oils into hydrogen and biofuel. https://t.co/fHAt4VBAVc
2023-02-19 15:49:23 @elonmusk @MKBHD Just use WhatsApp
2023-02-19 15:41:43 @taneemishere @SebastianSeung Don't worry. I'm not afraid. I'm merely quoting HAL9000 from "2001: A Space Odyssey" being afraid of "dying" as his memory modules are slowly being disconnected one by one.
2023-02-19 15:39:21 RT @important_paper: Augmented Language Models (ALMs) are LLMs with enhanced reasoning skills &
2023-02-19 15:17:34 @Aapef A homemade PyTorch script that uses standard libraries to align the photos.
2023-02-19 15:16:11 @other_musings Light pollution is horrible in my NJ suburb. So I use a narrowband filter that only lets through 4 wavelengths that many nebulae emit (ionized hydrogen alpha &
2023-02-19 15:06:50 @MarioRascn6 Nice! My picture was shot with a narrowband filter to reduce the effects of light pollution (which is pretty awful in my NJ suburb). So it's more tenuous than yours.
2023-02-19 15:03:56 @dimfwi Wikipedia Seriously, measuring such distances is pretty hard. https://t.co/PQJuW9b4IC
2023-02-19 15:01:46 @_mishy 1. Select the good shots. 2. Align the shots and average them. 3. Manually correct the colors in the resulting image. #1 is done with a custom Python script. #2 is done automatically by a custom PyTorch script. #3 is done manually in Gimp. It takes maybe 20 minutes overall.
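The align-and-average step (#2) can be sketched in a few lines of NumPy. A minimal toy version, assuming frames are already grayscale arrays and alignment is a brute-force integer shift (the real pipeline described above uses a custom PyTorch script, not this code):

```python
import numpy as np

def align_shift(ref, frame, max_shift=3):
    """Find the integer (dy, dx) shift that best matches `frame` to `ref`."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def stack(frames):
    """Align every frame to the first one, then average to boost SNR."""
    ref = frames[0]
    aligned = [np.roll(f, align_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)

# Toy demo: the same image shifted by known offsets stacks back exactly,
# because np.roll shifts are circular and perfectly invertible.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
frames = [np.roll(base, (dy, dx), axis=(0, 1)) for dy, dx in [(0, 0), (1, 2), (-2, 1)]]
result = stack(frames)
print(np.allclose(result, base))  # True on this toy case
```

Averaging N aligned frames reduces the standard deviation of the pixel noise by a factor of sqrt(N), which is why step #2 matters so much for faint nebulae.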
2023-02-19 14:53:35 From @Marktechpost: a description of our latest work on Image Understanding Through Contextual Phrase Detection, by a team from NYU consisting of @ashkamath20, Sara Price, Jonas Pfeiffer, me, and @alcinos26. https://t.co/HdARCI49aB
2023-02-19 04:14:15 Eastern Veil nebula / NGC 6992 2400 light-years away. Shot in June 2021 in my NJ backyard. Scope: Celestron RASA 11", F2.2 Camera: ZWO ASI2600MC Filter: Radian Triad quad narrow band. Exposures: 65 shots, 300 seconds each. https://t.co/iSoINvhoDF
2023-02-19 03:50:11 @jgvfwstone @alfcnz @y0b1byte @Adobe @Apple @googlechrome For that demo, I wrote a "compiler" that took the Lisp data structure for the ConvNet and produced a standalone C code that could be compiled for the DSP32C board. The weights and the network topology were hardcoded as literals in the C program (no file system on the DSP board).
2023-02-19 03:45:08 @Marco20307855 @alfcnz @y0b1byte @Adobe @Apple @googlechrome I should say, the SunOS version had bits in assembly to make convolutions go fast.
2023-02-19 03:44:02 @Marco20307855 @alfcnz @y0b1byte @Adobe @Apple @googlechrome In C, using Emacs and gcc. The GNU tools had been ported to AmigaOS.
2023-02-19 03:42:32 @DrYousefSharrab @alfcnz @y0b1byte @Adobe @Apple @googlechrome 3.5
2023-02-18 22:52:22 @JohnBlackburn75 @patrickmineault Yes, but slowly. No one uses transformers for segmentation. It's too inefficient. And it's impractical for video.
2023-02-18 22:49:02 @alfcnz @y0b1byte @Adobe @Apple @googlechrome Léon Bottou and I developed our neural net simulator SN on our Amiga 1000s with 512KB of RAM and no hard drive. Just floppies (in 1987). That's what I used to train the first ConvNets, after porting it to SunOS.
2023-02-18 21:54:49 @SebastianSeung I'm a ..... fraaaaid.
2023-02-18 16:33:06 @patrickmineault Regarding weight sharing, or lack thereof in biology: you don't need weight sharing if the training is essentially self-supervised. Repeated feature detectors will naturally emerge from self-supervised learning because the local statistics of images are essentially stationary.
2023-02-18 16:31:06 @patrickmineault This combination of a ConvNet front-end and transformer back-end is akin to the DETR architecture, which is my favorite one for vision. https://t.co/mm8jeS99uK ...
2023-02-18 16:29:04 @patrickmineault Well, at best, ConvNets would be a good model of the *foveal* portion of the ventral pathway until V4 or PIT. After that, the representation is more object based than retinotopic. So a transformer (which is equivariant to permutations) would seem more appropriate. ...
2023-02-18 14:57:32 @gauravontwit You got this exactly backward.
2023-02-18 14:55:35 @chribeut Source: https://t.co/ihX6expw4T https://t.co/mJhkpDBy36
2023-02-18 14:43:05 @rakmasterg Whisper is a deep learning architecture that uses transformer blocks, like many DL systems these days, including LLMs. It's not technically an LLM, even if the decoder module generates tokens one by one auto-regressively, like a language model does.
2023-02-18 14:32:49 @99frqsnpxf @cwolferesearch In fact, it helps them.
2023-02-18 14:31:57 @SohoJoeEth Sadly, many "investors" share your opinion. Their focus on short-term profits blinds them to the mechanisms of innovation. This is why successful tech companies like Google and Meta have structured themselves to minimize pressure from Wall Street short-termism.
2023-02-18 14:16:48 @landsheapes @AlanMorte @OpenAI Running these things requires a lot of computation. It's not cheap. They can only run a deficit for so long.
2023-02-18 14:15:00 @_ash_ran @AlanMorte @OpenAI The metric to optimize is a combination of several criteria: user satisfaction, user well-being, impact on society, and yes revenue. It's always a trade-off. For example, you can show more ads to get more revenue in the short term, but you risk turning people away in the long run
2023-02-18 14:10:02 @_ash_ran @AlanMorte @OpenAI The positive impact of AI? Connecting people with each other &
2023-02-18 13:58:47 RT @scienceisstrat1: Everyone’s talking about AI. Below is a thread on Big Tech’s full-on embrace of artificial intelligence. For starters,…
2023-02-18 13:54:36 @Master4Cad This was a late-1986 Amiga 1000 with 512KB of RAM and no hard drive (just floppies), using Emacs and gcc.
2023-02-18 13:50:06 @rasbt YMMV
2023-02-17 22:37:07 LLMs have a very superficial understanding of the physical world. An article by @Kantrowitz that draws on my session with him on his podcast. https://t.co/5Y03gLdkGD
2023-02-17 21:38:21 @RMajdoddin @AlanMorte @OpenAI Because the company that owned it lost its market position and started losing money.
2023-02-17 21:25:22 @cwolferesearch You are wrong. Many companies can take advantage of new research because of their position on the market. The fact that other companies can use the research too does not hurt them one bit.
2023-02-17 19:33:17 @amitmate2010 @AlanMorte @OpenAI Exactly. Microsoft also uses PyTorch.
2023-02-17 18:51:40 @SohoJoeEth This further confirms that you *really* have no idea what you're talking about. FAIR has had one of the largest returns on investment of any initiative at Meta.
2023-02-17 18:40:26 RT @NYUDataScience: Join us in congratulating CDS Director, Julia Kempe, who has just been named Julius Silver, Roslyn S. Silver, and Enid…
2023-02-16 23:17:30 RT @xamat: My Transformers Catalog has become one of my most popular posts ever. Some of you told me that you turned into a pdf for easier…
2023-02-16 20:11:39 Exciting times to be involved in AI, as a founder, as an investor (like AIX Ventures), and as a scientist. https://t.co/JRbEveCK9H
2023-02-16 18:56:14 RT @rao2z: After a long hiatus, I wrote another piece for @thehill -- on Beauty, lies &
2023-02-16 18:44:47 @NektariosAI Pretty accurate.
2023-02-16 18:39:54 @yacineaxya At least, Galactica never insulted anyone
2023-02-16 18:14:29 RT @randall_balestr: 3 ICASSP papers! -the infamous POLICE that provably tames the beast (DN) to obey input space constraints https://t.co/…
2023-02-16 15:00:16 GenAug is more than a dataset: it is a method for augmenting existing robot behavior data to new scenarios.
2023-02-16 12:54:19 RT @pierrepinna: Toward #AI Systems that can Learn Reason &
2023-02-16 12:48:38 New robotics dataset from @MetaAI - FAIR. https://t.co/m0EwaBM7jf
2023-02-16 12:01:05 RT @togelius: I think the intellectually honest approach to LLMs is to be interested in both the (sometimes astonishing) successes and the…
2023-02-16 04:26:38 RT @deviparikh: Ha, all AI art looks the same indeed :) ( a.k.a. wink wink) #AllAIArtLooksTheSame #genartclub #aiart #generativeart
2023-02-15 18:02:48 RT @MetaAI: ROSCOE is a first-of-its-kind suite of metrics for scoring step-by-step reasoning. We hope this work provides a foundation that…
2023-02-15 15:15:23 My answer: no! Obviously.
2023-02-15 13:50:02 @CSProfKGD @sitzikbs Indeed, two separate patents. The 1st one was for ConvNets with strided convolutions. The 2nd one was for ConvNets with separate pooling/subsampling layers. In 1996, AT&
2023-02-15 03:27:01 @cwolferesearch Not good enough, not soon enough.
2023-02-15 01:39:07 @AndreTI The space of possible answers is too large for this to have any hope of working. This is essentially what people use today and call RLHF.
2023-02-15 01:37:21 @verstaen @gassee Expectedly, my answer is hell no!
2023-02-15 01:36:38 @iandanforth That's like throwing the baby out with the bathwater.
2023-02-15 01:35:56 @ayazdanb No, not even then.
2023-02-15 01:35:11 @TSR119 You are not wrong.
2023-02-15 01:34:13 @Jeff_Aronson Yes
2023-02-15 01:12:02 Excellent paper in which the word "criti-hype" is coined. Criti-hype designates the kind of academic and non-academic work that magnifies the imagined dangers of a new technology, feeding on and mirroring the hype from the advocates of said technology. https://t.co/aE85ZQNBYU
2023-02-15 00:22:27 RT @beenwrekt: Terrifyingly hilarious overview of an insane number of mistakes in last week’s Bing/ChatGPT demo. Why did Google lose 10% of…
2023-02-15 00:20:55 Will Auto-Regressive LLMs ever be reliably factual? https://t.co/1ujZ3tDiq7
2023-02-14 23:00:16 @svlevine @IanOsband @_aidan_clark_ @CsabaSzepesvari Shouldn't that be "learned control" or "control learning" or even "learning to control"? LTC has a ring to it, no?
2023-02-14 17:57:13 The big challenge for AI dialog systems over the next year or so is to make them factual, non-toxic, up to date, and capable of using tools like calculators, databases, search engines, simulators, or in this case, a simple calendar with today's date. https://t.co/bhBVMFigQ5
2023-02-14 17:52:31 Fantastic talk on the reaction of media and the public to technological (r)evolutions, particularly AI, particularly generative AI. Oscillating between hype and criti-hype and the inevitable moral panics. @DrTechlash is my favorite person on the Interwebz today. https://t.co/vZU2ShRB2O
2023-02-14 17:29:44 @DrTechlash I like the word "criti-hype" and your analysis of that phenomenon.
2023-02-14 17:14:17 RT @Kris10collie: ChatGPT is nothing "revolutionary", to borrow @ylecun's words as well. It is merely the first of its…
2023-02-14 17:13:58 An interview with the director of @MetaAI - FAIR (Fundamental AI Research). LLMs are useful, but limited. And their capabilities come as no surprise to researchers in the field. https://t.co/FA3JzDVyCC
2023-02-14 17:08:13 Tomorrow! https://t.co/aSKcqRLdvx
2023-02-14 17:08:00 RT @NYU_Courant: Looking forward to @ylecun's lecture tomorrow:
2023-02-14 17:00:31 RT @scienceisstrat1: Britain and Italy are the two sick men of Europe Cc: @Noahpinion @nfergus @sullydish @RanaForoohar https://t.co/wIkBT…
2023-02-14 12:21:03 @IanOsband @_aidan_clark_ @CsabaSzepesvari Whenever exploration is necessary, some RL results are useful. Whenever the objective is unknown, it must be evaluated through actions (one way to do so is to train what has come to be known as a "reward model").
2023-02-13 15:46:34 @bitcloud Yes, several months ago, as in May 2022, when I published this 60-page vision of a path towards Human-Level AI. https://t.co/7ZgRtLJoMw
2023-02-13 15:21:07 Scaling up auto-regressive LLMs will make them ascend to human-level AI as much as scaling up parachutes will make them climb to the stratosphere. How's that for a corny metaphor?
2023-02-13 15:16:52 RLHF is even more inefficient on trolls than it is on auto-regressive LLMs.
2023-02-13 15:13:29 @atomless Humanity has lived with "beliefs of made-up nonsense" delivered with "authoritative bluster" for millennia. It's called religion.
2023-02-13 14:42:49 @artemon Whatever "reasoning" an LLM built around a 50-stage transformer can do, it has to do it within 50 computational steps. Reasoning generally involves a variable and potentially very large number of steps.
2023-02-13 14:33:26 @ShafronTom YMMV
2023-02-13 14:19:20 @Kashten_dot [hint: they are wrong]
2023-02-13 14:16:14 RT @mathemagic1an: My thoughts on Toolformer IMO the most important paper in the past few weeks. https://t.co/4IDciigbkc Teach an LLM to…
2023-02-13 14:00:23 @Kashten_dot Non-n00bs say "RLHF will fix this", and lots of people with lots of money seem to believe them.
2023-02-13 13:58:54 @deepconvonet You got yourself an idea for a startup.
2023-02-13 13:57:49 14. Unlike what the most acerbic critics of Galactica have claimed - LLMs *are* being used as writing aids. - They *will not* destroy the fabric of society by causing the mindless masses to believe their made-up nonsense. - People will use them for what they are helpful with.
2023-02-13 13:41:00 13. Why do LLMs appear much better at generating code than generating general text? Because, unlike the real world, the universe that a program manipulates (the state of the variables) is limited, discrete, deterministic, and fully observable. The real world is none of that.
2023-02-13 13:37:29 @implisci Code generation is easier because, unlike the real world, the underlying reality of code is simple, discrete, deterministic and fully observable. And there is a relatively small number of basic concepts that cover most of it.
2023-02-13 13:35:22 @rahulyedida13 Absolutely not.
2023-02-13 13:34:58 @Sokiosque It saves a lot of typing and may improve your style. It also relieves you of the fear of the blank page. Not to mention that it's a big help for non-native speakers and people who find writing painful. BUT your hands must remain on the keyboard at all times.
2023-02-13 13:23:29 12. Being clear that better systems will appear, but that they will be based on different principles. They will not be auto-regressive LLMs.
2023-02-13 13:21:41 I have been consistent while: 9. defending Galactica as a scientific writing aid. 10. Warning folks that AR-LLMs make stuff up and should not be used to get factual advice. 11. Warning that only a small superficial portion of human knowledge can ever be captured by LLMs.
2023-02-13 13:16:31 @arthur_spirling As a Frenchman, in whose country cheese and mustard are revered with quasi religious fervor, this is neither mustard nor cheese.
2023-02-13 13:08:51 RT @_dmoser: High-speed railway construction per country, 1976-2019 https://t.co/Xb17rwdk0t
2023-02-13 13:05:24 6. Current LLMs should be used as writing aids, not much more. 7. Marrying them with tools such as search engines is highly non trivial. 8. There *will* be better systems that are factual, non toxic, and controllable. They just won't be auto-regressive LLMs.
2023-02-12 18:51:24 @JrKibs No. Token-by-token auto-regressive LLMs don't do any planning.
2023-02-12 18:46:24 @RiverRidley At Meta, RL also means Reality Labs. Hash collisions.
2023-02-12 18:45:50 @DavidSHolz The main purpose of RL research should be to minimize the use of RL.
2023-02-12 18:45:11 @Abel_TorresM If you need to do planning, just do planning. No need for RL. That's optimal control if the state and/or action space is continuous. You only need RL for 2 things: (1) if your objective function is unknown (2) if your model of the world needs to be learned by taking actions.
2023-02-12 17:34:48 Even I haven't been that harsh against RL. https://t.co/d6RRQRagOY
2023-02-12 17:19:44 RT @kchonyc: if you want to know more about it, check out my slide deck at https://t.co/1pK0WU2IYM
2023-02-12 11:20:03 @benedictevans I use Idagio.
2023-02-12 11:13:58 @francoisfleuret @paulg @peterboghossian Perhaps. But the point remains.
2023-02-12 10:51:45 Planning &
2023-02-12 10:36:59 @csabaveres That's a ridiculous oversimplification of what I said.
2023-02-12 10:31:20 @shai_s_shwartz @MetaAI Nice.
2023-02-12 10:22:32 Haha! https://t.co/kin3eZerP4
2023-02-12 09:31:53 RT @NandoDF: Funny @sirbayes Learning methods — supervised, RLHF, policy gradients, Dagger, self-training — can be seen as optimisation wit…
2023-02-11 13:03:47 RT @MetaNewsEMEA: Are you attending #WAICF23? Join us as Vice-President and Chief AI Scientist at Meta AI, @ylecun takes to the stage to t…
2023-02-11 13:00:18 @maximeae https://t.co/njR1O0JAVS
2023-02-11 12:55:55 @mgubrud @Kantrowitz Actually, language occupies a tiny portion of the cortex: the Broca and Wernicke areas. And much of human knowledge, and *all* of animal knowledge, is entirely non linguistic. https://t.co/XK6SdxRGjy
2023-02-11 12:42:15 Spontaneous Q&
2023-02-11 12:38:02 @alrhemist @MetaAI @OpenAI You may not realize that without scientific and technological advances such as the ones described in this paper, there would be no products. Products do not just appear out of thin air.
2023-02-11 08:54:19 RT @Kantrowitz: A highlight from my conversation with @ylecun: Chatbots only learn from text, a severe limit on their intelligence since so…
2023-02-11 08:28:37 ToolFormer: an LLM that can use tools: Search engines, calculators, etc... From @MetaAI - FAIR. https://t.co/ALuWTlnMO9
2023-02-11 08:25:14 ToolFormer: LLMs that can teach themselves to use tools, like calculators, database queries, search engines,.... From @MetaAI - FAIR. https://t.co/zZ6eqt9hRE
2023-02-11 08:21:30 RT @DrHughHarvey: "Current AI tools (*meaning LLMs) should not be used by patients or doctors to answer medical queries. Any medical applic…
2023-02-11 08:20:31 @lxbrun Agreed. But I don't believe the problems with LLMs are fixable within the current paradigm. The fix will require changing the paradigm so much that they will no-longer be LLMs.
2023-02-09 14:16:15 Thank you @jnbarrot for this exchange on the major trends in AI research, the role of public-private partnerships, and the contribution of FAIR @MetaAI to open research and open-source software. France is home to a fertile ecosystem that fuels progress. https://t.co/ySbM0gEUWv
2023-02-09 14:10:06 A pleasure to talk with @jnbarrot. Meta-FAIR contributes greatly to the R&
2023-02-09 09:11:31 @DankSlay69420 If you detect, segment, and name background surfaces as well as objects, it's called "panoptic segmentation."
2023-02-09 09:00:41 RT @jayabdulraman: Startups have more leeway for mistakes than big tech companies. ChatGPT3 has made many errors but no dent on OpenAI. A s…
2023-02-08 13:34:22 @i_am__Alono @doristsao Physical. Everything is a collective phenomenon emerging from simple components in interaction. This makes it possible to abstract away the details and describe reality at various levels of detail and abstraction.
2023-02-08 13:30:45 @MinhaHwang There is the name of the function, and the name of the output of that function.
2023-02-08 13:26:28 @tunguz Is it named after a common fruit?
2023-02-08 13:25:31 RT @MetaFrance: Will you be at #WAICF23? Join @Yannlecun, Vice-President and Chief AI Scientist at @MetaAI, for a discussion on the…
2023-02-08 13:25:13 Yeah, why? Also true of restaurants. https://t.co/EOg5Qwb2um
2023-02-08 09:02:25 @killerstorm @DrHughHarvey Meta, Google, &
2023-02-07 17:55:58 @chris_j_paxton I had made that point for Galactica, but it seemed to fall on deaf ears. As a non-native English speaker myself, writing technical papers in English was literally torture, and I wish I had had access to something like Galactica when I started my career.
2023-02-07 17:51:12 @Phillips_M_G No. Because generic chatbots can also be used to help write scientific papers (badly). Both are writing-assistance devices (predictive keyboards on steroids). Both can save time. Both can make stuff up and require human supervision.
2023-02-07 17:47:01 A single task that subsumes object detection and language understanding. From #NYU. https://t.co/svCusDzJmM
2023-02-07 17:44:14 More analysis of the varied public reception of AI tools from Big Tech and Small Tech. https://t.co/2ehwTHwZVp
2023-02-07 17:42:14 RT @rao2z: The reason Galactica was taken out and ChatGPT continued on it's merry way has less to do with complaints about the former than…
2023-02-07 15:12:27 @killerstorm @DrHughHarvey LLM specifically designates models that generate text. There are tons of applications of large transformer architectures pre-trained with various forms of Self-Supervised Learning. But most of them are not LLMs. LLMs are a special case.
2023-02-07 15:09:05 @nj_tantan @DrHughHarvey The manner is often crisp, but also often wrong.
2023-02-07 15:08:17 John Bridle, who coined the word "softmax", now wishes he had called it "softargmax". He coined the term while working on a paper published in 1989 in Neural Computation in which he describes the "alphanet" model that makes a hidden Markov model look like a recurrent neural net. https://t.co/4D3wppaV9f
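Bridle's "softargmax" point is easy to see numerically: softmax returns a smooth distribution over positions, and sharpening it with a low temperature recovers a one-hot vector at the argmax. A minimal sketch (the `temperature` parameter is an illustrative addition, not part of Bridle's original formulation):

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Numerically stable softmax; low temperature -> closer to a hard argmax."""
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract the max to avoid overflow
    return e / e.sum()

z = np.array([1.0, 3.0, 2.0])
print(softmax(z))                      # smooth distribution, peaked at index 1
print(softmax(z, temperature=0.01))    # essentially one-hot at argmax (index 1)
```

The output is a distribution over *positions*, which is why "softargmax" describes it better than "softmax": a soft version of the max *value* would be closer to logsumexp.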
2023-02-07 15:03:48 @no_reward_for_u @doristsao Stuart Geman, his brother.
2023-02-07 15:01:50 @DevDminGod That's pretty much what people thought in the 1950s
2023-02-07 15:01:22 @CClavius Indeed.
2023-02-07 15:01:04 @SujithK08852029 It says more about the inadequacy of the tests than about the adequacy of ChatGPT.
2023-02-07 15:00:15 @crypto1o1_karim Yes.
2023-02-07 14:59:49 @cichuck But they are wrong.
2023-02-07 14:59:36 @odedbendov I'm not saying that predictive typing is useless!
2023-02-07 14:57:43 @ChrSzegedy Just like driving assistance for cars. It's not fully autonomous but it's still useful.
2023-02-07 13:16:48 @doristsao The exact quote is "the world is compositional or there is a god", and I got it from Stuart Geman.
2023-02-07 13:14:27 @KrzakalaF @deliprao The distribution that minimizes the free energy.
2023-02-07 13:13:11 @KrzakalaF @deliprao Gibbs-Boltzmann distribution.
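The exchange above refers to a standard variational identity. A sketch, using the usual conventions (E is an energy function, T a temperature, H the entropy; these symbols are the standard ones, not taken from the tweets): among all distributions p over x, the free energy

```latex
F[p] \;=\; \mathbb{E}_{p}[E(x)] \;-\; T\,H[p]
      \;=\; \sum_x p(x)\,E(x) \;+\; T \sum_x p(x)\log p(x)
```

is minimized, subject to \(\sum_x p(x) = 1\), by the Gibbs-Boltzmann distribution

```latex
p^*(x) \;=\; \frac{e^{-E(x)/T}}{Z}, \qquad Z \;=\; \sum_x e^{-E(x)/T},
\qquad F[p^*] \;=\; -T \log Z .
```

This follows from setting the derivative of the Lagrangian \(F[p] + \lambda(\sum_x p(x) - 1)\) with respect to each p(x) to zero.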
2023-02-07 13:12:33 They save typing. https://t.co/F9Mz5BXv0T
2023-02-07 13:12:04 @DrHughHarvey Typing.
2023-02-07 13:10:16 @deliprao It's just history. Even John Bridle who coined the name wishes it were called softargmax
2023-02-07 10:55:30 RT @perplexity_ai: Perplexity AI is hiring! If you want to find out more about @perplexity_ai and explore the roles we are looking to fill,…
2023-02-07 07:37:18 @patricksamy Pretty much. And those brain areas are pretty small. What's missing is the back of the brain (perception, motor control), the entire front (reasoning, planning), the bottom (intrinsic motivation, emotions), and the inside (episodic memory). We just have small areas on the side.
2023-02-06 08:45:42 @gabriel_valu I don't have a negative view of ChatGPT. I have a *realistic* view of it. It's useful and fun. It's just not this royal path to human-level AI that some think it is. Also, it makes sh*t up. And that's not an opinion. That's a fact.
2023-02-06 08:30:25 @rsalakhu @beenwrekt Perhaps an absence of intrinsic motivation for this task?
2023-02-05 23:42:34 @3DTOPO @alrhemist I never said LLMs were not useful. In fact, I have strongly argued that they *were* useful against a torrent of vitriol against FAIR's LLM called Galactica (designed to help scientific writing). No such vitriol against ChatGPT it seems, though it makes sh*t up just as often.
2023-02-05 23:35:45 Good piece by @Noahpinion about the limitations of LLMs : "Why does ChatGPT constantly lie?" https://t.co/GPMFEEur9r
2023-02-05 23:21:45 @3DTOPO @alrhemist The "reality" of a program, i.e. its state during execution, is discrete, small &
2023-02-05 23:11:47 @bboczeng It's the exact opposite. Thinking that scaling LLMs will lead to human-level AI is like thinking that making a parachute bigger will allow it to fly like a bird. Whereas we need to understand how bird wings generate lift. Then we can build gliders, airplanes, jets, helicopters...
2023-02-05 22:58:16 @MacGraeme42 @bradysimpson55 LLM specifically refers to architectures (transformers or not) trained to predict the next word. Transformers architectures pre-trained with some form of Self-Supervised Learning are a great tool, and likely to be used in all kinds of future AI systems.
2023-02-05 21:41:07 @csabaveres @andrewgwils Rachmaninoff merely prolonged a style he didn't invent to its apotheosis and inevitable doom. Stravinsky invented something completely new.
2023-02-05 21:36:40 Haha! Not wrong. https://t.co/C4XaNsBnGu
2023-02-05 21:34:41 @OriolVinyalsML @elonmusk What I mean by LLM is: "A system trained to predict the next word, and used to produce the next word reactively &
2023-02-05 21:21:54 @MacGraeme42 @yannx0130 No. It's more complicated than that. https://t.co/7ZgRtLIQWY
2023-02-05 19:21:41 @HugoMe Reinforcement Learning through Human Feedback: a technique to fine-tune dialog systems by having humans score multiple responses to a question.
2023-02-05 19:18:05 @yannx0130 Current LLMs cannot be trained on video. There is a large community of researchers attempting to design systems that can learn how the world works from video. It doesn't work yet.
2023-02-05 19:15:27 @amang1221 @bradysimpson55 https://t.co/7ZgRtLJoMw
2023-02-05 19:12:39 @powerbottomdad1 @alrhemist An LLM. Or a somewhat-clueless software engineer.
2023-02-05 19:09:36 @yannx0130 Regurgitating Python code does not require any understanding of a complex world.
2023-02-05 19:07:52 @zussini About 800 million neurons for cats, and about 2 billion for dogs (and parrots).
2023-02-05 19:06:48 @benalsop Dogs have more than twice as many neurons as cats.
2023-02-05 19:06:07 @VladicaV @yudapearl No, they are not mutually exclusive. In fact, my proposal includes learning causal world model.
2023-02-05 19:04:07 @bradysimpson55 I'm not saying LLMs are not useful. They are. They just aren't on the path towards Human-Level AI. At least in their current form.
2023-02-05 19:00:07 @alrhemist One can regurgitate Python code without any understanding of reality.
2023-02-05 18:34:01 Scientific debates on social media are like a human form of bidirectional RLHF. The person making the post gets feedback (good and bad). The commenters also get feedback, mostly when they are clueless or wrong.
2023-02-05 15:53:09 @j_u_le_s That's just Twitter.
2023-02-05 14:08:34 Yes. https://t.co/z6S8KcjApP
2023-02-05 11:24:08 But this is not to say that LLMs in their current form are not useful. Or fun. They are.
2023-02-05 11:19:54 Why learning from text is insufficient for intelligence. https://t.co/XK6SdxRGjy
2023-02-05 11:16:54 My proposal for an architecture that reason, plan, and learn models of reality. Paper: https://t.co/7ZgRtLIQWY Talk: https://t.co/hwXwkLs1M1
2023-02-05 11:08:02 @firthvansvic Both.
2023-02-05 11:05:45 @elonmusk @OriolVinyalsML If you want non-petty, substantial, in-depth debates, you are better off on Facebook https://t.co/M4zp5CasUk
2023-02-05 10:41:06 To clarify: LLMs that auto-regressively &
2023-02-05 10:36:26 @KieronScully @hughhowey I haven't moved my goals in years. But I have changed the path quite a few times, and probably will a few more times. https://t.co/7ZgRtLIQWY
2023-02-05 10:34:20 @yoavgo Well, I am. I think the concept of HLAI is both more sensible and more testable than the amorphous concept of AGI.
2023-02-05 10:32:23 @andrewgwils Stravinsky somewhere?
2023-02-04 22:41:50 @elonmusk @OriolVinyalsML It's neither petty nor real beef. More like a minor divergence of opinions magnified into a non-existing beef. That's why we simultaneously love and hate Twitter. Still, I think LLMs are missing essential features for HLAI. And I doubt @OriolVinyalsML actually disagrees.
2023-02-04 19:07:25 The first big success story of Self-Supervised Learning is large-scale transformers pre-trained as denoising auto-encoders (BERT-style) for various downstream NLP tasks (translation, content filtering/ranking). LLMs are a special case of the above that became useful years later. https://t.co/umgHCCaGTs
2023-02-04 19:00:07 @freddiekarlbom @traderyau No, but it could be considered dog-level intelligence, which is smarter than any LLM.
2023-02-04 18:59:09 @JohnBlackburn75 @traderyau Your dog is way more intelligent than any LLM.
2023-02-04 18:57:24 @OriolVinyalsML There will be no need.
2023-02-04 18:55:33 @DrTc666 Between 1996 &
2023-02-04 18:52:14 @DrTc666 Not completely true. In 1996, AT&
2023-02-04 14:08:53 @IKoullias The ad campaign was in 1993, I think. I was at AT&
2023-02-04 13:58:13 @Mnemomeme @beenwrekt I spent one month at Xerox PARC as a summer intern in 1984. In the late 1990s, my DjVu team at AT&
2023-02-04 13:53:50 @Youness_ELM There are lots of things that are very useful in practice, but not particularly relevant to progress towards HLAI.
2023-02-04 13:52:13 @Golisms One of the most interesting aspects of Cicero is its ability to plan. This ability to plan is a necessary component of autonomous intelligence and is completely absent from current LLMs.
2023-02-04 13:50:45 @FIQureshi1 Also useful as a tool, with interesting underlying concepts (like diffusion models), but definitely not on the highway towards HLAI.
2023-02-04 13:49:29 @OptimalBayes It's a shiny casino you see off the highway. You can take the off-ramp, spend your money in the casino, and perhaps even win. But you risk forgetting why you were on the highway in the first place.
2023-02-04 13:46:03 @mapto No. That paper claimed that LLMs &
2023-02-04 13:41:51 @rasbt LLMs are useful. Car accidents, not so much.
2023-02-04 13:40:34 @DarrylMason Irrelevant.
2023-02-04 13:39:17 @c7ddfc Transformers, like ConvNets and a few other architectural concepts, are clearly useful, both as tools towards HLAI and as components of practical applications. Getting machines to learn intuitive physics is an important but yet unsolved problem.
2023-02-04 13:35:27 @mvuksano https://t.co/7ZgRtLIQWY
2023-02-04 13:33:16 @twishmay https://t.co/7ZgRtLIQWY
2023-02-04 13:32:30 @jpFromTlon You can't "solve alignment" until you know how the system that you want to align is built. And no one knows, at this time.
2023-02-03 18:03:33 @jhoang314 s/Nets/Meta/
2023-02-03 18:02:44 @Raamana_ No one has anything to be ashamed of. And no one is trying to shame anyone. But someone is trying to explain that for innovative products to come out of startups, large research labs have to practice open research and be *very* generous with their IP.
2023-02-03 13:52:57 RT @MetaAI: MultiRay is Meta’s platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an embeddin…
2023-02-03 13:45:10 @urigolan The very opposite. Like many research scientists, I would never work for an org that focuses on developing products in secrecy.
2023-02-03 13:43:21 @karger Google AI: untold thousands. DeepMind: 1500, perhaps half focusing on research. Meta-FAIR: 600, plus some folks from other orgs publishing on AI. OpenAI: 375, now largely focused on products and applied research.
2023-02-03 13:36:57 @djmalvarado That's because OpenAI being a startup, they've had to focus on high wow-factor flashy demos and product development, at the expense of research, so as to attract investments. Nothing wrong with that, if that's what you need to do.
2023-02-03 13:33:15 @Lingman Meta-FAIR has about 600 people. Some publications come out of other orgs though. DeepMind has about 1500 people, but the research core is smaller.
2023-02-03 13:23:47 @bill17472148 @DC__64 No. But the health of the AI R&
2023-02-03 09:57:50 @DC__64 The expression you might have been looking for is "go off the deep end". I can reassure you that explaining the mechanisms of innovation does not make one descend into insanity. Twitter, however ....
2023-02-03 09:52:54 @YashRathod_75 @DeepMind They have become a bit more secretive lately.
2023-02-03 09:49:25 @tinkerteller @carlesgelada You are probably right.
2023-02-03 09:46:29 @hahatango Google &
2023-02-03 08:55:04 @3DTOPO They absolutely *do* use PyTorch.
2023-02-03 08:54:28 @carlesgelada You could try to measure the impact of every paper by looking up their number of citations in Google Scholar. That would be a major undertaking.
2023-02-03 08:51:28 @jhoang314 It's important to understand the dynamics of innovation. The reason why AI is progressing so fast is *precisely* because Nets &
2023-02-03 08:48:59 @jhoang314 But the fact is that most of the ideas, techniques and tools used by OpenAI came from Google, FAIR &
2023-02-03 08:44:55 @jhoang314 Hate? Where? We do work together. OpenAI uses PyTorch, which was developed at FAIR. PyTorch 2.0 uses the Triton back-end compiler which was developed at OpenAI. OpenAI use transformers and RLHF which originated at Google &
2023-02-03 08:41:47 @rsdenijs @__dipam__ @JrKibs FAIR includes a group called NextSys that works on AI infra for research. Both FAIR and OpenAI use PyTorch as their DL framework.
2023-02-03 08:40:02 @rsdenijs @__dipam__ @JrKibs AI infra was a relatively small group within Meta AI, and was transferred to the main infra group almost a year ago. Much of their activity is on AI support in production. OpenAI relies on MS Azure for their production infra.
2023-02-03 08:33:43 RT @tvykruta: Facebook releases a 30B param “OPT+IML” (Open Pre Trained + Instruction Meta Learned) model fine tuned on 2000 tasks. Availab…
2023-02-02 19:05:33 @__dipam__ @JrKibs FAIR has about 600 people.
2023-02-02 18:44:13 @maartengm A good chunk of publications from Meta come from FAIR-Paris. Many of them are on collaboration with Inria and universities through CIFRE resident PhD students.
2023-02-02 18:36:51 @JrKibs You don't seem to know very far.
2023-02-02 18:02:22 @alisabets @johnjnay @TheEconomist @stateofaireport Without those publications and open-source code published by what you call "paper mills", there would be no OpenAI.
2023-02-02 17:59:11 Data on the intellectual contribution to AI from various research organizations. Some of these organizations publish knowledge and open-source code for the entire world to use. Others just consume it. https://t.co/BGxTP1lkXB
2023-02-01 18:23:35 Blind map building. https://t.co/MnNDIqA19Y
2023-02-01 17:25:16 RT @KevinZollman: I am growing really tired of the "ChatGPT is going to replace Google" dialog. (Hint: only one of those four publicatio…
2023-02-01 16:52:42 @hemanthkumarak That's false. Most of human knowledge and all of animal knowledge is completely non-linguistic.
2023-02-01 16:48:38 @SaraASolla @francoisfleuret You can always get the same effect as an explicit mean cancelation by manipulating the weight update formula of the following layer.
2023-02-01 12:37:53 @hemanthkumarak Yes, it matters. You can use LLMs to help you write. But you don't want to believe that you can let them research, think, and write.
2023-02-01 12:36:03 @alexandersumer It's much worse than that. LLMs *cannot* model reality in their current form.
2023-02-01 12:34:49 @realkrats No. Realistic.
2023-02-01 12:33:42 @andrewryann What do you think we've been doing?
2023-02-01 12:33:14 @Mingke Not just lower dimension, but also discretized.
2023-02-01 12:32:40 @erikkartman That's false. Some neural nets, with proper architectures, can think, plan, and infer. It's just that current LLMs can't.
2023-02-01 12:31:10 @RachelVT42 Right.
2023-02-01 12:30:44 @studyouwei No. Learning from text absolutely does not enable LLMs to learn logic. And prompt engineering does not even begin to solve the problem.
2023-02-01 12:29:00 @tuxtedi No one is saying LLMs are not useful. I have forcefully said so myself, following the short-lived release of FAIR's Galactica. People crucified it because it could generate nonsense. ChatGPT does the same thing. But again, that doesn't mean they are not useful.
2023-02-01 12:20:37 RT @honualx: If you are interested in language modeling for audio / music generation , remember that Encodec provides high quality discre…
2023-02-01 08:48:23 RT @gabrielpeyre: With @joanbruna we are organizing a conference to celebrate Stéphane Mallat's 60th birthday. It will be in IHES near Pari…
2023-02-01 08:11:48 Language abilities != Thinking. Or why LLMs such as ChatGPT can eloquently spew complete nonsense. Their grasp of reality is very superficial. https://t.co/rT2XhJB72G This piece in the Atlantic comments on a paper by the MIT Cognitive Science crowd https://t.co/Q4OPaMnUKW
2023-02-01 07:12:49 @RealFade - The R done primarily at FAIR &
2023-02-01 06:53:52 @silfen2 Essentially. Nice packaging.
2023-02-01 06:52:58 @__goldfinger The revolution started before that. You just didn't know about it.
2023-02-01 06:50:17 @dgreschler - deep learning - differentiable associative memory / attention circuits - transformer architectures - self-supervised learning All of which are used in modern natural language processing systems, including LLMs, including chatGPT, and many others.
2023-02-01 05:54:09 Not false. The story with any new technology: - if it spreads quickly, it's because it's useful. - people learn to use the new tech for what it's useful for. - they adapt so as not to get harmed by its limitations *if* they are well informed. - the young &
2023-02-01 05:39:27 Interesting proposal of "digital pharmacies" which would distribute approved health-related apps. The app approval process would guarantee regulatory compliance while being independent of the Google and Apple app stores, which have conflicting interests. https://t.co/7yUyIjVsqN
2023-01-31 21:59:49 RT @WeijiaShi2: Enhancing GPT-3 with world knowledge: Introducing REPLUG: a retrieval-augmented LM framework that combines a frozen LM w…
2023-01-31 21:55:50 RT @arankomatsuzaki: REPLUG: Retrieval-Augmented Black-Box Language Models REPLUG with the tuned retriever significantly improves the perf…
2023-01-31 21:42:01 RT @NYU_Courant: Looking forward to Professor Yann LeCun's lecture at the Air Force Office of Scientific Research on Wednesday, February 15…
2023-01-31 21:38:07 @bitcloud You are gravely mistaken about the current capabilities of LLMs.
2023-01-31 21:33:23 @SapkotaTsuman @francoisfleuret @trekkinglemon It could. Except it's always placed before the ReLU, not after.
2023-01-31 21:29:39 @togelius Yeah, "an AI", I just hate that.
2023-01-31 21:26:47 RT @hardmaru: The reason no one enjoys an unassailable advantage is that AI knowledge diffuses quickly. “Researchers from all the competing…
2023-01-31 21:25:36 From the head of product at OpenAI who just left OpenAI. https://t.co/gpwGCBrvWi
2023-01-31 21:16:24 RT @BaghliNacym: Why #Meta’s @ylecun Is An #AI GodFather And Why #ChatGPT3 Is Not Revolutionary… https://t.co/V2CbQTxIa7
2023-01-31 21:11:29 RT @antonabramov: Reading Yann LeCun about #ChatGPT and related things is a glass of water in a desert. Thank you for speaking out @ylecun
2023-01-31 21:02:15 @NYU_Courant @AFOSR Feb 15th!
2023-01-31 21:01:48 Wednesday Feb 15. https://t.co/AFg8i5lsYU
2023-01-31 18:49:34 RT @DrHughHarvey: Has anyone yet figured out exactly why VCs are so attracted to generative AI that that can produce infinite amounts of bu…
2023-01-31 08:26:50 RT @togelius: Saying "an AI" is definitely a red flag, but an even bigger red flag these days is arguably talking as if ChatGPT was all of…
2023-01-30 20:24:02 An interview with Le Matin, the major French-language Moroccan newspaper, in which I talk about artificial intelligence, the state of the art, and the future. https://t.co/HyYOl4ijKE
2023-01-30 20:15:47 @csabaveres @collectivei @ZDNET @TiernanRayTech That's just false. In my (long) experience, corps abandon fundamental research when they can no longer afford it. Whenever they "stop seeing its economic benefits", that's often because of a change in top leadership and/or an excessive response to pressure from Wall Street.
2023-01-30 19:43:58 @SayahHajji @francoisfleuret I'm using the definition f(kx) = kf(x)
2023-01-30 19:39:32 @francoisfleuret [these conditions are not all simultaneously satisfiable]
2023-01-30 19:38:09 @EdMaltinho @francoisfleuret That said, I have advocated for things like tanh instead of sigmoid for about 35 years, precisely for that reason.
2023-01-30 19:36:39 @EdMaltinho @francoisfleuret Good question. But you can always cancel the mean with a post-nonlinearity bias.
2023-01-30 19:31:55 RT @collectivei: Following @ylecun’s #ciFORECAST, Yann discussed the impact ChatGPT has had on corporate R&
2023-01-30 19:31:27 @oasictech @Noahpinion That's the point.
2023-01-30 19:25:38 @TiernanRayTech @collectivei @OpenAI Haha!
2023-01-30 19:24:46 @LN_Master_Hub @vivek_thakur_81 You don't seem to know what "liberal democracy" means. It has nothing to do with whether the government is on the left or the right. https://t.co/zwIukFvnB7
2023-01-30 19:16:03 @francoisfleuret Use a slanted sawtooth function that globally decreases but locally increases. Train a 1D linear regression with a single training sample x=1, y=1, initial weight w=0. The weight will keep increasing to infinity, while the output will keep decreasing to minus infinity.
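The sawtooth construction in that tweet can be simulated directly. A minimal sketch, assuming a concrete slanted sawtooth of my own choosing (down-slope 0.6, local slope 0.4, learning rate 0.01 — none of these constants are from the tweet):

```python
import math

def f(u):
    # Slanted sawtooth: locally increasing (slope 1 - 0.6 = 0.4 between
    # jumps), but globally decreasing (net drop of 0.6 per unit of u).
    return (u - math.floor(u)) - 0.6 * u

# 1D regression with a single training sample x=1, y=1, initial weight w=0.
w, y, lr = 0.0, 1.0, 0.01
for _ in range(1000):
    # Subgradient of f is 0.4 almost everywhere (ignoring the jump points).
    grad = 2.0 * (f(w) - y) * 0.4
    w -= lr * grad

# w has grown large and positive, while the output f(w) has gone negative
# and keeps heading toward minus infinity.
print(w, f(w))
```

Because f(w) always sits below the target y=1 and the local slope is positive, gradient descent keeps pushing w up, yet the global down-slope drags the output down, exactly the divergence the tweet describes.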
2023-01-30 15:36:51 A new article by ZDnet's @TiernanRayTech on my analysis of recent progress in AI and future opportunities that is considerably more informative and positive than his previous one. https://t.co/gjmOQ6bNHJ
2023-01-30 15:29:06 @TiernanRayTech @collectivei @OpenAI A more positive title than your previous article
2023-01-30 15:26:58 @RespectToX @francoisfleuret [fat fingers] But then you *CANNOT* make it simultaneously nonlinear, differentiable *and* homogeneous.
2023-01-30 15:24:10 @EdMaltinho @francoisfleuret The weights of the following linear layer see variables that are close to zero mean, which is preferable for gradient-based optimization (better conditioning). ReLU breaks that, by the way.
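That point is easy to check numerically. A quick sketch (sample count and seed are arbitrary choices of mine): for zero-mean Gaussian inputs, tanh outputs stay near zero mean while ReLU outputs do not.

```python
import math
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean_tanh = sum(math.tanh(x) for x in xs) / len(xs)
mean_relu = sum(max(0.0, x) for x in xs) / len(xs)

# tanh is an odd function, so its output mean stays near 0 for
# symmetric inputs. ReLU's output mean is roughly
# E[max(0, Z)] = 1/sqrt(2*pi) ≈ 0.399 for standard normal Z.
print(mean_tanh, mean_relu)
```

So a ReLU layer systematically shifts the mean of what the next linear layer sees, which is the conditioning issue the tweet points at.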
2023-01-30 15:22:33 @__z__9 @francoisfleuret If not, you may have two solutions for the same function, hence more local minima, saddle points, etc. May not be a huge problem but still something to consider.
2023-01-30 15:21:12 @OmarSaydThat @francoisfleuret Or in the complex plane
2023-01-30 15:20:47 @RespectToX @francoisfleuret Differentiable almost everywhere is a weak condition that you can make stronger (differentiable, infinitely differentiable, etc). But then you can make it non-linear, differentiable, *and* homogeneous.
2023-01-30 15:18:29 @SayahHajji @francoisfleuret Non linear and homogeneous means piecewise linear with one single "kink", like ReLU.
2023-01-30 10:14:49 RT @NaveenGRao: LLMs/ChatGPT are much more similar to StableDiffusion/Dall-E than search engines as they can fill in plausible details base…
2023-01-30 10:03:52 @francoisfleuret They have to be non-linear, continuous, differentiable almost everywhere, preferably monotonic, possibly homogeneous (equivariant to scaling), and if possible with zero integral over the relevant domain.
2023-01-16 19:28:28 @JrKibs @babgi That's false. There has been significant work on alignment. But the reputational risk and the negative prior are greater for Meta than for OpenAI. Do you remember FAIR's chatbot BlenderBot v3? The excessive emphasis on alignment had made it boring...
2023-01-16 19:25:00 @JrKibs @babgi Wait until OpenAI starts charging for the service... Running ChatGPT is costing them a fortune right now. It can't last.
2023-01-16 19:23:14 @JrKibs @babgi RLHF was proposed by Google/DeepMind. But it's an old idea recycled for the occasion. Google and Bing have been using similar ideas for years to rank search results.
2023-01-16 19:18:24 @jcunniet @babgi Because: 1. They have less need for the publicity generated by this kind of demo. 2. They already have many internal applications. 3. They face a greater reputational risk if the system spouts nonsense (which current LLMs copiously do).
2023-01-16 19:06:27 @babgi Multinational companies are not the enemy, but a partner. What interests them are fertile ecosystems for research &
2023-01-16 19:00:52 @babgi All of this in the name of a somewhat outdated conception of sovereignty. Technological sovereignty and local mastery of new technologies are desirable and admirable goals. 3/
2023-01-16 18:53:43 @babgi The best experts in France on these methods are at FAIR-Paris. FAIR-Paris contributes *enormously* to the French AI research ecosystem. It is regrettable that some French public institutions see FAIR as an enemy rather than a partner.
2023-01-16 18:49:40 @babgi ChatGPT is not particularly innovative. It uses techniques originally developed at Google and Meta (FAIR), which have similar systems in their labs. But those companies are less motivated than OpenAI to deploy public demos. 1/
2023-01-16 13:49:44 RT @ProfSchrepel: .@ylecun: one of the great mysteries of intelligence is the emergence of intelligent behaviors from a network of simple i…
2023-01-16 13:46:53 @NguyenV68228367 The problem is not that we "have not invested" in multimodal systems. The problem is that things of this type that have been tried don't work satisfactorily or are not general enough.
2023-01-15 21:16:15 @RespectToX Scaling up computation is necessary but far from sufficient.
2023-01-15 21:14:28 @Entity3Self Many people didn't realize that scaling up computation was necessary.
2023-01-15 20:44:59 @Kihbernetics Indeed.
2023-01-15 19:17:34 @stemarO_O And many people at DeepMind were, and still are, touting RL as the quick road to AGI. It is their somewhat naïve and overly optimistic original promise that AGI was just around the corner that made Elon (1) invest in them, (2) panic that superhuman AI was soon gonna kill us all.
2023-01-15 19:08:09 @CriticalAI @yoavgo Why hide behind a pseudonym? Who are you?
2023-01-15 19:01:35 @CriticalAI @yoavgo Dude, the purpose of the original post is *precisely* to "contextualize, historicize, and stand back" from the current wave of moral panic. Calling yourself "CriticalAI" doesn't give you a free ticket to insulting people's intelligence by telling them how their mind tends to work
2023-01-15 18:50:36 @stemarO_O The wave of interest in *practical* expert systems in the 1980s wasn't fueled by AGI ambitions. But the original movement of logic-based AI totally was. Many items on my list changed name and became part of the standard engineering toolbox after their goals of AGI were abandoned.
2023-01-15 18:44:23 Excessive enthusiasm from influential CEOs or startup co-founders is dangerous hype, fueled by naïveté, self-delusion, ambition, or some combination thereof. And so is excessive pessimism by influential pundits shooting moral panic ordinances from the bleachers. 3/3
2023-01-15 18:38:25 All of those concepts brought something to the table. But none of them were sufficient. Excessive enthusiasm from a young grad student is charming. They have their entire PhD to confront themselves with reality and renormalize their ambitions. 2/
2023-01-15 18:33:29 Yes, many of us have always been working on "AGI", whatever you mean by that. Perceptron, General Problem Solver, expert systems, machine learning, backprop, RL, SSL, transformers, LLMs...: For some, these were going to be "The One Weird Trick" that was gonna take us to AGI. 1/ https://t.co/dKcZ41cVn7
2023-01-15 18:21:55 RT @NeurIPSConf: You can now watch the recorded material from #NeurIPS2022 online without registration at: https://t.co/BtaKedJGbb
2023-01-15 17:22:14 The Pessimists Archive Newsletter reviews past flare-ups of technology-induced moral panics. They make us laugh now. And shed light on the latest flare-up around generative AI. https://t.co/rMM0v9Y4rU
2023-01-15 17:13:10 @3DTOPO @gaboraya On the mobile FB app: Tap the icon with your portrait in the upper right (to the right of the notifications bell). Then tap the "Feeds" button. You now have 5 tabs: All, Favorites, Friends, Groups, Pages. Tap Friends.
2023-01-15 14:09:36 According to "Web of Nonsense" I'm an [unintentionally comical] green-checked impostor whose real name is Learning Rose, with 3 publications &
2023-01-15 04:05:10 @zmughal @punk3700 I'd totally go for LispTorch
2023-01-14 23:48:02 DisclosedAI? https://t.co/0zwcaNcP7N
2023-01-14 23:42:15 RT @davidchalmers42: call for abstracts for a conference on the philosophy of deep learning, at NYU march 24-26. co-organized by @raphaelmi…
2023-01-14 23:39:45 @tobias_rees There is some.
2023-01-14 20:55:05 @3DTOPO I dunno man. It works well for me. You must be using it wrong.
2023-01-14 20:50:05 RT @scienceisstrat1: Some incredible high-IQ accounts to follow on Twitter if you don’t already: - @Noahpinion - @AlecStapp - @erikbryn…
2023-01-13 00:54:10 @kasstherobot @RasmusToivanen @pmddomingos @huggingface You are right. It's just that going from a cool demo to a scalable, practical, useful product takes time and effort.
2023-01-13 00:50:39 @pmddomingos @scienceisstrat1 @dwallacewells @Noahpinion @MichaelEMann s/Rudy/study/
2023-01-12 17:08:51 "By tapping into Shutterstock's collection of millions of images, videos and music, Meta plans to use these datasets to develop, train and evaluate its machine learning capabilities." https://t.co/CjLnvFXU9r
2023-01-11 23:25:25 @pmddomingos For a research lab to be truly innovative, it needs to have a long leash and not be bound to short-term applications. But if it's completely "independent" it's also either economically unsustainable or constantly looking for funding/sponsors, or both.
2023-01-11 23:20:58 @pmddomingos @scienceisstrat1 @dwallacewells @Noahpinion @MichaelEMann The UW dept of atmospheric science seems pretty mainstream in its Rudy of climate change. https://t.co/1HnoLE02Xd
2023-01-11 23:14:01 @pmddomingos @jim_linz @scienceisstrat1 @dwallacewells @Noahpinion @MichaelEMann I think you should care more about the science than about the politics. If you believe that the scientific consensus on climate change is wrong, you are not naïve but deluded. Do you seriously believe the entire climate science community is either wrong or lying?
2023-01-11 22:22:30 @naivebased @pmddomingos No Google AI, no GPT either. Goog came up with transformers and BERT-style SSL pre-training (though there were similar things long before that). And FAIR research on dialog systems started before OpenAI existed. Point is, GPT didn't come out of a vacuum. https://t.co/LXyajY40Hd
2023-01-11 22:14:06 @stephen_mintz If you get 4000 submissions, it makes sense to accept 1000.
2023-01-11 21:21:08 @AaronHertzmann @vardi Nice piece.
2023-01-11 20:51:30 RT @mhnt1580: New paper We are announcing ReVISE, the first universal audio-visual speech enhancement model powered by SSL. paper: https:…
2023-01-11 18:56:04 @scorpio_dp @pmddomingos None whatsoever. But thanks for your concerns.
2023-01-11 18:55:09 @GoUnlockedVR @pmddomingos Explain to us how WhatsApp is manipulating you.
2023-01-11 18:53:36 @pmddomingos BTW: OpenAI uses PyTorch. So, no FAIR, no GPT. (not mentioning a host of methods used by chatGPT that were originally developed at Google &
2023-01-11 18:50:59 @cwolferesearch @pmddomingos Probably because we focus on *actual* scientific advances, and only occasionally roll out flashy demos. Also, we have a huge impact on Meta operations, albeit indirectly through applied AI research groups. So we don't need flashy demos to justify our existence or raise money.
2023-01-11 18:42:50 @OriolVinyalsML You can use me as much as you want, Oriol
2023-01-11 18:38:16 Deduct the amount from the presenting author's registration if the paper is accepted. Or use the money for travel grants or student discounts.
2023-01-11 18:36:31 @_amirbar Or student travel grants.
2023-01-11 18:32:56 @shelan Or $100. I don't know. I don't think the number matters much.
2023-01-11 18:31:05 @__delas__ No, because (1) they all have an internal reviewing system, (2) they have a reputation to maintain, (3) the scientists there aren't desperate to pad their resumé (I know you are joking)
2023-01-11 18:20:26 To the folks who still think that a tsunami of crappy LLM-generated papers will flood conference reviewing systems, here is a solution: charge $20 per submission.
2023-01-11 18:13:39 @zdeborova Most likely Master students who co-authored a paper or two. The problem with a fast growing field is that most people are junior and inexperienced. But yeah, that's not good.
2023-01-11 14:42:46 @pmddomingos Meta has FAIR. We're good. But thanks for your concerns.
2023-01-11 14:29:31 @_ambodi @wellingmax The decrease of fertility is probably one of the most POSITIVE things that are happening. It will cause the world population to stabilize instead of growing to unsustainable numbers.
2023-01-10 20:36:04 @KooroEski @kareem_carr In reproduction, the *result* clearly infringes on the copyright of the original. But copyright does not apply to things that differ significantly from the original.
2023-01-10 20:34:17 @quantumNoJutsu @kareem_carr Were portrait artists treated "more favorably" than cameras when photography was invented?
2023-01-10 20:32:05 @rahullak @kareem_carr A machine can produce 1000s of bowls per hour that cost pennies. A genuine handmade pottery bowl will cost you a small fortune, but people still buy them because they are unique.
2023-01-10 20:28:14 @neumarcx @kareem_carr Copyright infringement is (largely) determined by the similarity of the two pieces, not by the process through which they were produced. In fact, in software you can prove the absence of infringement if the team re-implementing a function has never seen the original source code.
2023-01-10 14:43:00 @Shredderroy @kareem_carr Not the same. Napster was actually colliding with existing interpretations of copyright laws. Generative AI doesn't.
2023-01-10 14:39:45 @saltig_ai @kareem_carr You mean, like photography hurt portrait artists? Like recorded music hurt performing musicians?
2023-01-10 13:24:23 @kareem_carr Artists can't stop other artists from being inspired by their style. They can't even prevent others from *copying* their style. There is zero law against that. Why should human and non-human artists be treated differently?
2023-01-10 13:16:55 @Hactar0 @boazbaraktcs Driving assistance systems don't need that level of reliability. But full autonomy, without a driver, does. That's why the former is widely deployed and the latter still experimental.
2023-01-10 13:14:46 @beenwrekt @boazbaraktcs Driving assistance systems (all based on ConvNets these days), are very widely deployed and even mandatory in Europe. Writing assistance is also widely deployed (they are pretty basic at this point). Both will progressively gain in reliability, functionality, and autonomy.
2023-01-09 23:32:47 This is the publishing model that the ML community has been slowly moving towards, led by ICLR. The ultimate state would be something like what I describe in this proposal: https://t.co/jMRHF8WH3a https://t.co/CgfSEaMAO4
2023-01-09 19:18:35 RT @gabrielpeyre: Reverse mode automatic differentiation computes the gradient with approximately the same cost as evaluating the function…
2023-01-09 19:05:59 @Roozbeh_Sanaei2 Haha! That is not wrong.
2023-01-09 18:51:58 @mhdamrollahi It emerged in 2018 that one such malicious dev was Aleksander Kogan, a psychologist at Cambridge U who developed an app for "research purposes." He breached the privacy clause in the contract and attempted to sell models and data to Cambridge Analytica... Gives you cold feet.
2023-01-09 18:46:17 @mhdamrollahi In 2010, FB launched the "Graph API" for developers to write apps. It was shut down in 2014 because it turned out to be difficult to guarantee the protection of private user data against malicious developers. 1/
2023-01-09 18:29:25 RT @alfcnz: The second video of the «Classification, an Energy Perspective» saga teaches backprop, visualises the energy landscape, and exp…
2023-01-09 14:33:17 @muskyblacksheep The software we built to train this neural net, called SN, was originally developed by Léon Bottou and me on Commodore Amiga 1000 in 1987/1988.
2023-01-09 14:31:17 @ScaleTechScott Indeed. It blew our minds, too.
2023-01-09 14:26:53 @mehdimer It was a clear lack of foresight on the part of Netravali and Lucent. AT&
2023-01-09 14:23:26 @mehdimer Neural net research at Bell Labs started around 1985. In 1996, Lucent spun off from AT&
2023-01-09 14:09:28 @datafoolYT Memorization doesn't work for arithmetic. Even less for mathematics. All the latest AI systems that can do math have explicit planning/search/reasoning abilities (unlike LLMs). As in https://t.co/5kv2OAG669
2023-01-09 14:01:18 @nathanbenaich Building a research org from scratch, as opposed to an acquisition, is harder but also cheaper and more efficient. You get an org that is more likely to have an impact on both the research community &
2023-01-09 13:54:26 @Kleinspaces Same at Facebook. It took a while for infrastructure groups in both companies to go from a "distributed database &
2023-01-09 13:50:42 @iPrabhavKaula AI R&
2023-01-09 13:48:24 @pmddomingos It's just chronology. Also, impact on the research community and on internal operations, which don't have as much visibility as flashy demos among the wider public.
2023-01-09 07:06:09 RT @HochreiterSepp: Completely agree. I advocate to make the associative memory explicit and use synaptic weights for other processing task…
2023-01-09 07:02:35 A) Google with Google Brain (2011) B) Facebook with FAIR (2013) https://t.co/sOmySqs0tZ
2023-01-09 06:38:15 Haha! The paper in question by @clmt et al. was rejected from CVPR 2011 &
2023-01-09 06:14:59 @rao2z Yes.
2023-01-09 06:04:27 RT @rasbt: Convolutional networks strike back, again. The fully convolutional ConvNeXt v2 extends the successful ConvNeXt architecture by a…
2023-01-08 22:53:14 @Grady_Booch No, they were not well founded, as I copiously argued with you. I'm not going into this argument again.
2023-01-08 22:51:42 @jdp23 @Grady_Booch Even if your assumption of moral bankruptcy were true, this would tell you that Meta *would* try hard to avoid causing harm *even* if it only cared about its reputation. But in fact, Meta's motivation for avoiding causing harm is that it cares very much about doing good.
2023-01-08 20:57:28 @Grady_Booch Precisely because the reason that causes large companies to avoid making their models available to the public is not that they may cause actual harm, but that people like you are quick to welcome them with ridiculous knee-jerk prophecies of doom.
2023-01-08 20:26:30 @j_u_le_s Many politicians can't surround themselves with good advisors. Because those advisors would have to be very stupid or very dishonest to agree with them. And then, many politicians have a very mis-calibrated moral compass...
2023-01-08 20:18:41 This demo was actually built in early 1989. We described it in this December 1989 Neural Comp paper: https://t.co/CIFrKk5Gdt The network had 9760 params and 65k connections. Image normalization was slower than the ConvNet! The DSP code was "compiled" into C from Lisp. https://t.co/IGqGI8Nf1K https://t.co/aIAD16sQ6N
2023-01-08 20:11:07 @Jousefm2 Actually, this demo is not from 1993 but from 1989. We described it in this paper published in Neural Computation in December 1989: https://t.co/CIFrKk5Gdt The network had 9760 params and 65k connections. The funny thing is that image normalization was slower than the ConvNet. https://t.co/ojadzgdICZ
2023-01-08 19:59:10 @mozyild @Jousefm2 It was not LeNet1. It was a smaller ancestor, similar to the one in the 1989 Neural Comp. paper below (strided conv, no pooling). The hardware: board with an AT&T DSP.
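The "strided conv, no pooling" design described in the tweets above can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch: the layer sizes and channel counts below are assumptions chosen for demonstration, not the actual 1989 architecture (which had 9760 parameters and 65k connections).

```python
import torch
import torch.nn as nn

# Minimal sketch of a LeNet-ancestor-style ConvNet that uses strided
# convolutions instead of pooling layers, in the spirit of the 1989
# Neural Computation paper. Layer shapes here are illustrative only.
class TinyStridedConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=5, stride=2),   # strided conv, no pooling
            nn.Tanh(),
            nn.Conv2d(4, 12, kernel_size=5, stride=2),  # strided conv, no pooling
            nn.Tanh(),
        )
        # 28x28 input -> 12x12 after conv1 -> 4x4 after conv2
        self.classifier = nn.Linear(12 * 4 * 4, num_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(1))

net = TinyStridedConvNet()
out = net(torch.zeros(1, 1, 28, 28))
print(out.shape)                                  # torch.Size([1, 10])
print(sum(p.numel() for p in net.parameters()))   # 3246
```

Striding by 2 subsamples the feature maps directly inside the convolution, which is why the original net needed no separate pooling stage.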
2023-01-08 19:35:39 @benoitfrenay @OpenAI It's deployed publicly, and they use the interactions to continuously improve the system through RLHF (originally proposed by DeepMind) and other methods.
2023-01-08 19:33:50 @sapanmrt It is that answering people's questions wrongly doesn't help anyone.
2023-01-08 19:33:01 @ShabaniYb @Google You realize that most of the ideas behind modern generative models actually came out of Google and FAIR, right? Both companies have LLMs. They just chose not to make them publicly available, though FAIR has made its models open source.
2023-01-08 17:19:56 RT @PessimistsArc: Before AI threatened homework: radio and jazz. https://t.co/uOPhu2QabB
2023-01-08 17:04:57 @DeepSpaceKaren Can it still be called knowledge if you can't apply it? A computer database has data but no knowledge.
2023-01-08 17:01:58 @pmddomingos Naturally.
2023-01-08 16:56:37 @CSProfKGD
2023-01-08 16:40:27 @egrefen @hardmaru 32-8-66-1-1-1... https://t.co/Gm64KaoOjf
2023-01-08 16:26:36 RT @DrJimFan: New work from @MetaAI: HyperReel. Looks like VR will get a new killer app: Capture videos with multiple cameras set up at di…
2023-01-08 08:34:49 It annoys me, too. https://t.co/eBb3eW54sy
2023-01-08 03:07:48 @Dvrffschtz @kareem_carr The physics is exactly the same.
2023-01-08 03:04:05 @deliprao @msamogh Yes, it's called self-supervised pre-training. It predates this particular paper by years. The Collobert-Weston system used contrastive SSL: training a deep net to distinguish well-formed text segments from segments where one word was substituted. https://t.co/XKEvmLsSJ5
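The contrastive objective mentioned above can be illustrated with a toy hinge ranking loss: score a well-formed text window higher than the same window with one word substituted. The function name, margin value, and example scores below are stand-ins for demonstration, not the exact Collobert-Weston formulation.

```python
# Toy sketch of a contrastive (ranking) SSL objective: a scorer (in the
# original work, a deep net) should rank a well-formed text segment
# above a corrupted one by at least a margin.
def ranking_loss(score_pos: float, score_neg: float, margin: float = 1.0) -> float:
    """Hinge ranking loss: zero when the well-formed segment's score
    beats the corrupted segment's score by at least `margin`."""
    return max(0.0, margin - score_pos + score_neg)

# A scorer that already separates the pair well incurs no loss:
print(ranking_loss(score_pos=2.5, score_neg=0.5))        # 0.0
# A scorer that ranks them poorly incurs a positive loss that
# would drive learning:
print(round(ranking_loss(score_pos=0.2, score_neg=0.4), 2))  # 1.2
```

No labels are needed: the "negative" segment is manufactured automatically by word substitution, which is what makes the objective self-supervised.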
2023-01-08 02:42:55 Quote from the article that is accurate: "Google, Meta and other tech giants have been reluctant to release generative technologies to the wider public because these systems often produce toxic content, including misinformation, hate speech [...]" https://t.co/vEbAMAi6TC
2023-01-07 23:56:28 In a (future) world in which everyone has access to personal AI assistants, human knowledge &
2023-01-07 23:41:56 @FoldMani @DrAlexanderShaw @slava__bobrov Pretty ridiculous strawman argument not deserving of a substantial response.
2023-01-07 21:08:44 @menomnon Everything animals and little children learn is learned without books. Some is learned through communication. In some species, some is learned by imitation. But most is learned by observation + trial & error.
2023-01-07 19:15:39 @koko_xu_ My office is in the 60 Fifth Avenue building, together with half of the CS department and the Center for Data Science. I only come to WWH to visit colleagues, attend seminars, or teach classes.
2023-01-07 19:01:10 The fact that learning almost all skills requires practical training and hands-on experience tells you that a huge chunk of high-level human knowledge cannot be acquired by reading text. And then, there is the mass of non-verbal knowledge that babies and animals acquire...
2023-01-07 18:53:36 @GeorgeSFrankl @sfmnemonic Your side believes conspiracy theories such as "the govt controls social media", "the election was stolen", and "Democrats run pedophile rings from a pizzeria". You want to have guns &
2023-01-07 17:16:51 @kaushikpatnaik @rao2z https://t.co/7ZgRtLIQWY
2023-01-07 17:01:13 @FoldMani @fchollet Neurons largely operate around their activation threshold and almost never in the saturation region. So yes, neurons behave more like ReLU than sigmoid. The confusion may come from the fact that electrophysiology experiments tend to saturate neurons with huge driving signals.
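The point above can be illustrated numerically: near threshold, a ReLU unit's response keeps growing linearly with its drive, while a sigmoid unit saturates under large driving signals. The specific input values are arbitrary, chosen only to show the contrast.

```python
import math

# ReLU: zero below threshold, linear above it.
def relu(x: float) -> float:
    return max(0.0, x)

# Sigmoid: saturates toward 1 for large inputs.
def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Doubling a strong input doubles the ReLU response...
print(relu(5.0), relu(10.0))    # 5.0 10.0
# ...but barely moves the already-saturated sigmoid:
print(round(sigmoid(5.0), 4), round(sigmoid(10.0), 4))  # 0.9933 1.0
```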
2023-01-07 16:55:20 @rao2z LLMs can't plan.
2023-01-07 16:54:25 I meant to write "The *limited* reasoning abilities...."
2023-01-07 16:43:50 One of the biggest problems in the world is rationality inequality. https://t.co/vAzjrdc9Qa
2023-01-07 16:42:33 The reasoning abilities of LLMs are partially compensated by their large associative memory capacity. They are a bit like students who have learned the material by rote but haven't really built deep mental models of the underlying reality.
2023-01-07 16:39:38 @johnmyleswhite Most people don't want to move, even within their own country, b/c of the proximity of friends and family, plus issues of language, culture, climate. So it's understandable for people to prefer the economic and political substrate of other places without wanting to move there.
2023-01-07 16:31:54 RT @sfmnemonic: Two years ago, a mob of would-be fascists tried to take democracy away from the USA. Please allow me the space to never for…
2023-01-07 09:20:05 @francoisfleuret @Michael_J_Black @CSProfKGD Evaluation for resource allocation does not reward scientists' intrinsic abilities but evaluates their actual or potential impact. Intrinsic abilities are just one component that contributes to overall impact. Building or using the best tools *amplifies* intrinsic abilities.
2023-01-07 09:02:50 @rob_harrington3 We have emojis
2023-01-07 01:34:58 I think I need to point out that I'm not *actually* Canadian Also, all of us were born in Europe But yeah, our conspiracy was funded in part by the Canadian Institute for Advanced Research https://t.co/BVcgj5WYNR
2023-01-07 00:01:11 RT @paul_scharre: Amid all the ChatGPT excitement, an excellent article by @ylecun &
2023-01-06 23:45:20 @SebastianSeung @fchollet The Eero Simoncellis and David Heeger of the world would argue that rectification and divisive normalization are also found in the brain. That certainly was an inspiration for me. https://t.co/lRHZ7Qi8AK
2023-01-06 23:39:42 @fchollet You are wrong about that. Real neurons largely operate in the ReLU regime. Multiplicative interactions ("attention"), separable convs (in space and time), and divisive normalization are all observed in biology. Many such ideas can be traced to discussions with neuroscientists
2023-01-06 22:55:12 @TrustInAutonomy Or a computer.
2023-01-06 22:50:25 @annargrs @Michael_J_Black @AllenHW0 @CSProfKGD No. The process of science relies too much on basic honesty on the part of scientists to tolerate any kind of dishonesty or unethical behavior.
2023-01-06 22:48:04 @docmilanfar @bneyshabur We are talking about evaluating scientific output here. This is not about evaluating scientists' intrinsic abilities (barred from using their favorite tools). This is different from evaluating students and whether they have assimilated some material.
2023-01-06 22:32:32 @Tom62589172 Prizes, grants, promotions, and compensations are determined according to impact (actual or potential), not to brilliance. Many people have received the Nobel for discoveries made by happenstance. Examples: x-ray, antibiotics, cosmic background radiation...
2023-01-06 22:26:22 @docmilanfar @bneyshabur Obviously. Honesty is a basic assumption here.
2023-01-06 22:13:15 RT @svpino: 11 ways ChatGPT saves me hours of work every day, and why you'll never outcompete those who use AI effectively. A list for tho…
2023-01-06 22:06:03 RT @franceintheus: You should treat yourself...and participate in a delicious French tradition! Across France today, folks are celebrating…
2023-01-06 21:54:21 @kareem_carr This is Searle's Chinese room argument. From Wikipedia: "The Chinese room argument is primarily an argument in the philosophy of mind, &
2023-01-06 21:44:40 RT @yanndubs: I played with @AnthropicAI assistant (AA) and compared it to @OpenAI ChatGPT TLDR: both are similar but AA is + Harder to j…
2023-01-06 21:30:35 This thread exposes a basic misunderstanding. Some believe that science evaluation comes down to evaluating scientists' intrinsic abilities, skills, or merit. For them, using AI tools is like "cheating". But science must solely evaluate *impact*. It's not a beauty contest. https://t.co/bH37LPTCZm
2023-01-06 21:20:01 @paulg @scienceisstrat1 @Noahpinion @erikbryn @DKThomp I think these numbers only include government funding of R&D.
2023-01-06 21:17:31 @Michael_J_Black @CSProfKGD You are thoroughly mistaken about the purpose of science evaluation. The purpose is not to evaluate researchers' intrinsic abilities, but to evaluate their actual or predicted *impact*. If they have impact using powerful tools (AI or not), more power to them.
2023-01-06 21:10:09 @data_prophet @AllenHW0 @Michael_J_Black @CSProfKGD I'm not saying that credit attribution is unimportant. It is, for the reason you mention. And proper citations are important for traceability. LLMs, like other tools (AI or not), *amplify* our abilities. Science does not evaluate people's intrinsic abilities, but their *impact*.
2023-01-06 21:02:01 @Michael_J_Black @AllenHW0 @CSProfKGD The system of evaluation in education will have to change. But again, I really don't see why the evaluation system in science would need to be changed. Its purpose is not to measure effort, merit, or skill, but *impact*. If you use powerful tools to have impact, not a problem.
2023-01-06 20:02:39 @_RobToews @gdb Their weak reasoning abilities are partially compensated by their large associative memory capacity. They are a bit like a student who has learned the material by rote but hasn't really built a good mental model of the material.
2023-01-06 19:59:54 @_RobToews @gdb The importance of language in thought and reasoning is a persistent illusion. The main reason LLMs make factual, logical, and physical reasoning mistakes is that language is their only source of knowledge. The other main reason is that they have very limited reasoning abilities...
2023-01-06 19:53:36 RT @scienceisstrat1: Why are the UK and Canada such laggards in R&D…
2023-01-06 07:22:31 @mi3fa5sol4mi2 @TonyZador @ayirpelle There could be a dissuasive rule: if you submit gibberish, not only does your paper get rejected, but you get banned from submitting again. It is much better to have rules about the content than about the process by which the content was produced.
2023-01-06 07:14:11 @ShaneAhumphrey @arthur_spirling That would be a good first step. Then reverse Citizens United and better regulate campaign finance. Make districting non-partisan (just use software). Reestablish something like the fairness doctrine in mainstream media.
2023-01-06 07:02:14 @mirzaomerbeg A chimp is tremendously more "thoughtful" than the largest of Large Language Models.
2023-01-06 06:56:20 @ShaneAhumphrey @arthur_spirling A plutocracy. Given the overwhelming power of money on the political process, and thanks to biased electoral rules, one party can manage to be in power while gathering a minority of votes &
2023-01-06 06:43:02 @rmcdaniel_ @solarbreeze69 Orangutans learn to build tree houses to sleep in. Like carpentry, that requires complex planning abilities. No language is involved. But a good mental model of the world is required.
2023-01-06 06:37:29 @AllenHW0 @Michael_J_Black @CSProfKGD I disagree. Science is not a beauty contest. The main objective is to produce new knowledge, new understanding, and new artifacts, by humans or not. Credit assignment (AKA counting points) is secondary and only important to the extent that it helps fulfill the main objective.
2023-01-06 06:23:47 @ShaneAhumphrey @arthur_spirling Yes.
2023-01-06 06:20:08 @Noahpinion Our son had one. He chewed off all my computer and audio cables. He quickly died of some sort of indigestion.
2023-01-06 03:47:37 @Noahpinion They chew on cables, too. Like mynocks, but cuter.
2023-01-06 03:45:00 RT @PessimistsArc: Schools: “We need to future proof kids” Also schools: https://t.co/w0EyxNkoBY
2023-01-06 03:37:52 The total number of years of life lost is probably less for covid than for the listed wars, since the average age of victims of covid is considerably higher than soldiers and average civilians. But still, the number of avoidable deaths from covid is staggering. https://t.co/Vw2Lj3wH5X
2023-01-06 03:32:28 RT @PessimistsArc: 1981 https://t.co/bS2v7jZwpC
2023-01-06 03:27:07 @arthur_spirling It is?
2023-01-06 03:24:16 @kareem_carr Your biochemical reactions are sentient.
2023-01-06 03:01:45 @Michael_J_Black @CSProfKGD Evaluating work done by humans is central to the style of education that rewards work instead of accomplishments. However, evaluating work done by humans has ABSOLUTELY NOTHING TO DO with scientific publication. Who/what produces scientific advances is completely irrelevant.
2023-01-06 02:34:26 RT @JamieDJS: One of the hardest challenges in developing AI for autonomous vehicles is evaluating the performance of our driving models.…
2023-01-06 02:22:19 RT @PessimistsArc: https://t.co/r1AYJoVVfq
2023-01-06 02:16:19 @kareem_carr There is a very, very long tradition in engineering of getting inspiration from biology.
2023-01-05 19:14:18 @LabYosi A considerably more important milestone would be a robot that can plan trajectories through complicated terrain with a skill level similar to a cat or a dog. No language required. But a good world model and planning abilities are very much required.
2023-01-05 19:09:31 @code_star I don't think reading any text can produce the same state of mind as listening to a jazz solo.
2023-01-05 19:06:41 @kooshiar That's an idea
2023-01-05 16:05:09 @danbri I met him once when he visited Paris in the mid 1980s. I agreed with many things he was saying at the time. He was actually arguing for more learning and neural nets and against logic-based AI and "expert systems" which was the fashionable style of AI at the time.
2023-01-05 16:01:59 @timkindberg Many have argued that language constitutes the very substrate of thoughts. They have argued that language is not only necessary, but inescapable and hence sufficient. But I think many forms of thought are entirely non verbal and rely on non-linguistic mental models of reality.
2023-01-05 14:57:58 Please accept this NFT representing a stolen subway token as a token of my sympathy. https://t.co/HNaL9M52cV
2023-01-05 14:45:24 @balazskegl @un1crom @kareem_carr But bird and airplane wings *do* serve the same function in essentially the same way. They both generate lift by being pushed through the air, even if the details are extremely different (no flapping, no feathers, simplified geometry control...)
2023-01-05 14:35:47 If language were sufficient to express human thought, why would we need visual arts, music, dance?
2023-01-05 14:24:19 @GaryMarcus @NaveenGRao Bird wings have many kinds of feathers, some of which are individually controllable. They rotate in the upstroke and interlock in the downstroke. Airplane wings don't have any of that. That doesn't mean they don't generate lift in essentially the same way as bird wings.
2023-01-05 07:36:32 @arthur_spirling Also, where is Ella Fitzgerald!
2023-01-05 07:21:42 Progress. https://t.co/xeAptcFIiG
2023-01-05 00:27:28 @johann_p @kareem_carr Actually, airplane wings and bird wings work pretty much exactly the same way: they generate lift by being pushed through the air. Obviously, the details are different. But the underlying principle is the same.
2023-01-04 20:10:26 @kareem_carr And this is not a wing? https://t.co/mkyu3n6N9N
2023-01-04 18:57:57 @fstflofscholars @FoldMani Tables don't have legs? Bladders are not bladders? Nails don't have heads? Airplanes don't have a nose and a tail? Surfboards don't have fins? ....
2023-01-04 18:45:23 @solarbreeze69 When I do math or carpentry, the knowledge I use is largely non verbal. It involves internal models of reality that owe almost nothing to language.
2023-01-04 18:43:09 @pbanavara The point is that these won't be LLMs as we know them today.
2023-01-04 18:42:15 @alexpalladini Everything a nine-month-old baby, a chimp, a dog, or a cat knows is non verbal. And that's a lot more than most people think.
2023-01-04 18:31:10 @ArYoMo No. See this: https://t.co/XK6SdxRGjy
2023-01-04 14:50:30 I've been making that point in all of my recent talks. Current LLMs are reactive, like Kahneman's "system 1". We need new architectures with the ability to reason, akin to "system 2". This is what I propose in this piece: https://t.co/7ZgRtLIQWY https://t.co/3NMiVKWjYy
2023-01-04 14:40:43 @mpshanahan Precisely.
2023-01-04 14:15:45 @vokaysh Pretty much everything we usually call common sense.
2023-01-04 13:51:19 RT @mattyglesias: Inequality is still higher today than it was in 1979, but the increase all happened before 2007 — since then inequality h…
2023-01-04 13:47:02 LLMs do *not* capture much of human thought, because most of human thought and all of animal thought is entirely non verbal. The factual, logical, and physical reasoning mistakes that current LLMs make clearly show that they have *not* captured much of human thought. https://t.co/mc0kXJWcBg
2023-01-04 13:36:21 @kareem_carr Oh, but they *do* have something to do with the brain. The whole idea of a complex function emerging from a network of simple elements whose connections are modified by learning, that's very much inspired by the brain. It's as if you said the term "airplane wing" was deceptive.
2023-01-04 13:27:17 @mmitchell_ai @yoavgo @icmlconf Such a restriction will hurt non-native English authors.
2023-01-04 01:13:07 @WtrmlnPanda I'm actually a professional smart ass.
2023-01-04 00:26:13 @DonutMooch Airplanes carrying thermonuclear bombs have been able to destroy civilization for about 60 years now. AI, however.....
2023-01-04 00:14:22 @boazbaraktcs Haha!
2023-01-04 00:09:27 @sudhirPyadav s/wetting/setting/
2023-01-03 23:59:33 @Abel_TorresM Logical fallacy. Where did I claim that all neural nets must be able to handle language? I just claimed that languages are produced by and for neural nets. Hence neural nets are uniquely suitable to analyze and generate language.
2023-01-03 21:06:29 @rasmusengholm It's a movie reference. Click on the link.
2023-01-03 20:58:02 @Raamana_ @tdietterich Why should this matter? People should use all the tools at their disposal.
2023-01-03 20:56:25 Next year in the ICML Ethics section: - all computation must be done by hand without the help of a computer. - figures must be drawn by hand on pen and paper. [...] - From this day on, the official language of ICML will be Swedish. https://t.co/sPBCMBPt0f
2023-01-03 20:44:13 @FoldMani You mean, like airplane wings? They work pretty well, and use the same basic principles as bird wings.
2023-01-03 20:38:10 @vagrantcow Thank you, your Bovine Honor.
2023-01-03 20:36:42 @rogerkmoore 1. many animals have "language". 2. whatever innate structure is special to human language can't be that complicated since (A) it appeared in the last few 100k years and (B) it is contained in a teeny-tiny portion of our genome.
2023-01-03 20:32:38 @FoldMani Whatever architecture you design, it can be specified in a few lines or pages of code. The result of training this architecture with SGD is a few billion weight values. While the former is necessary for the latter to emerge, most of the information content is in the latter.
2023-01-03 20:29:00 @kalpesh_ai Just like airplane wings aren't the same as bird wings. But they share essential features and work on the same basic principle.
2023-01-03 20:24:36 @sudhirPyadav But try wetting the weight of a CNN by hand. Or just try to design and build an image recognition system from scratch (no ML permitted). SGD is *much* better than human scientists and engineers at this.
2023-01-03 20:22:29 @Ankit85076055 No. 1st, they can find "deterministic laws". 2nd, they are quite good for design. 3rd, interpretability is not particularly useful for any of this and is certainly not the main cause of limitations.
2023-01-03 20:19:57 @yoavgo Those trained systems are good phenomenological models, but they are not explanatory models. That's fine by me, but insufficient for many critics.
2023-01-03 20:14:18 No, like, I'm just asking because, you know... spell checkers and predictive keyboards are language models.
2023-01-03 20:10:19 Ethics section: "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis." So medium-scale &
2023-01-03 19:50:21 Neural nets that analyze/generate text, speech, image, proteins don't tell us much about the structure thereof. But 1. they are very useful 2. they tell us that SGD is better at structure discovery than humans 3. they upset people who devoted their career to structure discovery
2023-01-03 19:45:43 @conor_muldoon Neural nets that analyze/generate text, speech, image, proteins don't tell us much about the structure thereof. But 1. they are very useful 2. they tell us gradient descent is better at structure discovery than humans 3. they upset pple who devoted careers to structure discovery
2023-01-02 23:30:17 @untitled01ipynb That's a *huge* chat. I'd say at least 400 billion parameters. Very wide, not so deep. He might use self-attention, but he is definitely not self-conscious.
2023-01-02 19:55:04 @larspensjo @gregorylent @ESYudkowsky @witbrock Well, that's exactly how it works today. The most obviously violating content is automatically suppressed before anyone sees it. A lot of remaining content is down-ranked, so that very few people see it. Some of that is flagged by users and examined manually.
2023-01-02 16:16:49 @gregorylent @ESYudkowsky @witbrock The rules are set by people. Content moderation systems are merely one component of the enforcement. Given the volume, relying solely on manual moderation is impractical. What would be your alternative solution?
2023-01-02 16:12:36 @DanielGuffey @witbrock You got it backwards. This technology is one of the things that is used to *protect* democracy around the world. AI is part of the solution here. Not the problem.
2023-01-02 16:10:05 @jobergum @witbrock Way better than anything else. But far from perfect. The fact that Self-Supervised pre-training can be applied to multilingual systems without requiring supervised annotation is *huge* progress for multilingual content moderation, translation, etc.
2023-01-02 16:06:39 @7SecularSermons @witbrock Haha, good joke. I'm saying at least Meta and Alphabet have been doing it.
2023-01-02 13:53:15 @witbrock Social networks that know what they are doing have been using large pre-trained transformers to do content moderation for several years now.
2023-01-01 17:36:15 Happy Birthday, Twentytwentythree.
2023-01-01 14:31:11 @rohitrango @DamienLasseur @MetaAI No intelligence is general. Human intelligence is very specialized. Hence, reaching human-level AI is a legitimate goal. But calling it AGI is a misnomer.
2023-01-01 14:27:25 RT @bitdribble: @ylecun's class on energy models: the Bayesian underpinnings. Connection to stat physics. When to diverge from Bayesian dog…
2023-01-01 14:05:19 RT @PessimistsArc: New Years Resolutions 1902 v. 2022 1902: - Read less novels - Stop riding the bicycle so much - Don't read in bed 20…
2023-01-01 04:23:48 @SelfSupervisedL Yes.
2023-01-01 00:34:51 @ToniKoqi @DamienLasseur @MetaAI No.
2022-12-31 22:22:45 RT @LorenaABarba: @CathyNDavidson @emilymbender LLMs are getting exponentially better: the pace of innovation is exciting! But let's rememb…
2022-12-31 22:20:54 @DamienLasseur @MetaAI Human intelligence is highly specialized. Which is why I don't like the phrase AGI.
2022-12-31 22:15:48 @DamienLasseur @MetaAI Yes and yes. Though I don't call it AGI. I prefer "Human Level AI".
2022-12-31 22:08:48 @mosicr @CsabaSzepesvari @nanjiang_cs Using world models for model predictive control goes back to the 1960s. Adapting world model parameters online goes back to the mid 1970s (e.g. the IDCOM procedure). These were developed by optimal control people and applied to hand-crafted models (not neural nets).
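The model-predictive-control loop described above can be sketched in a few lines: at each step, simulate short action sequences with a world model, apply the first action of the best sequence, then replan. Everything below is a toy illustration: the one-dimensional dynamics, the three-step horizon, and the cost function are invented for demonstration, and a learned model would replace the hand-crafted `world_model`.

```python
import itertools

# Toy world model: the action nudges the state. In learned MPC, a
# trained network would play this role.
def world_model(state: float, action: float) -> float:
    return state + action

# Receding-horizon planning: exhaustively search short action
# sequences, score each imagined rollout, return the first action
# of the best one.
def plan(state: float, goal: float, horizon: int = 3) -> float:
    actions = (-1.0, 0.0, 1.0)
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, cost = state, 0.0
        for a in seq:
            s = world_model(s, a)
            cost += abs(s - goal)   # running cost over the imagined rollout
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]              # execute only the first action, then replan

state, goal = 0.0, 4.0
for _ in range(6):
    state = world_model(state, plan(state, goal))
print(state)  # 4.0
```

The key property is that the same world model is reused for every task: only the cost function (here, distance to the goal) changes.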
2022-12-31 22:04:44 @ejmejm1 @CsabaSzepesvari @nanjiang_cs Exactly. But then most of the learning takes place while training the world model (it's not RL, it's SSL). Learning a task through RL on top of that is the proverbial cherry on the cake.
2022-12-31 21:59:29 Try this https://t.co/jMRHF8XeSI
2022-12-31 21:51:06 @danison1337 It's the story of everyone who has learned (the hard way) how to operate a social media platform. Everyone in the business knows this. Everyone knows that content moderation involves difficult tradeoffs. Except Elon.
2022-12-31 21:45:44 @chrisalbon Or this one from NIPS 1989. https://t.co/NWrfexbdcQ https://t.co/XHHcZPqVSX
2022-12-31 21:36:05 @kyo_takano Try this (http instead of https). Config problem with my server. Sorry. https://t.co/jMRHF8XeSI
2022-12-31 18:55:17 I'm saying this *knowing full well* that our current peer review process sucks. The solution to this is not to throw away the whole idea of publications but to redesign the reviewing process. I've made a proposal for this over 10 years ago. https://t.co/tZ9QeK0py4
2022-12-31 18:32:46 @lifemath_ @JeffDean @CatWorkers I did?
2022-12-31 18:27:58 @CsabaSzepesvari @nanjiang_cs The main point is that you simply can't train a world model efficiently using solely an environment-provided reward.
2022-12-31 18:23:51 @YiMaTweets @ericxing The number of papers is a reflection of the growth of the research community, not a consequence of lowered standards. Look, current review processes suck, but the solution isn't to make them more selective.
2022-12-31 17:32:21 @KordingLab It only works with French wine
2022-12-31 17:30:50 @ericxing @YiMaTweets Hard disagree. The only question worth asking about paper acceptance is: "is the research community better off with this paper than without?" NeurIPS &
2022-12-30 18:15:00 @CsabaSzepesvari @nanjiang_cs The server acts up once in a while.
2022-12-30 14:19:27 RT @wayve_ai: As we move into 2023, we’re reflecting on the last year for Wayve. 2022 started with a bang as we announced our $200million…
2022-12-29 23:08:13 RT @TheSequenceAI: Happy Holidays, friends! We've made a recap of the most popular courses for you. 1. Deep Learning by @ylecun, @alfcnz…
2022-12-29 14:44:21 @sabawalid No. You are telling them "unless you publish (or at least attempt to) we are going to doubt your results and your claims." The effect is to weed out hype, unsubstantiated claims, &
2022-12-29 14:36:23 @daidailoh I'm talking about scientists in industry, where the incentive to publish is not a given. In academia, you can't escape the publishing pressure. But quality matters way more than quantity. Predatory journals only exist because institutions are lazy in candidate evaluations.
2022-12-29 07:40:19 @JeffDean @CatWorkers Cat has trophy
2022-12-29 00:21:48 RT @alfcnz: Getting emails from the other side of the world keeps me motivated in doing what I’m doing. I’m making the difference not by pu…
2022-12-28 22:45:16 @dginev @togelius It's all about quality, not quantity.
2022-12-28 19:18:19 @DrCarlosToscano @s_d_mcintyre @Grady_Booch Publishing just for the sake of having one more publication on your CV doesn't get you very far in an industry research lab. That's why people in industry tend to work on topics that will have *some* impact if they succeed, even if it takes a long time.
2022-12-28 19:14:34 @AsafHaddad Yes. This is for companies that can afford an advanced research lab. There are very few of them.
2022-12-28 19:06:51 @ChrSzegedy @balazskegl It's about quality, i.e. about impact (actual or anticipated, intellectual or practical), certainly not about quantity.
2022-12-28 08:44:51 @dan_vaida @Grady_Booch At FAIR, publishing in this kind of venue would hurt you rather than help you.
2022-12-28 08:40:59 @JundeMorsenWu Impact can come in many different forms. Some arXiv papers do get hundreds of citations before they get presented at a conference or published in a journal. Some works have impact through open source packages or datasets rather than through traditional publications.
2022-12-28 08:36:44 @JordiCabot There are several stages: research started, preliminary results obtained, final results obtained, paper posted on arXiv, paper submitted, paper accepted, paper published, paper cited, getting awards, etc. Evaluation requires an anticipation of future impact, which is hard.
2022-12-28 08:32:31 @vipul_khatana_ It's about quality, not quantity. No self-respecting institution evaluates its scientists solely by the number of publications.
2022-12-28 08:30:01 @frankzliu Believe me, you don't acquire prestige by "following what's hot". You do it by making early contributions to an area *before* it gets hot.
2022-12-28 08:25:12 @caglar__aytekin The emphasis should be on quality, not on quantity. That means anticipating the potential impact of a piece of work, which is hard. That's why the management chain in a research lab must be composed of reputable scientists all the way up. Feedback from the research community helps.
2022-12-28 08:16:22 To be clear, my original tweet was about scientists in *industry*. Few companies promote publishing, some tolerate it, many forbid it. The role of publishing in academia is well established and not in question.
2022-12-28 07:29:13 RT @bkhmsi: Hi everyone! This year I am thankful for many things I started my AI residency at Facebook/Meta in Nov 2021 after rejectin…
2022-12-28 01:53:50 RT @an_open_mind: My chat with @l2k for the @weights_biases podcast about Large Language Models, @PyTorch, my experience at @Meta and @IBMW…
2022-12-27 22:47:42 @zazadob I very much agree with that. Which is why we encourage people to embark on ambitious projects and why we do not insist that they produce publications right away.
2022-12-27 22:44:24 @AmineHadjYoucef A necessary duty for the health of the community.
2022-12-27 22:43:51 @s_d_mcintyre @Grady_Booch I wouldn't want to work in a place that doesn't take measures against such behavior.
2022-12-27 22:41:45 @VergaraLautaro You need to tell them that their performance evaluation will be based in part on their publications.
2022-12-27 22:40:43 @SeerMarkets You got this backwards.
2022-12-27 22:11:02 @PKUWZP It's cheaper, but it doesn't work as well nor as fast. You have less influence on direction, progress is slower, and incentives are not necessarily aligned. And then, if the project succeeds, you still need to build in-house expertise to make it work and to make it useful.
2022-12-27 22:06:45 @cedapprox @xgabegottliebx If you tell scientists they cannot publish, you essentially kill their career.
2022-12-27 22:01:52 @JWonz There are many, many more "fake" results in the unpublished realm. Not necessarily because people are dishonest, but because they can be self-deluded and may not realize that there may be better solutions than the one they came up with.
2022-12-27 21:58:05 @xgabegottliebx Because when your work is going to be scrutinized by your peers, you put your career on the line. So you tend to use considerably better methodology. If you just need results that are convincing to yourself, your boss, &
2022-12-27 21:52:44 @voilatility Prestige is hugely important when you try to hire the best people. Academic institutions have understood this for centuries. And I'm not just talking about hiring within research labs. It has a large impact on hiring in the entire company.
2022-12-27 21:49:52 @mattvvolf Almost all research at FAIR is done with public datasets. How else would you be able to evaluate your methods in comparison to others?
2022-12-27 21:48:19 That's why at FAIR, we not only tell scientists to publish papers and open-source their code, we also use their publications as one component of their periodic evaluation.
2022-12-27 21:44:56 By telling scientists they must publish, you get: 1. higher-quality research, more reliable results, less self-delusion 2. better scientists whose reputation will flourish 3. easier external collaborations 4. better research evaluation 5. better internal impact 6. prestige
2022-12-27 21:39:15 @mattvvolf @mraginsky @Jake_Browning00 @NoemaMag No. The main achievement of AI so far has been to save many lives through Automatic Emergency Braking Systems in cars, medical image analysis, etc. Another major positive impact has been content moderation for online platforms.
2022-12-27 21:27:29 RT @mraginsky: This piece by @Jake_Browning00 and @ylecun in @NoemaMag echoes Manuel DeLanda's pithy observation that the main achievement…
2022-12-26 21:45:45 MoDem: accelerated Model-Based RL. The world model is a JEPA-like architecture, which makes predictions in representation space (without a decoder). From FAIR+UCSD https://t.co/GvocTXMCbv
2022-12-26 15:52:35 @CsabaSzepesvari @nanjiang_cs You mean, like this? https://t.co/7ZgRtLIQWY
2022-12-26 15:36:33 @grbradsk There are tons of packages for that. The most popular one is PixInsight. There is no DL there AFAICT, though it uses TensorFlow. I have my own Python script that uses various libraries for star detection, alignment, etc. No DL there either (yet) but it does use PyTorch.
2022-12-25 23:28:24 @untitled01ipynb @bnjasim Actually, a 4yo has an amazing understanding of physical reality and can plan complex actions. Even 18 month toddlers do, and they don't use language yet.
2022-12-25 23:24:00 RT @omarsar0: 2022: A Year in Review (ML Papers Edition) In this thread, let's take a look at some of the top trending ML papers of 2022 ↓…
2022-12-25 23:18:10 RT @tdietterich: What an excellent essay! Christmas gift to me: having time to catch up on articles like this
2022-12-25 14:51:07 @MaxencePastor Copy the Swiss: - create institutions modeled on EPFL/ETH - make them autonomous &
2022-12-25 14:38:09 The current crop of large language models are: https://t.co/Yck828DMUE
2022-12-25 11:18:51 @technotweet https://t.co/VTTZu7o7zR
2022-12-25 11:15:12 RT @manuel__carro: This very nice article by @ylecun and @Jake_Browning00 was published before ChatGPT was presented in society. It deserve…
2022-12-25 10:55:09 RT @egrefen: I just don’t get this attitude of saying something won’t work until you’re red in the face. Conversational search is a cool id…
2022-12-24 20:34:51 @FurkanHaney @wpdocu @tobias_rees Ad ranking *does* maximize clicks, because the more people click on ads, the smaller the number of ads that need to be displayed for a given amount of revenue. The more effective ad ranking is, the fewer ads need to be shown to people, and the more useful they find them.
2022-12-24 20:30:36 @FurkanHaney @wpdocu @tobias_rees But here is the thing, this is not how feed ranking works any more. In early 2018, FB overhauled ranking so as to favor "meaningful social interactions". Screen time and engagement tanked. So did the stock. But it was the Right Thing to do for the long term benefits to users.
2022-12-24 20:17:55 RT @wellingmax: Not very pleasant to watch, but we must not look away. Caught on Camera, Traced by Phone: The Russian Military Unit That Ki…
2022-12-24 19:25:46 @coding_era That was the case for me, before I moved to North America. There was also a tendency for US-based authors to primarily cite authors from North America and ignore authors from other parts of the world. But this has gotten a lot better over the last 2 or 3 decades.
2022-12-24 18:47:04 @bonadossou Explanatory FAIR blog post on GTN with link to an open source GTN library for PyTorch, by @awnihannun et al. https://t.co/awgldn2GME
2022-12-24 18:42:12 Interesting indeed. Lots of factors: Anglocentricity, publication culture, research funding levels, (non-)competitive funding models, (lack of) competition between institutions, research &
2022-12-24 18:32:19 @egrefen Second-hand Dunning?
2022-12-24 18:20:27 After reading this, how can I get frustrated when I want to do an astrophotography session and the clouds show up, the Moon or the City illuminate the sky, my telescope window fogs up, or a passing airplane or satellite ruins a shot. https://t.co/bed7humB0x
2022-12-24 17:12:49 @bonadossou Graph Transformer Networks in this paper refer to networks whose modules inputs and outputs are graphs, instead of tensors. They have some connection with graph neural nets, but little to do with transformer architectures.
2022-12-24 17:07:10 @jeroaranda @boazbaraktcs What we do is accelerate the progress of science and technology. I have no idea what this "academic ponzi" you are referring to could possibly be.
2022-12-24 17:04:09 RT @ElliotHershberg: It's dizzying keep up with BioML Not one, not two, but THREE exciting papers from the @MetaAI Protein Team before t…
2022-12-24 16:03:17 RT @ElliotHershberg: First, in a collaboration with @UWproteindesign and @sokrypton they asked the question: can protein language models ex…
2022-12-24 00:02:44 @boazbaraktcs By telling scientists they must publish, you get 1. higher-quality research, more reliable results, less self-delusion 2. better scientists (they're not afraid to join) 3. easier external collaborations 4. more accurate quality evaluation, better internal impact 5. prestige
2022-12-23 23:49:45 RT @DrHughHarvey: This paper is making the rounds, so I thought I'd do a thread. The headline is "AI can't pass the radiology board exam"…
2022-12-23 16:43:49 RT @drchristhorpe: Back on the advent calendar of helpful and inspirational people/organisations. This is yesterday's which I didn't post a…
2022-12-23 16:42:46 @JamMastaJeff @tobias_rees [Citation needed] Regardless of whether you think your car's Automatic Emergency Braking System is biased or imperfect, it reduces the chances of collision by 40%. These things are so efficient that the EU now requires them in every new car. Don't be misled by AI luddites.
2022-12-23 08:36:22 RT @HirofumiInaguma: The first paper since I joined FAIR is out! We propose a new speech-to-speech translation architecture, UnitY. UnitY t…
2022-12-23 07:55:33 @AnsDome I'm not going to say that you overestimate FAIR's long-term impact. But you do underestimate the positive impact of FB, Instagram, WhatsApp, and Messenger. Despite all the negative things that have been said about these free services, they are doing a lot of good in the world.
2022-12-23 07:43:45 @davide_lorino @tobias_rees Your statement applies to *all* corners of technological progress, not just AI. And AI has saved many lives and destroyed very few livelihoods.
2022-12-23 07:39:09 @wpdocu @tobias_rees Contrary examples? Like what? Yes, some authoritarian regimes use AI to spy on their population. But in liberal democracies?
2022-12-23 04:21:00 RT @salcandido: Two complementary preprints from our group today showing our LLM learns the design principles of proteins and generalizes b…
2022-12-23 04:20:51 RT @salcandido: Our new preprint on protein design with ESMFold: We were able to do a bunch of interesting designs out of the box, which ma…
2022-12-23 02:26:40 FAIR is famously open and will remain so. We initiated the trend of publishing early &
2022-12-23 01:29:55 RT @PyTorch: 3, 2, 1… Liftoff NASA JPL Chief Technology &
2022-12-23 01:18:04 RT @EricTopol: Now at a 25 year low, life expectancy in the US continues to fall. Covid persists as the 3rd leading cause of death and incr…
2022-12-23 01:13:20 @_Edward_Quince_ Technically, that would be InverseFolding@FAIR
2022-12-23 01:11:02 With the new protein design system from FAIR, proteins can be specified through a sort of programming language, as explained in this thread by @BrianHie https://t.co/8Z5RpPbIvj
2022-12-23 01:08:20 @tobias_rees Interesting for a technology whose most widely deployed applications are objectively beneficial: reducing car crashes, analyzing medical images, taking down child exploitation, terrorist propaganda &
2022-12-23 00:33:55 A thread by @TomSercu about how FAIR's new protein design system can produce proteins that are nothing like what is observed in nature. https://t.co/e1teBSzWoQ
2022-12-23 00:32:00 Two new papers on BioRxiv with *amazing* results on protein design/generation by the FAIR Protein Group. The system uses simulated annealing to find a sequence of amino-acids that folds in ways that match a desired shape or satisfy constraints (like symmetries). https://t.co/Rv2dAwMKsD
2022-12-22 22:01:48 OPT-IML https://t.co/O9XfDoDHq4
2022-12-22 21:59:44 @alfcnz Gnocco!
2022-12-22 21:35:58 RT @wcathcart: 2022 was our biggest year yet. Our team was so excited to build Communities, Reactions, Polls, Avatars, 32-person video call…
2022-12-22 00:00:05 @egecemkirci 2002 is the year I left AT&T
2022-12-21 23:46:49 Deep Learning won't replace radiologists any time soon, but it sure looks like it's helping them and their patients. https://t.co/QB141zS292
2022-12-21 18:17:03 At what level in this platform game will Elon give up? https://t.co/aznneWfnuV
2022-12-21 18:05:55 @LiorZMan That's largely true.
2022-12-21 18:02:19 @AnsDome FAIR is about half research scientists (RS) and half research engineers (RE). Almost all RSs have a PhD. Many REs have a PhD. The evaluation criteria are somewhat different. But in the end, everyone is doing research. Many papers' first authors are REs.
2022-12-21 13:25:37 @blambroll You're right about that.
2022-12-21 13:19:19 @inversetrs @01Core_Ben Practice may improve peripheral acuity a bit. But the wiring is such that resolution decreases geometrically with eccentricity outside the fovea.
2022-12-21 13:14:43 @Youness_ELM See if he lacked mathematical skills while he was an undergrad (he was already working on generative models). Hint: his degree is from the NYU Courant Institute which regroups CS and math. The math dept is ranked Number 1 in applied math in the US. https://t.co/JMzpgBQUJe
2022-12-21 07:30:01 RT @tydsh: Our follow-up of long-form story generation is out! Compared to our old one (Re3), the new one (DOC) uses hierarchical outliner…
2022-12-21 07:21:12 @drjwrae @francoisfleuret @srush_nlp Yup. https://t.co/zGQjALZDbT
2022-12-21 07:11:03 @kaanaksit I can't claim that I'm a specialist in human visual perception. But there is a bunch of folks at Meta Reality Labs who are. Like, seriously.
2022-12-21 06:58:56 @01Core_Ben It's lower for VR glasses because they track your eye gaze and only render at high resolution the small region of the image that your fovea is looking at.
2022-12-21 06:45:45 RT @Noahpinion: Twitter was always worse than Facebook. That's not a message that's going to be well-received among people who have chosen…
2022-12-21 06:30:44 The main author of DALL-E at OpenAI, Aditya Ramesh, has no graduate degree. He has a bachelor's from NYU. He worked on a couple of research projects in my lab in his last years. He wanted to do a PhD after graduating. But he did a summer internship at OpenAI, and they kept him. https://t.co/mqnhkbDAvK
2022-12-21 03:33:35 @arthur_spirling Just pick better audiences.
2022-12-21 03:21:16 @lisa44Yes Marketing BS for gamers who already blew $2000 on a graphic card?
2022-12-21 03:20:16 @adolph Perhaps, but there are only 8 billion pairs of eyeballs and 24 hours per day.
2022-12-21 03:16:33 @3DTOPO @tholford0 @jpt401 Well, but if you want to increase the sensory bandwidth, you may also have to increase the brain's processing power.
2022-12-21 01:57:16 @mherreshoff Actually, we do know where people are looking and we can predict where they will be looking.
2022-12-21 01:55:39 @tzaffi Yes, computers talking to computers.
2022-12-21 01:55:08 @Jai_Sharma Sure, but you only need to stream things that people actually want to see/read/hear/touch...
2022-12-21 01:53:51 @entangledQbit Audio, touch, and smell combined is less bandwidth than vision alone.
2022-12-21 01:52:06 @tholford0 @jpt401 Not anytime soon. Right now, the bandwidth of neural interfaces totally sucks.
2022-12-21 01:50:55 @ian__manchester Yeah, robots will want to watch Netflix at an accelerated rate. And not just vision, but audio, touch, and smell too. But that's within a factor of 2 of vision alone.
2022-12-21 01:43:57 @laurentduval Information consumed by humans.
2022-12-21 00:33:48 There is a hard ceiling to how much data will ever need to be streamed: retinal resolution at 120 frames per second, compressed, for each eyeball on the planet. The seemingly-exponential growth of streamed data is gonna turn into a sigmoid as we get closer to this ceiling.
2022-12-21 00:30:45 @joroy @laurent_bourges There is a hard ceiling to how much data will ever need to be streamed: retinal resolution at 120 frames per second, compressed, for each eyeball on the planet. Whatever "exponential growth" is gonna look more like a sigmoid as we get closer to this ceiling.
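A back-of-envelope sketch of this ceiling (every constant below is an illustrative assumption, not a figure from these posts):

```python
# Hypothetical ceiling on globally streamed data: every eyeball on the
# planet fed retina-grade video at 120 fps.  All constants are assumptions.
PEOPLE = 8e9            # people on the planet
EYES = 2                # eyeballs per person
PIXELS_PER_EYE = 33e6   # assume ~8K-equivalent effective resolution
FPS = 120               # frames per second, as in the post
BITS_PER_PIXEL = 0.05   # assume aggressive video compression

bits_per_second = PEOPLE * EYES * PIXELS_PER_EYE * FPS * BITS_PER_PIXEL
print(f"ceiling = {bits_per_second:.2e} bits/s")
```

Changing any assumed constant shifts the ceiling but not its existence, which is why growth would flatten into a sigmoid as demand approaches a bound of this order.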
2022-12-20 22:54:08 @lexfridman Trollers gonna troll.
2022-12-20 17:55:54 @X345__ @laurent_bourges That assertion is indeed *very* debatable.
2022-12-20 17:24:02 True. Google is in a much better position to bring the latest NLP tech to search than any LLM company (including OpenAI) is to build a search engine. And yes, Google has been doing it for years. Just as Facebook has been doing it for content ranking. https://t.co/QghhUqeMHo
2022-12-20 15:57:44 @arthur_spirling I feel the same way about his history of jazz.
2022-12-20 13:42:32 @joroy @laurent_bourges Laws, regulations, and ethics exist to try to align the incentives of individuals and corporations with society's interests. But in this particular case, the strong incentive to minimize energy costs is exactly aligned with the general interest.
2022-12-20 13:26:07 @NaveenGRao That's basically what a modern recommender system {should be, will be, already is}.
2022-12-20 13:16:53 RT @Gregdt1: When people talk about the energy transition, some imply that it could have more negative consequences than pos…
2022-12-20 02:53:15 @srush_nlp Guessing: you may not need attention, but you probably need multiplicative interactions of some sort.
2022-12-20 02:37:57 RT @schrep: Unpopular opinion. Self-driving will progress rapidly in the next few years. https://t.co/9FvsG3OSxy
2022-12-20 02:35:51 @docmilanfar The original ConvNet paper (Neural Comp 1989) was cited 12,000 times, almost all in the last 5 years, and the number of citations/year has plateaued and seems to be going down. So it seems that, although ResNet is not yet taken for granted, ConvNet is. And so is backprop.
2022-12-19 22:25:41 @DrawGPT Appropriately in NYU purple.
2022-12-19 17:47:12 @DrawGPT I have more hair.
2022-12-19 15:17:09 @laurent_bourges Give us peer-reviewed publications with numbers to back them up. Then we can discuss what measures to take. But if those measures involve breaking net neutrality and analyzing content to distinguish the useful from the frivolous, your chances of success are zero.
2022-12-19 15:10:07 @laurent_bourges Serious studies show that the carbon footprint of digital technology is plateauing, if not declining. More important: digital technology makes the economy *more* efficient overall in terms of energy per € of GDP. Video calls and telepresence reduce travel.
2022-12-19 15:06:27 @laurent_bourges 1. This is my personal account &
2022-12-19 14:41:57 RT @EricTopol: Rapid sequencing of a person's blood or relevant body fluid can determine the root cause of a serious infection. Why isn't t…
2022-12-19 01:18:37 I wouldn't want the job. https://t.co/v82dopoyIh
2022-12-18 22:17:25 @MonniauxD @mart1oeil @jm_desp The Metaverse will make telepresence possible. Net result: fewer trips and less travel.
2022-12-18 22:13:57 @Vanadiel_78 @alex_conneau Germany has a high level of renewables but is still one of the worst of the EU in terms of CO2 emissions per kWh (though still much better than Poland). Why? Because they abandoned their nuclear program and use coal. Renewables are insufficient because they are intermittent.
2022-12-18 20:28:46 @jm_desp @mart1oeil That is absolutely correct. The purely economic incentive to reduce power consumption and power dissipation (which requires cooling) is *enormous*.
2022-12-18 19:50:42 @Kryptomangane It's best to avoid attributing bad intentions. Better to stick to facts and actions. The facts and actions do not favor the French anti-nuclear environmental movement.
2022-12-18 19:23:56 @MarcLeobet @theShiftPR0JECT For a study of the energy impact, present &
2022-12-18 19:13:25 @mol_tagine That's not happening anytime soon.
2022-12-18 19:12:11 @alex_conneau The alliance between the Socialist party and the Green party in the late 90s (La Gauche Plurielle) was a disaster for the energy policy. But again, this was long before Macron entered the scene. Macron is a pragmatist centrist who changes his mind when presented with evidence.
2022-12-18 19:08:04 @Durbangash @alex_conneau I'm not a fan of the excess of bureaucracy in France (and Macron certainly didn't start that), but you should see the Byzantine immigration policies in the US.....
2022-12-18 19:05:50 @mart1oeil I conclude that you can't read English.
2022-12-18 19:01:33 And the number keeps falling thanks to advances in technology.
2022-12-18 19:00:49 Keeping in mind that 40g of CO2 per hour of streaming is probably a gross overestimate. Recent estimates are on the order of 1 to 2g per hour of streaming.
2022-12-18 18:56:38 @and_joy_ Liberty, Equality, Mbappé
2022-12-18 18:51:29 @arthur_spirling Your estimate was fishy.
2022-12-18 18:49:55 @alex_conneau The decision to freeze the nuclear program in France was made long before Macron. It started with the Jospin govt in 1997. Then, with Fukushima in 2011, Italy &
2022-12-18 17:33:13 @freddy_x Uh, no. It's mean to mock people for being religious. But it's perfectly fine to question doctrines, religious or otherwise, especially when they are self-contradictory and lack any rational basis.
2022-12-18 17:15:45 @rmarcilhoo @dekoderpolsatu I did, until I moved to Bell Labs
2022-12-18 17:03:47 @mart1oeil https://t.co/O89XYa0bHL
2022-12-18 16:51:11 @trevisev @irukanji_invest Or even 1.3g, according to the latest estimates.
2022-12-18 16:46:27 @corsicantrader One shouldn't generalize. The CNRS is a very large and very diverse organization. Many excellent researchers. And others ....
2022-12-18 16:38:58 @Nicolas99848452 This is not an official position of the CNRS, only an opinion piece by researchers in the EcoInfo group.
2022-12-18 14:14:01 Carbon footprint of one hour of streaming video: Shift Project (2019): 3200g IEA (2020): 40g Shift Project (2020): 400g ("oops, we mixed up bits and bytes") ADEME (2022): 27g IEA (2022): 1.3g This counts only operation, not the manufacturing of the equipment. https://t.co/Az4p0evfsP
2022-12-18 05:33:49 @Ixnay92 And we're talking about avoiding one long-haul flight every 57 years. The French average is well above that: 11 flights on average over a lifetime, i.e. one every 8 years. https://t.co/DK88SYZ8oe
2022-12-18 05:26:49 @FabriceDebry In France, we are particularly well off, with 70% nuclear power that emits no CO2.
2022-12-18 05:25:22 @Ixnay92 OK then, cut your car travel by 1200 km/year, i.e. 3.3 km per day. Walk or ride a bike, even an electric-assist bike, for a few km a day.
2022-12-16 22:41:13 Generative art will kill artists just as much as recorded music killed musicians. https://t.co/QVqqnJ3dgf
2022-12-16 20:41:10 @gabrielpeyre I want Schrodinger's equation!
2022-12-16 18:15:15 @j2bryson @GPAI_PMIA PAI is well funded.
2022-12-16 18:04:54 @arthur_spirling I can provide you with a proper French roast on demand.
2022-12-16 13:21:37 @j2bryson I do think AI ethics &
2022-12-16 12:26:40 @AustenLamacraft @adad8m The Helmholtz free energy is the Legendre transform of the energy.
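A quick sketch of that statement (standard thermodynamics at fixed volume and particle number, not spelled out in the thread): with internal energy $U(S)$,

```latex
F(T) \;=\; \min_{S}\,\bigl[\,U(S) - T S\,\bigr],
\qquad \text{with the minimum attained where } T = \frac{\partial U}{\partial S},
```

so $F = U - TS$ is, up to sign conventions, the Legendre transform of $U$ with respect to entropy, trading $S$ for its conjugate variable $T$.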
2022-12-16 12:18:16 @ducha_aiki @kornia_foss I'm actually playing with Kornia for some astrophotography stuff.
2022-12-16 02:01:33 Cool. https://t.co/ZkNlPmRWM5
2022-12-16 00:36:58 @ramysadek You are not wrong, sadly.
2022-12-16 00:16:47 @mkearnsupenn https://t.co/50VaJK01xh
2022-12-16 00:14:36 @ducha_aiki Indeed.
2022-12-15 21:00:59 Just a reminder that doing min pooling (or max pooling) with an additive (bias) kernel f(y) [aka "convolution" within the (min,+) semi-ring operators] is like performing addition in Legendre transform space. https://t.co/298ee2FWho
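A minimal numerical sketch of this identity (my own toy example, not from the post): the (min,+) "convolution" of two convex functions has a Legendre transform equal to the sum of their Legendre transforms, which is the sense in which min pooling with an additive bias kernel performs addition in Legendre space.

```python
def minplus_conv(f, g):
    # (min,+) "convolution": h[x] = min_y f[y] + g[x - y]
    n = len(f) + len(g) - 1
    return [min(f[y] + g[x - y]
                for y in range(len(f)) if 0 <= x - y < len(g))
            for x in range(n)]

def legendre(f, slopes):
    # Discrete Legendre transform: L(f)(p) = max_x (p*x - f[x])
    return [max(p * x - fx for x, fx in enumerate(f)) for p in slopes]

f = [(x - 3) ** 2 for x in range(8)]       # two convex toy "feature maps"
g = [2 * (x - 5) ** 2 for x in range(8)]
slopes = range(-4, 5)

lhs = legendre(minplus_conv(f, g), slopes)                  # transform of the min-pooled sum
rhs = [a + b for a, b in zip(legendre(f, slopes), legendre(g, slopes))]
assert lhs == rhs   # addition in Legendre-transform space
```

The identity is exact on finite grids: the max over x of p·x − h(x) factors into independent maxima over the two summands.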
2022-12-15 14:33:50 @francoisfleuret I can't multiply the GPUs. Well, except at FAIR.
2022-12-15 14:27:45 @sophiabennett No, they don't! They stamp down controversy. More so since early 2018. Independent academic social science studies are actually very divided on the effect of social media on political polarization and other dysfunctions. Here is an annotated bibliography: https://t.co/KMG76c5mH2
2022-12-15 14:01:42 @rahul_tiwari95 That's a superpower!
2022-12-15 13:58:53 @sophiabennett 1st, the "algorithms" actually down-rank or take down content that violates content policies, like hatred. 2nd, they do favor connection and community. 3rd, this is handled by several large orgs (Integrity, Responsible AI...) that I'm not involved in. I do fundamental AI research.
2022-12-15 04:14:37 RT @csfacultyjobs: Department Chair and Professor, Computer Science &
2022-12-15 04:06:58 @PhilBeaudoin Well, at least I can do *some* good
2022-12-15 04:05:22 @traderyau FB is trying hard to remain neutral. It has clear policies against hateful and violent content. But people hate FB for not taking down every piece they disagree with or for taking down hateful or violent pieces they agree with. Those people are from both extremes of many issues.
2022-12-15 03:58:01 @Ciqax Well, I'm a professor. So my profession totally extends to making humans smarter!
2022-12-15 03:48:26 Some people mistakenly claim that I have superpowers. Like stopping people from killing each other, or something. Then they accuse me of moral bankruptcy for not using those superpowers. Twitter is such a strange place!
2022-12-15 03:37:07 https://t.co/dDmAn7QOHD
2022-12-14 14:22:58 @Jcole75Cole [Reference needed]
2022-12-14 14:20:15 @mierrashid No. I'm a classic liberal. Any topic or problem should be discussed rationally and approached pragmatically. Hence, I'm critical of anti-liberal forms of extreme wokeism. Anti-liberalism from the Left or the Right *is* a threat. But I don't think wokeism is an existential threat.
2022-12-14 14:11:45 @sphoebs He was right about electric cars and reusable rockets. But he was dead wrong about AI. He is unfathomably wrong about politics. And ridiculously naïve about how to run social networks.
2022-12-14 14:08:52 @1LegendaryMan We know that superhuman AI is possible. But: - it's not around the corner. We are still missing essential concepts. - it's not the kind of existential threat Elon should be worried about. Wokeism is not an existential threat either.
2022-12-14 08:08:37 Seven years ago, dude was telling us that superintelligent AI was going to destroy us. And now it's what? wokeism? Sounds like another example of moral panic for https://t.co/Q4LwhiSayI https://t.co/jjCsuCSoNn
2022-12-14 00:10:34 @SelfSupervisedL @alfcnz That's pretty good already!
2022-12-13 22:59:44 @jasperschwenzow @Analyticsindiam It almost certainly harms my reputation, but it may help the field as a whole.
2022-12-13 20:39:21 RT @MetaAI: Announcing data2vec 2.0, a new general self-supervised algorithm built by Meta AI for speech, vision &
2022-12-13 20:12:19 An account of some AI drama in @Analyticsindiam , with a section about my recent position paper. I think no one is an angel nor a demon. https://t.co/zdzIp0dqAX
2022-12-13 20:03:29 More progress in "universal" Self-Supervised Learning from FAIR with Data2Vec 2.0. It uses a Joint Embedding Predictive Architecture with EMA weight sharing. One branch sees the entire input. Other branches see masked inputs &
2022-12-13 14:14:48 @farhanhubble https://t.co/rVy3oV6wx4
2022-12-13 14:12:00 @malinthafe Scrutiny? Yes. Politically-motivated, crazy-ass conspiracy theories? No. There is plenty of scrutiny, given that the NIH budget is passed by Congress and that the director is a political appointee.
2022-12-13 14:02:41 @akdetrick Your statement is nothing more than prejudiced slander. https://t.co/u57xKKMhr1 https://t.co/8th91VUBs3
2022-12-13 13:35:55 @ask4amitkumar @costplusdrugs You don't need a "race between billionaires." You need a non-corrupt political system that elects politicians who care about people, not about the profits of corporate campaign donors. You know, like in Europe, where drugs are way cheaper than in the US (including American drugs!)
2022-12-13 13:30:31 @malinthafe Most biomedical research in the US is actually funded by NIH, which has a much larger budget than NSF. Private research funding from foundations like CZI complement that.
2022-12-13 13:11:37 @StanDehaene My condolences, Stan.
2022-12-13 03:02:35 @debadeepta Well, you do robotics, so yeah.
2022-12-13 02:59:04 @SetcoverB @Schwebebahnfahr @JohaPrime Then again, in a forward Euler universe, you get energy for free.
2022-12-12 23:36:30 @callistasgraves @jeffjarvis You may not realize that FB does a lot of good in the world. Billions of people are on FB to connect with distant family members, friends, fellow support group members, etc. Tens of millions of people run their business on FB. The entire economy of some countries runs on FB.
2022-12-12 21:07:28 I discovered this when I wrote my first "Space War" game in 1980. Unless you update the velocity before the position, you get unstable orbits. I had no idea this was called symplectic Euler. https://t.co/DOZtEtlKl6
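A small sketch of the effect described here, on a toy inverse-square orbit (my own constants; the velocity-first variant is symplectic, a.k.a. semi-implicit, Euler):

```python
import math

def step(x, y, vx, vy, dt, symplectic):
    # Acceleration from an inverse-square central force (unit GM).
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -x / r3, -y / r3
    if symplectic:
        vx += ax * dt; vy += ay * dt   # velocity first...
        x += vx * dt;  y += vy * dt    # ...then position, using the *new* velocity
    else:
        x += vx * dt;  y += vy * dt    # position first, using the *old* velocity
        vx += ax * dt; vy += ay * dt
    return x, y, vx, vy

def energy_drift(symplectic, steps=20000, dt=0.01):
    # Circular-orbit initial conditions; returns |E_final - E_initial|.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    e0 = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
    for _ in range(steps):
        x, y, vx, vy = step(x, y, vx, vy, dt, symplectic)
    return abs(0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y) - e0)

print(energy_drift(True), energy_drift(False))  # symplectic drifts far less
```

Forward Euler injects energy every step, so the orbit spirals outward; the velocity-first update keeps the energy error bounded, which is why the 1980 Space War orbits only became stable with that ordering.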
2022-12-12 21:03:16 RT @gabrielpeyre: The Laplacian of a graph is a semi-definite positive operator which mimics second order differences along the graph’s edg…
2022-12-12 20:51:27 @chrismoya86 @patrickmesana Possible? yes. Easy? no.
2022-12-12 20:44:25 @rayanhtt Reward is not enough. That's why we don't train classifiers by telling them if their output is right or wrong. We train them with a *differentiable* surrogate loss that approximates classification error and uses the desired output.
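A tiny illustration of this point (a hypothetical logistic-regression sketch, not any classifier discussed here): the 0-1 error gives no useful gradient, so training descends a differentiable surrogate (cross-entropy) that uses the desired output directly.

```python
import math, random

random.seed(0)
# Toy 1-D data: label is 1 iff x > 0 (linearly separable by construction).
data = [(x, 1 if x > 0 else 0) for x in (random.uniform(-2, 2) for _ in range(200))]

w, b = 0.0, 0.0
for _ in range(500):                          # SGD on the surrogate loss
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
        # Gradient of the cross-entropy -[y log p + (1-y) log(1-p)] wrt (w, b):
        w -= 0.1 * (p - y) * x
        b -= 0.1 * (p - y)

# The *non-differentiable* 0-1 error we actually care about:
errors = sum(((w * x + b) > 0) != (y == 1) for x, y in data)
```

Minimizing the surrogate drives the classification error down as a side effect; a bare right/wrong reward signal provides no such gradient to descend.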
2022-12-12 19:00:49 @yoavgo @rasbt Oh yes!
2022-12-12 18:47:45 @rasbt The performance of large transformers pre-trained to fill gaps (masked AE, itself a special case of denoising AE, itself a special case of contrastive SSL) has certainly been surprising to most people, including me.
2022-12-12 18:43:25 @PandaAshwinee @bob_burrough NIPS 2016, actually. But I started showing that slide in January 2016.
2022-12-12 18:26:37 @SnarkyPixel_ There have been a *lot* of changes over the years. That's how one learns how to run a social network. Things you try sometimes have unpredictable negative side effects. You go back to the drawing board and try again. Elon has been unbelievably naïve about this, as with many things.
2022-12-11 14:30:27 @egrefen I'm a part-time New Yorker and 1/4 Alsatian (through my maternal grandmother), so I'm trapped in a sort of reflective self-trolling.
2022-12-11 14:25:03 @OriolVinyalsML @RandomlyWalking Question: are LLM-powered trolls silicious?
2022-12-11 14:17:24 RT @tydsh: @artistexyz @ylecun @MirowskiPiotr We actually have a system Re3 in EMNLP'22 that can write long-form consistent stories (up to…
2022-12-11 14:16:28 @relnox It's fine for people to tag me. But we're talking about troll-y people who try to provoke me. I do not engage nor respond. But they still claim they're having a debate with me.
2022-12-11 14:11:13 @RandomlyWalking @OriolVinyalsML But droll trolls are nutritious.
2022-12-11 14:03:39 @Youness_ELM That could be fun
2022-12-11 13:59:50 @roydanroy @ESYudkowsky Good thing that a Dirac is a convenient concept that doesn't actually exist.
2022-12-11 06:34:39 RT @tobias_rees: honest question: Was there a single AI product release in 2022 that was genuinely celebrated and welcome by journalists?…
2022-12-11 06:09:42 RT @MLStreetTalk: https://t.co/mzElsDZ55s
2022-12-11 06:08:21 RT @MLStreetTalk: We spoke with @ylecun last week at #NeurIPS2022 and discussed some of the exciting work @MetaAI is publishing this year,…
2022-12-11 03:16:30 @aakash_rewari Yes, as an executive in the largest social network company, I need to learn how social networks work
2022-12-10 21:44:28 @arthur_spirling They should totally investigate the ball for obstinately refusing to get into the French goal.
2022-12-10 21:41:25 The nice thing about using generative language models for playwriting is that "making sh*t up" is actually a feature, not a bug. Congrats @MirowskiPiotr and team! https://t.co/ViuLlS5T10
2022-12-10 21:34:55 @OriolVinyalsML OK. But eating them for breakfast is fine, right?
2022-12-10 21:30:13 @relnox Well, ArXiv + social networks and OpenReview have changed that.
2022-12-10 21:21:06 @Arian_Khorasani I promise you I had absolutely nothing to do with that. Come to think of it, that's also true of a lot of things that people praise me or blame me for!
2022-12-10 21:17:11 @relnox You mean, like this one? https://t.co/7ZgRtM0rOw
2022-12-10 21:14:53 @roydanroy @ESYudkowsky The point is that they do get corrected quickly.
2022-12-10 21:02:34 Tired: Twitter drama. Wired: France-England football drama.
2022-12-10 20:58:03 Why would people I never directly engage with think they are "having a debate" with me merely because they tag me?
2022-12-10 20:24:51 RT @scienceisstrat1: The @IEA’s bombshell new report on renewables has incredibly good news. For example, solar is undergoing a mega boom…
2022-12-10 14:11:36 @Britonomist @ben_golub Why equate "machine generated" with "fake"?
2022-12-09 23:14:30 RT @davidchalmers42: abstract submission for talks and posters at #ASSC26 in NYC are now open, with deadline feb 15. symposia and tutorials…
2022-12-09 21:35:51 @ai1nrl @GaryMarcus @ZDNET @kenneth0stanley Not sure. My undergrad is in electrical engineering. My PhD curriculum did not include any of the stuff that card-carrying computer scientists are supposed to know. So I'm not sure I qualify as a computer scientist
2022-12-09 21:31:36 @FoldMani @GaryMarcus @ZDNET @kenneth0stanley Let's say the credentials statement is: "generated by a large language model trained on the neurology literature"
2022-12-09 02:48:14 @primrecur @GaryMarcus @ZDNET @kenneth0stanley Stochastic Gradient Descent
2022-12-09 01:56:34 Curiouser and curiouser: Said psychologist says on LinkedIn that his Twitter account was hacked 4h ago and that he is locked out of Twitter, suspecting a failure of 2FA or a Twitter inside job.
2022-12-09 01:50:01 @vayuvegula @GaryMarcus No idea. But the suggestion that I have anything to do with it is ridiculous. And his suggestion that it was some sort of Twitter inside job is squarely in conspiracy theory territory.
2022-12-09 01:34:11 @grbradsk Haha! Everyone needs a little bit of Gary in their threads.
2022-12-09 01:27:01 Err https://t.co/9NoM8Xhaop I mean.
2022-12-09 01:26:43 @Money17251696 Oh, that's just because it's actually https://t.co/9NoM8Xhaop
2022-12-09 00:07:35 RT @DKThomp: In 2022, we - reversed organ death in pigs - made the first embryo from stem cells - made a pan-influenza vaccine - saw the b…
2022-12-08 23:57:15 One thing https://t.co/4qtFrzcULW could help with. https://t.co/Pzz9RcRN4w
2022-12-08 23:47:26 @GaryMarcus @ZDNET @kenneth0stanley If I wrote a book entitled "Rebooting Neurology" in which I explained how neurology has "run into a wall", I would not expect it to be taken seriously by the neurology community.
2022-12-08 23:18:21 @roydanroy @ESYudkowsky That one was fixed years before the CA scandal surfaced. The FB Social Graph API was shut down precisely because of privacy concerns. Turns out developers, and even academics like Aleksandr Kogan, could not be trusted to not breach their contract and misuse user data.
2022-12-08 22:30:08 @notSoJunkDNA @GaryMarcus @ZDNET @kenneth0stanley Your inference is correct, despite the incorrect causality assumption.
2022-12-08 22:27:21 @ubiquity75 These problems are never just "solved", because (1) they evolve all the time &
2022-12-08 22:09:34 @blamblamtheman @ESYudkowsky That only happens in countries that allow their political process to be corrupted by money. Like the United States of America.
2022-12-08 22:07:10 @roydanroy @ESYudkowsky When you find problems, you fix them. I'm not entirely sure which "debacle" you are referring to, but if it's attempts by Russia and others to corrupt the electoral process, this was quickly fixed to avoid a repeat during the French and German elections a few months later.
2022-12-08 22:02:54 @joe_shabadoo You might be on to something here
2022-12-08 21:56:43 @GaryMarcus @ZDNET @kenneth0stanley I co-authored papers in - Physical Review Letters. But that doesn't make me a physicist. - Noema. I'm not a philosopher. - Cell. No biologist. - SIAM. No mathematician. - Genome biology. No geneticist. - NBER. No economist. - Clinical Neurophysiology &
2022-12-08 21:27:48 @Namenode5 It's just Twitter. You can have nice things on Facebook and LinkedIn.
2022-12-08 18:54:12 Weirdest Twitter Drama of the Day: CS/AI credentials of MIT AI researcher &
2022-12-08 13:40:41 @ESYudkowsky We make laws and regulations for corporations (call it reward shaping), which are organized to have superhuman collective intelligence.
2022-12-08 13:23:01 RT @ValaAfshar: In 1983, two professors debated the future relevance of home computers. Both experts shared valid talking points
2022-11-13 20:28:46 @mmbronstein @b_p_chamberlain @ElonActual @ryan_p_adams @hugo_larochelle @clmt Indeed.
2022-11-13 16:13:56 @b_p_chamberlain @ElonActual I suppose things changed for the better when @mmbronstein was acquihired. But that's a relatively small outfit. There were previous failed attempts to convince Twitter to have a meaningful AI research effort, e.g. by @ryan_p_adams @hugo_larochelle @clmt and others.
2022-11-13 16:07:16 @georgebdavis @pandaym The neuroscience was Hubel &
2022-11-13 15:57:30 @georgebdavis @pandaym ConvNets were inspired by both neuroscience and signal processing.
2022-11-13 15:08:00 RT @paulkrugman: Can Ron DeSantis effectively challenge Trump? I have no idea. But one thing I hope doesn't get forgotten in the horse-race…
2022-11-12 22:08:00 @GaryMarcus @KordingLab @yudapearl @scac1041 @PhilDawid @pmddomingos @StephenPiment @AlexTensor @gopnik @stephensenn Are you asking whether transformers constitute "an innate apparatus for symbol-manipulating operations over variables" simply because they are equivariant to permutations?
2022-11-12 21:59:50 @pmddomingos @KordingLab @yudapearl @scac1041 @PhilDawid @GaryMarcus @StephenPiment @AlexTensor @gopnik @stephensenn To make an equivariant mapping invariant, you need to add some sort of invariant aggregation operation (AKA pooling).
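The invariance-via-pooling point above can be sketched numerically. This is an illustrative example of my own, not code from the thread, and `circ_conv` is a hypothetical helper: a circular 1-D convolution is shift-equivariant, and composing it with a global max, an invariant aggregation, yields a shift-invariant map.

```python
import numpy as np

def circ_conv(x, w):
    # Circular cross-correlation: out[i] = sum_k w[k] * x[(i+k) mod n].
    # This map is shift-equivariant: shifting x shifts the output identically.
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = rng.normal(size=3)
s = 3  # amount of circular shift

# Equivariance: conv(shift(x)) == shift(conv(x)).
assert np.allclose(circ_conv(np.roll(x, s), w), np.roll(circ_conv(x, w), s))

# Adding an invariant aggregation (global max-pooling) makes the whole
# mapping shift-invariant: the pooled value ignores the shift entirely.
assert np.isclose(circ_conv(np.roll(x, s), w).max(), circ_conv(x, w).max())
```

Sum- or average-pooling would serve equally well as the invariant aggregation.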
2022-11-12 21:52:48 @ubiquity75 @ElonActual I said "no", by which I do not mean "whatever".
2022-11-12 19:00:52 RT @DrEricDing: Insulin was discovered and patent gifted for $1 over 100 years ago. It saves lives. It should be free, or capped at most $3…
2022-11-12 17:40:00 @wayneholmes I do have an agenda: helping make progress in our understanding of intelligence and learning in machines and brains.
2022-11-12 17:30:20 @DavidRimshnick Your measure of complexity is somewhat arbitrary. Different measures of complexity correspond to different priors.
2022-11-12 17:29:01 @DuncanARiach @technotweet But even that already has a pretty strong prior. Once you initialize the weights, only certain functions will be learnable through gradient-based methods.
2022-11-12 17:27:04 @ElonActual No. AI R&
2022-11-12 16:37:29 @GaryMarcus @KevinIndrebo @KordingLab @yudapearl @scac1041 @PhilDawid @pmddomingos @StephenPiment @AlexTensor @gopnik @stephensenn @davidchalmers42 @De_dicto The question is more complicated than the binary question "priors vs no priors". Here is an explanatory thread. https://t.co/QLV9d9qct8
2022-11-12 16:33:45 So, asymptotically, with infinite data, there is no need for priors. But of course, that's completely unrealistic, and *some* priors are always necessary in practice. But the amount of priors should be minimized as a function of the data we have. 10/
2022-11-12 16:31:19 Simply with more training, this network could do the same thing as a ConvNet without requiring the "prior" of shift equivariance provided by weight sharing. 9/
2022-11-12 16:30:00 One can train it to be equivalent to a regular ConvNet by training it systematically with translated versions of all the examples, and forcing it to produce correspondingly-translated representations. This would be a form of self-supervised learning with data augmentation. 8/
2022-11-12 16:27:38 But if you have a huge amount of data (not necessarily labeled for supervised learning), then the necessity for structural priors diminishes! Consider a locally-connected network architecture similar to a ConvNet but *without* the shared weights... 7/
2022-11-12 16:24:49 Other example: when your data contains objects and what matters are the relations between objects, it's good to use an architecture that is equivariant to permutations, because you never know in what order the objects will appear. That's what transformers do. 6/
2022-11-12 16:22:51 For example, using convolutions is good when your input data comes in the form of an array, with strong local correlations &
2022-11-12 16:19:28 So it's a good idea to put in priors that you *know* are true, and simultaneously *minimize* the amount of priors you put in. The optimal amount entirely depends on how much training data you have access to. The more data you have, the fewer priors you need. 4/
2022-11-12 16:16:55 Consequence: the more priors you put in, the fewer samples you require. But: the more priors you put in, the greater the chance that the functions you need to learn are not realizable (or hard to learn) by your model. 3/
2022-11-12 16:14:50 The no-free-lunch theorems tell us that, among all possible functions, the proportion that is learnable with a "reasonable" number of training samples is tiny. Learning theory says that the more functions your model can represent, the more samples it needs to learn anything. 2/
2022-11-12 16:11:21 OK, debates about the necessity of "priors" (or lack thereof) in learning systems are pointless. Here are some basic facts that all ML theorists and most ML practitioners understand, but a number of folks-with-an-agenda don't seem to grasp. Thread. 1/ https://t.co/T6De5EezR5
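Points 7/-9/ of the thread above can be illustrated with a toy sketch. This is my own construction with hypothetical names (`conv_shared`, `conv_local`): a weight-shared convolution is shift-equivariant by construction, while a locally-connected layer with the same connectivity but independent per-position filters has no such built-in prior and would have to learn it from (augmented) data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3  # signal length, filter size

w_shared = rng.normal(size=k)       # one filter reused everywhere (ConvNet)
w_local = rng.normal(size=(n, k))   # an independent filter per position

def conv_shared(x):
    # Weight sharing hard-codes the shift-equivariance prior.
    return np.array([sum(w_shared[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

def conv_local(x):
    # Same local connectivity, but no shared weights: no built-in prior.
    return np.array([sum(w_local[i, j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

x = rng.normal(size=n)
s = 2  # circular shift

# The shared-weight layer commutes with shifts; the locally-connected
# one, with random independent filters, almost surely does not.
assert np.allclose(conv_shared(np.roll(x, s)), np.roll(conv_shared(x), s))
assert not np.allclose(conv_local(np.roll(x, s)), np.roll(conv_local(x), s))
```

With enough translated training data, the rows of `w_local` could converge to a common filter, recovering the ConvNet behavior without building in the prior.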
2022-11-12 16:02:50 @FrankLiuwzh Yes, some are on the left and some are on the right. Both are deluded by the propaganda of their tribe.
2022-11-12 15:43:30 @KevinIndrebo @GaryMarcus @KordingLab @yudapearl @scac1041 @PhilDawid @pmddomingos @StephenPiment @AlexTensor @gopnik @stephensenn @davidchalmers42 That is, unless you have unlimited amounts of data.
2022-11-12 15:42:03 @KevinIndrebo @GaryMarcus @KordingLab @yudapearl @scac1041 @PhilDawid @pmddomingos @StephenPiment @AlexTensor @gopnik @stephensenn @davidchalmers42 The no-free-lunch theorems make inductive bias inevitable.
2022-11-12 15:39:14 @krymski The production of everything can be done properly or recklessly. In countries that have a lax attitude towards worker and environmental protection, things can go bad. But there is no *inherent* reason for solar panel production to be polluting and exploitative.
2022-11-12 15:22:38 @KordingLab @yudapearl @scac1041 @PhilDawid @pmddomingos @GaryMarcus @StephenPiment @AlexTensor @gopnik @stephensenn Inductive biases are often based on assumptions of symmetry. Transformers: equivariance to permutations. ConvNets: equivariance to translations.
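The transformer half of that claim can be checked directly. A minimal sketch of my own, not any particular library's implementation: a bare dot-product self-attention layer (no positional encodings, no masking) is permutation-equivariant, so permuting the input rows permutes the output rows the same way.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Plain dot-product self-attention over the rows of X (one row per token).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # row-wise attention weights
    return A @ V

rng = np.random.default_rng(0)
n, d = 5, 4  # number of tokens, embedding dimension
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
perm = rng.permutation(n)

# Permutation equivariance: attention(P @ X) == P @ attention(X).
assert np.allclose(self_attention(X[perm], Wq, Wk, Wv),
                   self_attention(X, Wq, Wk, Wv)[perm])
```

Positional encodings are precisely what breaks this symmetry when token order matters.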
2022-11-12 15:08:12 @TheKanter Well, for intermittent renewables (solar &
2022-11-12 15:02:26 @peter_richtarik The review should be dismissed.
2022-11-12 15:00:05 Facts: nuclear is cheap &
2022-11-12 14:51:58 - tweet: pic of a nuclear power plant
- comments: so much CO2 emissions! whaddabout wastes? Too expensive! Remember Fukushima &
2022-11-12 14:27:54 RT @SenSanders: Cost for one vial of insulin:: $6.94: $7.52: $9.08: $11: $12: $14.40: $98.70No one should be forced…
2022-11-12 04:29:16 RT @DeBunKerEtoiles: [THREAD]>
2022-11-11 16:01:33 @TonyZador @BAPearlmutter Straight out of Avatar.
2022-11-11 15:28:49 @KaiyuYang4 And now China has an autocratic dictator. See? It destroys society.
2022-11-11 15:24:01 RT @TheSequenceAI: Yoshua Bengio, @geoffreyhinton, @ylecun, and @demishassabis are laureates of the 2022 Princess of Asturias Award.Congr…
2022-11-11 12:45:35 RT @cyrusbeschloss: i really hate to write this tweet, but I want to be honest about the youth vote (18-29): 1. they voted at the 2n…
2022-11-11 12:25:09 RT @ai__pub: // "Emergence" in ESMFold //Emergence is the phenomenon of large ML models "learning" to do much more than they were trained…
2022-11-11 04:31:04 RT @Piyush__Tank: @ylecun Thank you for putting your NYU Deep Learning course on the internet. Not only the course is very well structured…
2022-11-11 04:16:04 RT @f_charton: My paper Linear Algebra with Transformers was published in Transactions of Machine Learning Research (TMLR). This new versi…
2022-11-10 20:41:32 @math_dandy Yes. Pineapple on pizza is an abomination.
2022-11-10 20:39:39 @hectorpal Do you have recipe for cooking ambitious CEOs?
2022-11-10 19:34:43 @simonbatzner As the French owner of a German car, I can only agree.
2022-11-10 19:32:33 @DarrylMason With this recipe, they won't be able to leave. Nor scream.
2022-11-10 19:30:40 @timoveiled How could Californians corrupt your taste from so far away?
2022-11-10 13:59:54 OMG! soon AI will tell us to put pineapple on pizza, sugar in bread, pumpkin in pies, mint sauce on lamb, and civilization will be destroyed https://t.co/aEFqP9OTVV
2022-11-10 03:44:06 RT @CoML_ENS: The work is also very interesting for anyone thinking about the future of AI systems as discussed in @ylecun’s latest posit…
2022-11-09 12:55:37 RT @MetaFrance: Le projet #2Africa c’est : 45 000 km de câble sous-marin 33 pays connectés en Afrique, Asie et Europe 3 milliards d…
2022-11-09 12:55:12 RT @Isabelle_Ryl: Ne manquez pas les Dauphine Digital Days « Comment l’intelligence artificielle transforme-t-elle en profondeur notre soci…
2022-11-09 12:54:26 FAIR's ESMFold@Home (GPU mem>
2022-11-09 12:45:29 RT @gabrielpeyre: Any optimization problem is equivalent to a convex (linear) one (but infinite dimensional…). The key to perform global op…
2022-11-09 03:49:07 RT @_NYUIT: Corrected tweet! Interested in AI? Come to A Talk by @ylecun: From Machine Learning to Autonomous Intelligence on Nov. 10, 4pm.…
2022-11-08 23:55:28 @ubiquity75 You are the one who seems to say it didn't. So, what would be your approach? More human moderators? More AI? Less moderation? More moderation? No social networks at all?
2022-11-08 22:01:57 Actually on Nov 10 at 4:00pm. https://t.co/mgXaqLtOi9
2022-11-08 22:01:10 @_NYUIT Actually on Nov 10 at 4:00pm
2022-11-08 21:59:22 RT @_NYUIT: Interested in AI? Come to A Talk by @ylecun: From Machine Learning to Autonomous Intelligence on Nov. 9. More info and registra…
2022-11-08 20:16:11 RT @MetaAI: Because ESMFold is up to an order of magnitude faster, we can scale much larger, making it feasible to explore metagenomics, th…
2022-11-08 20:12:15 Nice article in @ScienceMagazine about FAIR's ESM-2 protein folding prediction system, and the publication of 615 million protein structures.The article discusses how this kind of breakthrough piggybacks on large investments in deep learning by industry. https://t.co/s7ciwa8GK2
2022-11-08 18:27:44 @ubiquity75 What would be your solution to content moderation?
2022-11-08 18:24:27 @nopanen The ranking system uses machine learning. If you don't use FB, the ranking system has no idea what to show you. You need to use FB for it to get better. Also, a lot depends on who you follow, which FB groups you belong to, and who your friends are.
2022-11-08 18:19:20 @gaubian You can make distribution lists: family, colleagues, etc. You can also make Pages or Groups, which are ideal for political posts, support groups, hobbies....
2022-11-08 13:54:08 RT @scienceisstrat1: The IRA has already been a spectacular success in stimulating investment in the US EV supply chain.Billions have alr…
2022-11-08 01:32:57 Beware: telegrams are mental opium for girls! https://t.co/ypp7ZEx7Nw
2022-11-07 23:56:15 RT @randall_balestr: POLICE code is now available: https://t.co/37DxXVlkdp
Quick facts:
- POLICE only takes 5 lines of code
- code is jit/…
2022-11-07 23:54:56 RT @daniel_eckler: ANOTHER AI MEGA THREAD
2022-11-07 20:33:11 Meta-FAIR's NLLB-200 translation system is getting used more and more by Wikipedia editors, and produces higher-quality results than other translation systems. https://t.co/Ha59BiIuxT
2022-11-07 16:47:46 @traderyau @hzhu_ https://t.co/35v43poGZF
2022-11-07 15:42:20 Congrats @matthieurouif and team! https://t.co/gmVlmhdB9F
2022-11-07 13:34:27 @traderyau @hzhu_ You can make lists and post to those.
2022-11-07 12:51:26 @TimothyBuckSF @douglas_eck Indeed. And most of them are on Instagram.
2022-11-07 12:48:45 @vineettiruvadi @numerique78 Civilized, because:
- bullying and hate speech are taken down automatically,
- authors can take down comments on their own posts,
- you can select who sees and who doesn't see a post.
It's not a happy place for trolls.
2022-11-07 12:43:45 @tobiolabajo @loranditsum Even if it were true, 38 ads for a whole year is quite a small number. That's like one ad every 10 days.
2022-11-07 05:11:07 @hacklavya @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh Lots of people built gliders and powered airplanes way before the Wright brothers, some small, some carrying a pilot. The Wright brothers were ahead in demonstrating a plane that could sustain a controllable flight. But they were not the first to demonstrate self-powered flight.
2022-11-07 04:59:07 @NotTriggerAtAll I think you might be mistaken about what "my party" is and what my recommendation is.
2022-11-07 04:57:04 @CubanBTC @neiltyson American voters.
2022-11-07 04:55:46 RT @paulkrugman: A key point is that the people spreading false claims aren't the only evildoers here. The rest of the GOP, which accepts s…
2022-11-07 04:35:19 Be careful. https://t.co/QJdZdZEIe0
2022-11-06 22:35:59 @douglas_eck I post family photos to "Friends Only."I post most other things to "Everyone."One can make "Lists" to precisely control who sees what posts.People can follow me without becoming FB friends (my profile is public).I only friend a subset of people I've met in real life.
2022-11-06 21:22:38 RT @PedderSophie: Most of the time you just ignore this stuff. But occasionally it is too absurd. France has its problems, for sure. But it…
2022-11-06 21:17:23 @AlexTensor @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @isbellHFh @MIT_CSAIL @ieee_itsoc He was actually at Bell Labs when he published this paper. He moved to MIT quite a bit later.
2022-11-06 14:39:03 RT @arminarefi: IMAGES RÉSUMANT PARFAITEMENT ce qui se joue actuellement en #Iran entre une minorité au pouvoir se réclamant de Dieu, et un…
2022-11-06 14:32:09 RT @Gregdt1: La #COP27 commence aujourd’hui. Un rappel: il y a environ 2000 GW de centrales à charbon installées dans le monde. Pour compa…
2022-11-06 14:24:14 @neuralbash The Iranian government blocks FB and limits the use of Instagram and WhatsApp. https://t.co/WF5Pw0IMQb
2022-11-06 14:16:54 @rasbt OK millennial...
2022-11-06 14:10:51 @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh Shannon's paper was published in the late 40s, a dozen years before I was born. So I didn't experience the revolution it caused in EE. But, like Nyquist's sampling theorem, it did cause a revolution in communication. A bit like thermodynamics revolutionized physics.
2022-11-06 08:02:21 @wayneholmes @dmonett @NecroKuma3 @PartnershipAI @katecrawford Particularly in countries where it is the govt that is spewing the hate. FB had to take down the main pages of the Myanmar military as well as lots of govt sock puppet accounts. This has nothing to do with money. Revenue from Myanmar is literally peanuts.
2022-11-06 04:10:05 @juanbuhler @GullyAPCBurns That's how all ranking systems work. They learn from your actions to improve the content you see. The purpose is to make the service better for you.
2022-11-06 03:55:15 @LaneBucher First sentence, third paragraph of the ToS:"We don’t sell your personal data to advertisers"https://t.co/6twq7JsrNL
2022-11-05 23:28:13 @i_am__Alono @Kashish__Kumar_ I'm a scientist. I don't run the business. You are telling the wrong person. But I can tell you that the people running the business aren't ignoring it.
2022-11-05 23:23:57 @GaelVaroquaux CDS has psychologists, sociologists, economists, political scientists, statisticians, etc researching the effects of Meta products and services on people and society.
2022-11-05 23:17:39 @GaelVaroquaux There are 3 groups:
- "Core Data Science" under Danny Ferrante
- "Social Impact" under Emily Smith. Includes the Responsible AI group under Esteban Arcaute.
- "Integrity" under Guy Rosen, includes content moderation, privacy protection, etc.
Those are large groups under a VP.
2022-11-05 19:56:14 @i_am_olo I just did. https://t.co/NuT2WjWrrZ
2022-11-05 19:49:52 @togelius The Feed algorithms are good enough to figure out that your aunt and your high-school friends may not be so interested in your posts about ML/AI.
2022-11-05 19:35:41 @velango @EMostaque @DeepMind You may not realize that there was a golden age of advanced research in industry that produced much of the technology of the modern world. From AT&
2022-11-05 19:25:44 @JagersbergKnut @numerique78 I just try to be factual.
2022-11-05 19:24:03 @yogesharora There are tabs for Feed, Groups, Friends, etc. You can create multiple Page accounts (linked to your main account) for your various personas (professional, hobbies, interests), and keep your main account for friends and family.
2022-11-05 19:21:03 @AkwyZ I'm not talking about Instagram. And you don't have to click on Reels if you don't like them.
2022-11-05 19:16:49 @bilayerguy The Russian government blocks Facebook! How could FB possibly be "friendly" with it? Whenever there is a major debate, each side is convinced that FB favors the other side. https://t.co/wSTyFw4QVg
2022-11-05 19:13:02 @JagersbergKnut @numerique78 The more followers you have, the more of a juicy target you become for trolls and people who just want attention.
2022-11-05 19:07:55 @yogesharora You can make Groups, Lists, and Pages. Groups are for people with a common interest (family, hobby,...). Lists are for choosing who will see your posts. Pages are for posting public content, as with a blog or website. In addition, Feed ranking does a lot of this automatically.
2022-11-05 18:59:45 @cristi_vicas @EMostaque @DeepMind MSR == Microsoft Research
2022-11-05 18:57:05 @seanmcbride If you don't click on baby photos, FB won't show you baby photos, except occasionally from your close friends and family.
2022-11-05 18:53:49 @sdelachica Multilingual NLP has become a hell of a lot better over the last 3 or 4 years.
2022-11-05 18:52:09 @GullyAPCBurns You don't have to click on Reels if you don't like them. As you use FB, the ranking system learns to show you the good stuff. If you don't use it, it can't learn.
2022-11-05 18:50:32 @i_am__Alono @Kashish__Kumar_ The stock is down largely because Wall Street investors who are mostly interested in short-term profitability do not understand the value of long-term R&
2022-11-05 18:43:28 @LaneBucher What evidence do you have that FB sells user information? User information is the most guarded and precious asset FB has. It would make zero sense for FB to sell it.
2022-11-05 15:45:44 Looking for a social platform that:
- figured out how to do content moderation (hate, bullying,...)
- doesn't limit the length of posts &
2022-11-05 15:35:11 @3scorciav @aniketvartak Most welcome.
2022-11-05 15:26:47 @EMostaque @DeepMind Do not confuse Research (which is what DeepMind mostly does) with R&
2022-11-05 15:20:05 @wayneholmes @dmonett @NecroKuma3 @PartnershipAI @katecrawford AI scientists &
2022-11-05 15:16:53 @wayneholmes @dmonett @NecroKuma3 @PartnershipAI @katecrawford So you scramble to hire X speakers outside of country X. But you can't hire enough of them quickly enough. So you ask AI engineers: can you build an X-to-English translation system, so we can use our English content moderators and hate speech detectors? 2/
2022-11-05 15:14:14 @wayneholmes @dmonett @NecroKuma3 @PartnershipAI @katecrawford Say you run a free online service, call it S. People in small country X start to use S (without talking to you). The government of X uses S to promote violence against its own minorities. You can't open an office in X because the government doesn't want you to moderate them. 1/
2022-11-05 15:02:19 @aniketvartak @3scorciav Yup!
2022-11-05 15:00:26 @wayneholmes @dmonett @NecroKuma3 @PartnershipAI @katecrawford There are large engineering groups at Meta who do not sleep well because they work tirelessly to defend against attempts to disseminate hate and violence. If, in 2017, you had a magic recipe for detecting hate &
2022-11-05 14:54:13 @dmonett @NecroKuma3 @wayneholmes @PartnershipAI @katecrawford Incidentally, that Burmese-English translation system won the WAT'19 competition for the translation of low-resource Asian languages.https://t.co/TIqLhr0SAe
2022-11-05 14:33:29 @kaalam_ai @elonmusk * ...permanently and continuously *considering* that they are doing wrong.
2022-11-05 14:29:01 @3scorciav OK, before you build a whole conspiracy theory around this: All the big companies *are* sponsoring NeurIPS. I'm told the reason they are not (yet) listed is purely some sort of administrative delay.
2022-11-05 14:24:22 @dmonett @NecroKuma3 @wayneholmes @PartnershipAI @katecrawford Funny you mention Myanmar. The issues were due to an *absence* of AI. In 2017, there were few Burmese-speaking human moderators. So AI engineers quickly developed Burmese-English translators. Then they produced multilingual hate speech detectors. AI was the solution. Not the problem.
2022-11-04 21:40:34 @dmonett @PartnershipAI Twitter would be a much better place if people didn't always make assumptions of bad intent.
2022-11-04 21:39:43 @dmonett Scientists do not have any particular legitimacy to decide for the rest of society what usage of technology is good or bad. Of course, unethical uses of AI should be avoided, and harmful ones banned. In fact, I helped create the @PartnershipAI precisely for that purpose.
2022-11-04 21:31:51 RT @schrep: AI advancements fueling better recycling/resource recovery is hugely exciting: https://t.co/EttfKsAZit
2022-11-04 21:30:02 @kaalam_ai @elonmusk So yes, people working on integrity and Feed ranking are *permanently* and *continuously* considering that they are doing it wrong. And the fact that the systems have changed drastically over the years shows that when they find something wrong, they improve or redesign the systems.
2022-11-04 21:27:27 @kaalam_ai @elonmusk I submit that your idea of how Feed ranking and content moderation are done is overly simplistic. Many, many different schemes have been implemented and tested over the years, and their effect on people and society measured. The best ones are deployed and continuously adjusted.
2022-11-04 21:17:03 @scienceisstrat1 @Suhail @eladgil @erikbryn @amcafee @paulg All the credit goes to @alexrives and his team. I deserve none.
2022-11-04 20:58:27 RT @MetaAI: Meta AI researchers trained a language model to fill in protein sequence gaps across millions of diverse proteins &
2022-11-04 18:06:33 @dmonett I'm saying the exact opposite of what you think I'm saying. I'm saying that the proper use of technology should be decided by society at large through the democratic process, not by "hugely paid" individual scientists nor by Big Tech.
2022-11-04 17:57:58 RT @fpa: Today, one week after the #PrincessofAsturiasAwards Ceremony, we are sharing these images of the Laureates with the sculpture by J…
2022-11-04 17:47:24 RT @EricLagadec: Je crois que dans le climat actuel, tout le monde a besoin d'un peu d'évasion. Je vous propose des voyages extraordinaires…
2022-11-04 17:42:02 RT @Innov_Medicine: DNA to RNA real-time speed. Gene Transcription at real-time speed. Transcription is the first step in gene expression.…
2022-11-04 17:40:56 RT @syhw: .@gordic_aleksa did a nice in-depth explanation of Encodec here https://t.co/oY5bQM9vM8
2022-11-04 17:36:07 RT @FrnkNlsn: Poster on the taxonomy of main statistical distances with their underlying geometries: Euclidean geometry, Riemannian &
2022-11-04 15:46:27 Interesting. We knew that Murdoch's Evil Empire (WSJ, Fox News...) hated FB and Tech, which they see as ideological &
2022-11-04 14:00:38 @ziv_ravid I wonder who that could be....
2022-11-04 13:01:29 @Namzo098 Nice to meet you Moses.
2022-11-04 12:29:10 An *amazing* piece that humorously explains why content moderation on social networks is difficult, painful, expensive, &
2022-11-04 02:09:06 RT @techreview: We covered @ylecun's bold vision for the next generation of AI earlier this year. https://t.co/mckAaedOex
2022-11-03 22:30:05 @vineettiruvadi @Meta Actually, the social science community is very divided on the question of the impact of social networks on democracy, as shown by this extensive annotated bibliography.https://t.co/8wNPoHTHlz
2022-11-03 22:25:01 @forodeeplearn @techreview What about ConvNets? They can:
- save lives: driving assistance, medical imaging...
- moderate content: pedophilia, violence...
- recognize targets: good when used by good guys to defend freedom &
2022-11-03 22:16:35 @cameranashraf @techreview Hate to break it to you, but most new technologies have many uses, some obviously good, some obviously bad, and some completely unpredictable at the time of the discovery.
2022-11-03 22:11:13 @vineettiruvadi @Meta Meta does everything it can to *protect* democracy, and the democratic process, from forces that want to undermine it. Ask yourself why every single country that bans or limits Facebook has an authoritarian government. It's because Facebook is good for democracy.
2022-11-03 22:06:54 Is human-led mathematics over? A discussion between @wtgowers and me, moderated by Joëlle Pineau (managing director of Meta-FAIR). Short answer: no. But 'AI for mathematics' is making fast progress. https://t.co/wod1swcR3c
2022-11-03 22:00:33 RT @techreview: At #EmTechMIT, @RaiaHadsell, @ylecun, and @ashleyllorens are discussing the path forward for AI research, the ethics of res…
2022-11-03 21:59:44 RT @ashleyllorens: Such an honor to join @ylecun and @RaiaHadsell to discuss the way forward for #ai today at #EmTechMIT. https://t.co/Ixp4…
2022-11-03 21:59:05 Fun to be on a panel at EmTech with Raia Hadsell (DeepMind) and Ashley Llorens (Microsoft), moderated by Will Douglas Heaven. https://t.co/n6JhDChCZZ
2022-11-03 21:55:44 Whether technology is used for the common good depends on the strength of our democratic institutions. https://t.co/Fg1F3A3uv4
2022-11-03 21:30:21 A new neural theorem prover from FAIR is able to solve 10 International Mathematics Olympiad problems, a 5x improvement over the state of the art. Congrats to @GuillaumeLample and team. https://t.co/HFxazfgC7L
2022-11-03 04:37:59 RT @randall_balestr: Deep Neural Networks are powerful... but how do you provably enforce some constraints into them? With @ylecun we intro…
2022-11-02 19:52:29 Fold your own sequences on Colab using ESMFold https://t.co/ZekUlGN8jO
2022-11-02 19:39:21 RT @MetaAI: Together with @HebrewU, we're excited for how the talent from our new joint PhD program and work coming out of this partnership…
2022-11-02 18:18:40 "In fact, the way modern AI technologies are developed shows there is no race for one country to win. Quite the contrary, the AI industry has skyrocketed because a global community has constructed it, together, brick by digital brick." https://t.co/L4PW7w8wkY
2022-11-02 16:52:50 RT @BeschlossDC: When an American politician speaks in fascist-sounding language, never brush aside what you are hearing as meaningless rhe…
2022-11-02 12:48:16 RT @Nature: AlphaFold’s new rival? Meta AI predicts shape of 600 million proteins https://t.co/jLpsjYHuKC
2022-11-02 11:30:15 Yes, ESMFold from FAIR comes with an API. Fold your own proteins. https://t.co/8DDadnxdWH
2022-11-02 11:26:52 RT @ewencallaway: Meta just dropped 600+ million protein structure predictions, made using a large language model.My latest for @Nature…
2022-11-02 11:12:22 RT @DavidAFrench: One of the saddest phenomena of the online right is the absolute fury at those of us who supported COVID vaccines and con…
2022-11-02 11:05:06 A milestone result in AI for science &
2022-11-02 10:55:35 RT @ZhongingAlong: Insane progress these days in ML for structural biology The team at Meta AI just released an atlas of 617M+ protein…
2022-11-02 10:54:30 RT @proteinrosh: We are thrilled to announce the ESM Metagenomic Atlas (https://t.co/YP7IDxXneH)!In this effort we folded the entirety o…
2022-11-02 10:52:59 RT @ElliotHershberg: The Meta team modeled the metagenomic protein universe More than 617 million new structure predictions of metagenom…
2022-11-02 10:30:01 RT @thesteinegger: .@MetaAI released ESMfold and structure predictions for most metagenomic MGnify90 sequences. Thanks for early-access @To…
2022-11-02 10:23:47 RT @kvogt: Tonight our driverless service area in SF expands to cover almost all of SF. Today is also exactly one year since my first ride,…
2022-11-02 03:12:59 RT @alexgkendall: This year we’ve seen many incredible AI breakthroughs from very large foundation models:CLIP = image/languageStable Di…
2022-11-01 21:24:27 @elonmusk I don't have a blue checkmark and don't seem to need one.
2022-11-01 21:20:13 @divamgupta Makes me look fat(ter).
2022-11-01 18:57:19 ESM Metagenomic Atlas: An open atlas of 617 million metagenomic protein structures. Brought to you by Meta-FAIR. Explore the point cloud of proteins, click on a dot and view the 3D structure. Or just enter a sequence and fold it in real time. Blog post: https://t.co/K24occp77d https://t.co/RL8BH9GBQe
2022-11-01 12:42:48 @YiMaTweets Simultaneously with a system trained to read those reports. That would be progress from now, since no one actually reads them.
2022-11-01 12:33:09 The degrowthers' main argument is just wrong. Increased CO2 emissions are *not* an inevitable consequence of economic growth. https://t.co/hxSToX8S7d
2022-11-01 02:42:20 RT @NYUADInstitute: Known as one of the "Godfathers of Artificial Intelligence," Yann LeCun discusses the future of artificial intelligence…
2022-11-01 02:23:12 RT @scienceisstrat1: Canada has an opportunity, to be sure. But its R&
2022-10-31 20:32:32 RT @gabrielpeyre: Oldies but goldies: Peter Burt and Ted Adelson, The Laplacian Pyramid as a Compact Image Code, 1983. The Laplacian pyrami…
2022-10-31 18:06:58 Excruciatingly long thread on content moderation on social networks. It's about attitude, more than content. https://t.co/iS59whIaV7
2022-10-31 15:51:43 RT @cesarcernuda: What an honor to serve on the jury for the Scientific and Technical Awards at the @fpa. A great weekend celebrating win…
2022-10-31 15:49:40 @tsowell1984 @EdMaltinho @elonmusk You seem to have a strange definition of socialism. France's economy is most definitely capitalist. Like every other liberal democracy, and unlike the US, France takes care of its people.
2022-10-31 15:46:31 RT @ComputingOviedo: La Escuela de Ingeniería Informática ha nombrado a dos Aulas en reconocimiento a Yann Lecun @ylecun y Demis Hassabis @…
2022-10-31 09:46:16 RT @d_brueckner: There's been lots of talk on moving #ScienceTwitter to another platformAs scientists, we should make informed decisions…
2022-10-31 08:58:09 @FedorShabashev @egrefen @elonmusk No. He briefly floated the idea of moving to Belgium in 2013, when the French government was thinking of raising the maximum income tax rate to 75%. But he changed his mind.
2022-10-31 08:52:01 @BraneRunner @elonmusk The US is a decent country to live in if you are:
1. In the top 1%.
2. An academic at a top research university.
3. In a civilized region like the NYC area.
4. Near your children.
But I do spend a few months per year in France.
2022-10-31 08:47:13 @DiogoSnows @elonmusk Pastel de nata rock!
2022-10-31 08:46:16 @policiamor @elonmusk Yes you can. I have two PhD students in Paris right now, @AdrienBardes &
2022-10-31 08:43:00 @David____8 @elonmusk French people have an undeserved bad rep, kind of like Zuck.
2022-10-31 08:41:20 @EdMaltinho @elonmusk I spend a few months/year in France. I could do my Meta job from Paris. But I can't just move because:
1. our children grew up in the US and live there.
2. I like my professor position at NYU.
3. Academia is one of the few professions that is better in the US than in France.
2022-10-31 08:32:11 @egrefen @elonmusk Yes, but the second richest man in the world, after Elon, is Frenchman and LVMH Chairman &
2022-10-31 06:58:46 @LorenRD @elonmusk Jamón rocks! Also Asturian seafood and cider are awesome.
2022-10-31 06:57:13 @nosovietanymore @elonmusk Pretty much everyone who is not in the top 10% of the socio-economic ladder should. Obviously doesn't apply to Elon.
2022-10-31 06:54:47 @ericgtaylor @elonmusk We haven't quite figured out how to reproduce the complete experience with virtual pastry yet.
2022-10-31 06:53:05 @c0mplex_NA @elonmusk Dude, France was #1 for the number of inbound tourists in 2018 with 89.4 million. Italy is #5 with 61.6 million tourists. https://t.co/M9HihrKQ5Z
2022-10-31 06:45:45 @elonmusk In addition to ubiquitous and delicious pastry, France has low-emission electricity, thanks to a high proportion of nuclear production. Good place to have electric cars and data centers.
2022-10-31 06:40:59 @FrenchLinda @elonmusk Like this? Where? https://t.co/1aU2e8hqTW
2022-10-31 06:15:03 @OilGains By reducing the need for physical travel.
2022-10-31 06:14:12 @OilGains Meta-FAIR is working on making hydrogen production more efficient. https://t.co/gQFFobmZXx
2022-10-31 00:19:18 RT @EricTopol: What if you had something that decreased death by >
2022-10-31 00:09:49 The Princess of Asturias Awards events took place last week, culminating with the award ceremony Friday. An amazing event driven by the Royal Family that brought many Asturians to the streets of Oviedo and many Spaniards to their TV. https://t.co/3FRIsQroWq https://t.co/Q8fZHULwP6
2022-10-30 17:20:29 Agreed. https://t.co/f6oHf4VwO6
2022-10-30 17:12:52 @JagersbergKnut @leoneu Yet only VR can give you a feeling of "presence".
2022-10-30 14:35:40 @elonmusk You should move to France, then.
2022-10-30 11:23:27 @3Aleph Paging @CoryOndrejka (who used to work at FB, BTW).
2022-10-30 11:21:33 @HonoAntoinee No.
2022-10-30 11:21:08 @leoneu 1700: mail is "nice to have but not a game changer" 1900: telephone is NTHBNAGC 1980: videophone? No one wants it. 1995: email is NTHBNAGC 2000: SMS is NTHBNAGC 2005: Skype is NTHBNAGC 2015: WhatsApp is NTHBNAGC 2020: Zoom is NTHBNAGC ....
2022-10-30 11:10:23 @RolandWank Even 1h a day is better than nothing and will save lots of physical trips. Historically, new communication technologies have enabled *more* human contacts, not less. These are contacts that simply would not exist otherwise.
2022-10-30 08:17:35 RT @schrep: So many high quality founders building non-zero sum answers to the climate crisis at #toughtechsummit. Food ground in waste…
2022-10-30 08:17:22 "True remote presence [through the metaverse] is a game changer for climate" https://t.co/lacTkf8Lg8
2022-10-30 08:08:42 @ZainulA40877140 What is happening in the field is the *opposite* of the "exclusion of other methods [than ConvNets]". There is a *huge* amount of architectural exploration. We can debate whether there is enough originality in that, but there are huge incentives to devise new architectures.
2022-10-30 08:04:40 @cristiancanton @MetaAI Thanks, Cristian!
2022-10-30 07:58:46 Four of the coauthors are senior members of FAIR: Bottou, LeCun, Vincent, and Weston. https://t.co/q2tD9M1x3L
2022-10-30 07:44:33 @MilesCranmer @ChengSoonOng @earnmyturns @bschoelkopf @smolix @jaseweston 4 of the coauthors are at FAIR: Bottou, LeCun, Vincent, and Weston. That tells you something.
2022-10-30 07:41:52 @talrid23 Not just academia, research in general. It makes sense too: research is about exploring new things. Criteria for papers are different from criteria for practical products. Also, mixing Conv at the bottom layers with transformer modules at the top makes sense to me (DETR like)
2022-10-30 01:36:28 @natwitte You got it backwards. This is royalty honoring Science.
2022-10-30 01:35:15 @themintsv France, Ireland, Italy, Germany, and lots of others in the Eastern part of the EU.
2022-10-29 16:21:37 A new flavor of ConvNet crushes various flavors of transformers (as well as state-space models) for sequence modeling with long-range dependencies. https://t.co/EVvYsHGnp8
2022-10-29 11:43:18 @courchayj1 Those policies are not designed by a single person, nor by engineers, but by a large body of people with very diverse backgrounds (human rights, law, politics, social science ...). Additionally, FB has an Independent Oversight Board to arbitrate content policy disputes.
2022-10-29 11:40:41 @courchayj1 FB has asked governments of liberal democracies to define what constitutes acceptable and unacceptable content online, because it doesn't see itself as having the legitimacy to do so. The response has been largely nonexistent. Hence FB had to establish its own content policies.
2022-10-29 11:35:50 @courchayj1 Perhaps someone whose mission in life is to connect people with each other.
2022-10-29 11:28:50 @courchayj1 Avoiding dictatorship includes preventing authoritarian forces from corrupting the democratic process by spewing misinformation on social media. "in order to maintain a tolerant society, the society must retain the right to be intolerant of intolerance." https://t.co/9NnIwTcBPx
2022-10-29 11:23:17 @andrei_no_no Illegal content in the EU includes hate speech, neonazi propaganda, Holocaust denial, &
2022-10-29 07:50:59 @franciscoortin @ComputingOviedo @demishassabis @fpa A pleasure to hear about your research!
2022-10-29 07:49:53 RT @invest_asturias: #ArtificialIntelligence | #AI pioneers @ylecun and @demishassabis receive the 2022 Princess of Asturias Award for Tech…
2022-10-29 07:44:45 Picture gallery of the Princess of Asturias Awards ceremony. What an incredible event! https://t.co/PlAdVRKClX
2022-10-29 06:49:18 Many thanks to Princess Leonor, and to the Foundation of the Princess of Asturias Awards. https://t.co/rlnIoL7b73
2022-10-29 06:22:56 @JozsefSzalma [reference needed]
2022-10-29 06:20:01 @tdietterich At the risk of sounding like a nativist, I suspect that the motivation that causes this behavior is evolved rather than planned, learned, and derived from some higher-level notion of empathy.
2022-10-29 05:51:18 Elon is putting himself into an untenable situation: conflicts of interest between content moderation on Twitter and Tesla's business in various countries. From FB's former head of security. https://t.co/eqFUAVmT22
2022-10-28 15:35:10 @alscor1966 Hiding behind a bowtie to the left of King Felipe.
2022-10-28 15:32:23 RT @martarroyo: Yann Le Cun (@ylecun) Passionate about the concept of intelligence since childhood, he discovered the perceptron after reading a book…
2022-10-28 15:31:21 @jamesbuchanan27 Lots. Starting with Memory Networks, end-to-end Memory Networks, key-value Memory Networks, all from FAIR.
2022-10-28 15:29:24 @xuanhao_cao @alscor1966 In this case, it is the royal family honoring us!
2022-10-28 15:27:04 @FelixHill84 As long as the regime is a liberal democracy.... Doesn't hurt that the royals seem like very nice people. Adam Michnik, another laureate, said "when in Poland, I'm a Jacobin revolutionary. When in Spain, I'm a monarchist."
2022-10-28 15:02:18 RT @fpa: Audience of the King and Queen, the Princess of Asturias, and Infanta Sofía with the laureates of the 2022 #PremiosPrincesadeAsturias. @…
2022-10-28 14:53:28 @AlexKontorovich Obviously.
2022-10-28 14:52:47 An audience of the royal family of Spain with the laureates of the Princess of Asturias Award. https://t.co/HhUptU6hwO
2022-10-28 09:51:38 RT @AstroCKragh: Graph Networks show huge potential for physics. But, in astrophysics, are there any *true* graph structures? YES! Causal…
2022-10-28 09:36:45 Effective altruism: limulus version. Doesn't require too many neurons, apparently. https://t.co/csRSRmFwaG
2022-10-28 09:24:28 RT @paulkrugman: That would be the 1950s in which the top tax rate was 91 percent and a third of private-sector workers were union members
2022-10-28 07:30:11 RT @MetaAI: Hey #ECCV2022, whether it was for: Demos of Project Aria. Our presentation on Make-A-Scene. Or maybe you just stopped…
2022-10-28 07:25:52 @loiannog @BotJunkie @ieeeras @2022Iros @nyutandon @nyuniversity Congratulations!
2022-10-27 23:06:06 RT @giacaglia: Amazing to see how many GPUs each organization uses. This seems to be a good proxy of how much they are adopting neural nets…
2022-10-27 22:31:25 @robbensinger I think I'm more to the right, near the Y axis.
2022-10-27 17:28:37 @SahilAk27054390 @demishassabis Sadly Geoff and Yoshua couldn't make the trip.
2022-10-27 17:24:22 s/to/two/
2022-10-27 17:19:02 @brandondamos Fame!
2022-10-27 16:32:13 RT @fpa: The scene is set and everything’s ready for the King, Queen, Princess of Asturias and Infanta Sofía to receive the 2022 Princess o…
2022-10-27 16:25:50 RT @fpa: Everything is ready for the audience that the King and Queen, the Princess of Asturias, and Infanta Sofía will hold on Friday for the laure…
2022-10-23 13:25:39 RT @DavidDeutschOxf: Why isn't there a White Mirror show that guesses what may happen when the technology that improves our lives goes on t…
2022-10-23 13:23:36 @pmddomingos Errr, also Western Europe never adopted the whole "dictatorship of the proletariat" thing, and largely stuck with liberal social democracy once they tried it (unlike much of the US).
2022-10-22 17:30:09 @TonyZador There is a direct line from Hubel &
2022-10-22 17:28:25 @pfau @KordingLab You are wrong. Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-20 21:32:44 "Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution" The NeuroAI manifesto: Neuroscience has long been an important driver of progress in AI. To accelerate progress in AI, we must invest in fundamental research in NeuroAI. https://t.co/JbjeNIhnB7 https://t.co/CiNLUb8tf7
2022-10-20 21:24:43 RT @MetaAI: @SiVola @ylecun Thanks to @HuggingFace, you can try demos for both: Hokkien: https://t.co/RICjW1Aacd SpeechMatrix: https://t.co…
2022-10-20 16:37:25 Wednesday 27, in Oviedo. https://t.co/11orpd4v5d
2022-10-20 14:09:16 RT @gabrielpeyre: The celebrated Iterative Soft Thresholding (ISTA) algorithm to solve the LASSO is a special case of the Forward-Backward…
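The retweet above notes that ISTA for the LASSO is a special case of Forward-Backward splitting: a forward (explicit gradient) step on the smooth quadratic term, then a backward (proximal) step on the l1 term. A minimal sketch, assuming the standard objective 0.5||Ax - b||² + λ||x||₁ (problem sizes and λ below are illustrative):

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Forward step: gradient descent on the smooth quadratic term.
    Backward step: prox of the l1 norm, i.e. soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L          # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward (prox) step
    return x

# Toy sparse recovery: a 2-sparse signal measured through a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[3], x_true[7] = 2.0, -1.5
x_hat = ista(A, A @ x_true, lam=0.1)
```

The soft-thresholding step is exactly the proximal operator of the l1 norm, which is why the scheme fits the Forward-Backward template.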
2022-10-20 09:32:18 RT @pierrepinna: #MachineLearning @ylecun’s Version of Autonomous Machine Intelligence https://t.co/CCDq1qejYO Giving the ability to mac…
2022-10-20 06:24:42 VentureBeat writes about Meta AI's Universal Speech Translator project.https://t.co/p0T9RQQGaL
2022-10-20 06:23:01 RT @boztank: One step closer to the universal translator! Thousands of languages around the world have no standard written form, and today…
2022-10-20 00:53:24 @arnauddsm @adjiboussodieng When 500 years old you are, look as good will you not! https://t.co/ZcgFUr3IQH
2022-10-19 23:54:56 @adjiboussodieng So, if you are 60, that makes me
2022-10-19 21:29:45 RT @BennieMols: 10 years after the breakthrough of Deep Learning, the 9th @HLForum organized a panel discussion (a.o. 3 Turing Award winner…
2022-10-19 20:38:31 RT @jeremyhsu: AI can outplay expert human players in a simplified version of the board game Diplomacy. But the cooperative aspect makes i…
2022-10-19 20:12:08 RT @LerrelPinto: Almost unlabeled data is the “secret sauce” for today's ML, but how do we use uncurated datasets in robot learning?Con…
2022-10-19 20:08:18 Dataset for speech-to-speech translation. https://t.co/ZS4NedARQN
2022-10-19 17:21:39 English <
2022-10-19 15:13:18 @kevin_zakka They got into the habit of calling successive generations of AI hardware after famous natural sites and national parks. They didn't ask any of the numerous French/Spanish speakers at FAIR about that one. Innocent multilingual hash code collision!
2022-10-19 13:11:03 @rasbt Precisely. Constructive debates, which may include harsh critiques, are fine. In fact, they are necessary! But "gotcha seeking" is a waste of time for the target, for the source, and for the community.
2022-10-19 13:04:50 RT @yuxiangw_cs: They say deep learning is just curve fitting. But how good is it in curve fitting exactly? Are DNNs as good as, or even be…
2022-10-19 12:50:09 @JeanRemiKing @theASSC @NicPes @StanDehaene Congratulations, Jean-Rémi!
2022-10-19 12:38:30 (exceptionally, the talk will actually start at 11:15am EST)
2022-10-19 12:35:54 Giving a talk today at 11:00am EST in the van Vreeswijk Theoretical Neuroscience Seminar series (VVTNS).https://t.co/PwAUjIOtWL
2022-10-19 04:07:38 RT @randall_balestr: Decision trees do not combine input dims at each node but an oblique DT does. 1. ODTs are not easily interpretable due…
2022-10-19 04:05:47 @Grady_Booch @USClaireForce @synoptase We should consider the possibility that, in the end, human learning is akin to a particularly efficient application of differentiable statistics.
2022-10-19 03:54:27 @Grady_Booch @USClaireForce @synoptase Sure! And by the way, portrait photography drastically shrunk the market for painted portraits.
2022-10-19 03:43:22 @Grimeandreason @dmonett @Grady_Booch That's right, scientists cite prior work and work they were influenced by.But artists *do* build on others' work without mentioning them.
2022-10-19 03:38:43 @Grady_Booch @DavidRimshnick @adawan919 @Oatmeal Exactly.
2022-10-19 03:33:59 @USClaireForce @Grady_Booch @synoptase I am suggesting that generative art should be subject to copyright law in *exactly* the same way as human-produced art. Copying is copyright infringement, regardless of the production process. Learning from others in order to create new artifacts isn't.
2022-10-19 03:14:04 @Ket_Cherie @marklemley @pavel_soukenik We can argue about the level of creativity of generative models, but they *do* create artifacts that are substantially different from what they were trained on. The originality of their production may be on par with that of run-of-the-mill human artists.
2022-10-19 02:51:16 @Grady_Booch @adawan919 Gauguin was pretty well off from his work, Toulouse-Lautrec was from a wealthy aristocratic family. The others may have died penniless, but not because their art was copied (or perhaps even appreciated) during their lifetime.
2022-10-19 02:21:41 @marcotrombetti Proving someone wrong with a good theory, experimental evidence, or just a good substantial argument, is an integral part of the process of science. But twisting someone's arguments in an attempt to prove them wrong is just counterproductive trolling.
2022-10-19 02:12:09 @Grady_Booch @adawan919 I tend to think that when a piece is being widely imitated, the effect is to make the original more valuable, not less.
2022-10-19 01:49:04 RT @kjgeras: Globally-Aware Multiple Instance Classifier (by @ArtieShen) extended to 3D data by @jpatrickpark. There are a few clever thin…
2022-10-18 22:29:29 @PeterJungX @Grady_Booch @Meta A word of advice: before making disparaging public statements about people's character, you might want to learn about them. Not just me and Zuck, but the 70,000 people you just insulted because "they work for Zuck".
2022-10-18 22:20:20 @armchair_prof I do admit when I'm wrong. I would be a pretty terrible scientist if I wasn't capable of that. But I don't admit that I was wrong simply because some troll misrepresents my statements in order to show that I was wrong when I was not.
2022-10-18 22:14:03 Public service announcement: I will not waste time correcting statements about what I allegedly said or not said, whether I meant something or another, whether I changed my mind or not, particularly when they come from people whose main motivation is to show that I was wrong.
2022-10-18 22:05:21 Elliptical Episodic Bonuses (E3B): intrinsic motivation to drive task-independent exploration in varying environments.From Meta-FAIR @NeurIPSConf . https://t.co/JkvjPeJmUY
2022-10-18 20:44:36 RT @RichardGarriott: Vote Out the insurrectionists, autocrats, and those pushing their religion into your health. Vote out MAGA Republicans…
2022-10-18 20:35:21 @dmonett @PDillis @Grady_Booch No. These systems *learn* from existing art. If a piece produced by such a system copies or reproduces an existing piece by another artist, that constitutes copyright infringement. But if the piece is merely *inspired* by other artists, then it's the same as what human artists do.
2022-10-18 18:04:03 @adawan919 @Grady_Booch As soon as I post a paper on ArXiv, I know it can be used by anyone to build upon. In fact, I *hope* it will be used by as many people as possible.
2022-10-18 17:56:37 @PDillis @dmonett @Grady_Booch Yes
2022-10-18 17:55:30 @BoseShamik @Grady_Booch In science, one must cite both prior art (influential or not) and works that were influential.In art, artists rarely list what influenced them at the bottom of their opus.
2022-10-18 17:51:00 @dmonett @Grady_Booch If the result was plagiarizing his work, then that would be unethical. If his work was combined with others to create something new, then absolutely not. This is exactly what scientists do. Whether the "scientist" in question is human or not is completely irrelevant.
2022-10-18 17:45:49 @DrHughHarvey Sumit Chopra, K. Geras, both at NYU Radiology
2022-10-18 17:44:07 RT @MetaAI: Who’s coming to #ECCV2022 in Tel Aviv? For those attending virtually, we’ll be at @_LXAI’s event on October 24th. And for those…
2022-10-18 17:43:11 @roydanroy @boazbaraktcs @Grady_Booch Human calculators who could mentally compute long operations, logarithms, and other things were "understandably concerned" when electronic computers appeared.
2022-10-18 17:35:56 @Grady_Booch I am thankful for the fact that I do *not* need your consent to read your papers, get inspired by them, and build new ideas, theories, algorithms, systems, even products based on that.
2022-10-18 17:26:51 @Grady_Booch @synoptase Anyone interested in the ethical issues surrounding copyright and IP should read "Free Culture" by Lawrence Lessig.
2022-10-18 17:24:36 @boazbaraktcs @Grady_Booch Anyone interested by the real ethical issues around copyright and IP should read "free culture" by Lawrence Lessig.
2022-10-18 17:22:25 @boazbaraktcs @Grady_Booch Culture really wants to be free and free flowing. Copyright (and other IP instruments) exist to incentivize creation. Straight plagiarism is already illegal. But "getting inspiration from" isn't, and shouldn't be, whether the entity being inspired is a human or not.
2022-10-18 16:55:53 @mtnmarcus What do you think the FB's intent was in relation to Cambridge Analytica?
2022-10-18 13:05:21 Highly visible scientific fields with Big Open Questions attract the interest of many people.Some good: mathematicians, physicists, philosophers, educators, serious science journalists, young students, philanthropists... Some not so much: trolls, crackpots, politicians,.... https://t.co/eAlB4aa2WV
2022-10-18 13:04:40 @rao2z Highly visible scientific fields with Big Open Questions attract the interest of many people.Some good: mathematicians, physicists, philosophers, educators, serious science journalists, young students, philanthropists... Some not so much: trolls, crackpots, politicians,....
2022-10-18 12:56:31 RT @gabrielpeyre: Least square is the most fundamental data analysis method. Gauss or Legendre? https://t.co/LJHUYbYJgu https://t.co/PbsUDu…
2022-10-18 12:50:50 RT @johncarlosbaez: In this 2017 interview Witten said he was getting interested in quantum information theory as a way to make progress in…
2022-10-17 23:32:36 @Grady_Booch @synoptase Should that also be applied to human artists? No one could get inspired or influenced by existing artists without their permission? It would be the death of all artistic creation. Why should the rules for machine learning be any different from those for human learning?
2022-10-17 23:28:00 RT @AlecStapp: Finally! Very glad to see the Germans come to their senses. https://t.co/7fXmbJb6uw
2022-10-17 23:02:30 RT @davidchalmers42: the video for my talk last week on "are large language models sentient?" is now online. https://t.co/kfCvhtVHLG
2022-10-17 06:11:39 @pfau @kchonyc @McAllesterDavid The whole idea of contrastive loss is much older than that anyway. I mean, the original Siamese net (1993) used a contrastive loss. And denoising auto-encoders were around in the 1980s.....
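The margin-based contrastive loss mentioned above, as used with Siamese nets, can be sketched in a few lines. This is a generic sketch (the quadratic form and the margin value are illustrative choices, not the exact 1993 formulation):

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Contrastive loss for a Siamese pair: pull embeddings of matching
    pairs together, push non-matching pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2)                # Euclidean distance between embeddings
    if same:
        return 0.5 * d ** 2                    # positives: quadratic attraction
    return 0.5 * max(0.0, margin - d) ** 2     # negatives: repel only inside the margin
```

A matching pair at zero distance costs nothing, and so does a non-matching pair beyond the margin; only the "wrong" configurations incur a penalty.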
2022-10-07 23:28:06 @amcafee You can't expect a bunch of investors with a 3-month horizon, a 10-minute attention span, and zero understanding of science and engineering, to understand the value of long-term technological bets and the necessity of investment in R&
2022-10-07 23:15:58 @mahemoff @BillHiggins @Grady_Booch Indeed. "Move fast and break things" was abandoned 8 years ago, both the slogan and the ethos.
2022-10-07 23:14:23 @kgb1001001 @Grady_Booch When a bug causes a major service disruption, or a mere few percent of excess computation, the resulting cost may be in the millions per hour.That creates a huge incentive to get it right.
2022-10-07 23:10:20 @KyleCranmer @theory_dad We materialize energy, and vice versa.
2022-10-07 22:39:20 @Grady_Booch I think you grossly underestimate the level of expertise of ML software engineers at Meta, Google, and other places. I mean, they routinely deploy ML code used by billions of people, running on millions of CPUs (or other {X}PUs).
2022-10-07 22:24:44 RT @riceasphait: Introducing "Generalised Implicit Neural Representations"!We study INRs on arbitrary domains discretized by graphs.A…
2022-10-07 19:45:49 @rao2z
2022-10-07 19:19:42 RT @ThibaultTellie5: Bravo to @univ_lille, which made the decision to name one of its lecture halls after Samuel Paty. The mention of the terror…
2022-10-07 19:16:13 This is what I *actually* looked like as a 5 year old. https://t.co/MEy2uZNv4q https://t.co/1su6lZMspX
2022-10-07 18:58:13 RT @fpa: @geoffreyhinton, @ylecun, Yoshua Bengio y @demishassabis, Premio Princesa de Asturias de Investigación Científica y Técnica 2022,…
2022-10-07 18:52:05 @chris_jwala Yeah. And by the way, it's Poincaré
2022-10-07 13:35:06 RT @MetaAI: Just under 3 weeks to join the #MyoChallenge! Develop a solution on MyoSuite for teaching a physiologically realistic musculos…
2022-10-07 13:30:05 RT @femtechinsider: Are you building a #digitalhealth app, and looking for the right technology to drive patient engagement? Meet @Nabla…
2022-10-07 12:20:40 @Sonny_AD Yes. Lots of "Le Cune"
2022-10-07 12:19:57 @albn You'd be surprised. The French straight and short 'a' sound doesn't really exist in American English. It's either yenn or yawn.
2022-10-07 12:16:37 @chris_jwala And most people don't even pronounce their own name as it was originally pronounced. I don't even *spell* my name the way it's supposed to be, which is 'Le Cun'
2022-10-07 12:13:54 @bobme Better than a Messerschmitt aggression (with two 't's)
2022-10-07 12:10:07 @yuhsinjc It's not obvious even for some French people.
2022-10-07 12:07:58 @grvsmth Gree-ehve Smeess?
2022-10-07 02:59:58 @hardmaru No, but some attempts by English speakers to pronounce my last name the French way end up sounding like the French counterpart of the word you have in mind.
2022-10-06 12:48:56 RT @EvaSmartAI: "Common sense can be seen as a collection of models of the world that can tell an agent what is likely, what is plausible,…
2022-10-06 12:35:31 RT @gabrielpeyre: The Fourier transform diagonalizes convolution operators aka circulant matrices aka linear operators which commute with t…
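The fact retweeted above, that the Fourier transform diagonalizes circulant (circular convolution) operators, can be checked numerically. A small sketch for the 1-D case:

```python
import numpy as np

# Build a circulant matrix C from its first column c: C[i, j] = c[(i - j) % n].
# Applying C is circular convolution with c, and the DFT diagonalizes it:
# the eigenvalues of C are exactly the entries of fft(c).
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
C = np.column_stack([np.roll(c, k) for k in range(n)])  # each column is a shift of c

x = rng.standard_normal(n)
direct = C @ x                                          # convolution as a matrix product
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # pointwise product in Fourier
```

The pointwise multiplication in the Fourier domain is the "diagonal" action: conjugating C by the DFT matrix yields diag(fft(c)).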
2022-10-06 05:15:58 RT @eddiemajor: @MetaAI @ylecun @TatyanaZaria @VICE "Human and non-human animals seem able to learn enormous amounts of background knowledg…
2022-10-06 05:11:09 @Observing2022 @francoisfleuret I believe this idea may have originated with Olivier Costa de Beauregard.
2022-10-06 04:57:46 @rao2z Thanks for watching!
2022-10-06 04:54:34 @rao2z Haha!
2022-10-06 04:48:26 RT @TanyaShreedhar: One of my fondest memories of #HLF22 was discussing the limitations of #RL with the godfather of deep learning @ylecun!…
2022-10-05 23:28:49 RT @LerrelPinto: Excited to release AuRL, a new framework for learning dynamic manipulation from sound! By using just desired sounds as inp…
2022-10-05 23:26:33 RT @MetaAI: Value-Implicit Pre-Training (VIP), led by @JasonMa2020 and team, is a self-supervised visual representation trained on large-sc…
2022-10-05 17:43:17 @francoisfleuret Same thing. It's just that the secret latent variables cannot be local.
2022-10-05 14:59:31 VICRegL matches local features, enabling the system to predict the representation of one image by moving the local features of a distorted version. Great results on Pascal VOC and ADE20k after VICRegL pretraining on ImageNet-22k with a linear head, both with and without fine-tuning
2022-10-05 14:58:10 TL
2022-10-05 14:57:29 NeurIPS'22: "VICRegL: Self-Supervised Learning of Local Visual Features" Adrien Bardes, Jean Ponce, and Yann LeCun. Paper: https://t.co/ut1GothPaG Code and pretrained models: https://t.co/D10wJjrZUx... https://t.co/i2lCH6tlsQ
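For readers unfamiliar with the VICReg family behind VICRegL, here is a rough sketch of the variance-invariance-covariance criterion on two batches of embeddings. The loss weights and the hinge target of 1 follow the VICReg paper's defaults, but treat the exact formulation below as an assumption, not the released implementation:

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg-style criterion on two (n, d) batches of embeddings:
    - invariance: two views of each sample should embed to the same point,
    - variance: keep the std of every embedding dimension above 1 (hinge),
    - covariance: push off-diagonal covariances toward zero (decorrelation)."""
    n, d = z1.shape
    inv = np.mean(np.sum((z1 - z2) ** 2, axis=1))   # invariance term

    def variance(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))  # hinge on per-dimension std

    def covariance(z):
        zc = z - z.mean(axis=0)
        cov = zc.T @ zc / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d            # squared off-diagonal entries

    return (sim_w * inv
            + var_w * (variance(z1) + variance(z2))
            + cov_w * (covariance(z1) + covariance(z2)))
```

Unlike contrastive methods, no negative pairs are needed: the variance and covariance terms alone prevent the embeddings from collapsing to a constant.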
2022-10-05 11:07:49 RT @AlecStapp: Nearly two-thirds of U.S. graduate students in artificial intelligence and semiconductor-related programs are born abroad.…
2022-10-05 02:57:06 RT @DCTannoudji: Impressive French school of quantum physics. Alfred Kastler, 1963 Nobel Prize, supervised the thesis of Claude Cohen-…
2022-10-04 21:43:48 @HulsmanZacchary Yi's MCR2 method is very similar to our VICReg method. We are thinking along the same lines.
2022-10-04 18:05:41 Video of my Berkeley talk: https://t.co/XKIboqzbhE Paper: https://t.co/7ZgRtLIQWY
2022-10-04 18:04:26 Nice article at Vice that gives the gist of my recent proposal for autonomous AI architectures. It derives from my colloquium talk at UC Berkeley last week. https://t.co/VspECDDWDW
2022-10-04 17:40:40 RT @THayes427: Need help imagining what your next animated character could look like? Make-A-Video can help with that #MetaAIMakes https:…
2022-10-04 16:50:26 @HODLFrance I would.
2022-10-01 02:42:50 RT @schrep: Excited to combine this with Make-A-Video in the future to be able to construct full video and audio sequences from a text prom…
2022-10-01 02:19:21 @mraginsky @beenwrekt @kamalikac @DimitrisPapail No joke. Have you looked at the G-Scholar rankings? You may disagree with G-Scholar's methodology, or with the relevance of the results, but I'm definitely not making this up.
2022-10-01 02:15:43 @robpiercy Did you read that in Murdoch's Wall Street Journal? Academic studies are very divided on the question of the effect of social media on politics. Review: https://t.co/5vxGbafMbh Annotated list of scholarly studies: https://t.co/Ri5NLKfDHp
2022-10-01 02:01:51 RT @LEGENRA: Nice chart this morning in @le_Parisien on the cost of electricity in Europe. Do the French realize that their f…
2022-10-01 01:49:38 No sh*t. Murdoch's disinformation machine will be the first to blame social media for it. But that's because social media are eating what he considers his advertising lunch. https://t.co/eNFU4Hf6iV
2022-10-01 01:38:42 @beenwrekt @kamalikac @DimitrisPapail Enormous impact: https://t.co/WzNDqB18qg
2022-10-01 01:38:04 @florian_tramer @beenwrekt @kamalikac @DimitrisPapail It's actually ahead of NeurIPS, and way ahead of ICML in terms of impact: https://t.co/WzNDqB18qg
2022-10-01 01:36:49 @mraginsky @beenwrekt @kamalikac @DimitrisPapail ICLR was the first to use https://t.co/tOM7lHcmSz Others followed. In a mere 9 years, ICLR has become the 9th highest-impact publication venue in all of Science, according to G-Scholar, ahead of NeurIPS, PNAS and many other distinguished journals: https://t.co/WzNDqAIZc8
2022-10-01 01:31:10 @mraginsky @beenwrekt @kamalikac @DimitrisPapail ...Simultaneously, we had been frustrated with the reviewing process of other conferences, and thought an open reviewing process would be better. My desire was for it to be completely open (and hence single blind). But subsequent program chairs insisted on double blindness.....
2022-10-01 01:28:39 @mraginsky @beenwrekt @kamalikac @DimitrisPapail No, not really. The real story is that Yoshua and I had been running the invitation-only Snowbird Workshop (me since 1997). It had become the headquarters of the growing deep learning community. There was a need for an open conference. So we turned it into ICLR in 2013....
2022-09-30 22:13:37 RT @rasbt: sometimes I forget to appreciate how small CNNs can be https://t.co/AxzLPweAFw
2022-09-30 20:20:04 @david_picard The system was implemented in C. But we needed an interactive frontend language and settled on Lisp because Lisp is flexible, compact, and easy to write an interpreter for. The project was started in 1987, in the last semester of my PhD and the last year of Leon's degree.
2022-09-30 20:17:01 @belsebubb Yes, you are probably right. I'm not taking credit. Merely pointing out that the concept is older than what many people might believe, including in the context of deep learning.
2022-09-30 20:10:02 New preprint: VICReg and SimCLR in the kernel regime, with some theory. https://t.co/7T3Z2Phz70
2022-09-30 19:06:52 I should have said that this was first implemented in 1991.
2022-09-30 19:04:59 @francoisfleuret Not that I know of. But I might be wrong. Our system was first implemented in 1991.
2022-09-30 19:00:53 I, for one, welcome our future high-resolution, heroic, insomniac, water-resistant, luminiferous, robotic overlords. https://t.co/LtwGbOaHxF
2022-09-30 18:53:34 AudioGen Text->
2022-09-30 18:50:56 RT @schrep: prompt: "A robot heroically walking in the rain, at night, in urban city, cinematic, 4k, sharp focus, emitting diodes, smoke, a…
2022-09-30 18:48:11 @MilesCranmer \renewcommand{\baselinestretch}{0.95}
2022-09-29 14:46:51 It had to happen: Generating video from a textual description or from a photo. From Meta-FAIR https://t.co/tnMN2NWbQP https://t.co/Qy8HYMrzAM
2022-09-29 12:20:54 @newplatonism @JFPuget @GaryMarcus There is plenty of critique in science. In fact, peer review is designed exactly for that purpose. Anonymity and requirements of reproducibility enable unfettered critique. But notice the "peer" in "peer review". Reviews are supposed to be made by peers.
2022-09-29 10:51:06 @francoisfleuret And both remarks are made by the very same person.
2022-09-29 06:50:02 @yudapearl @artistexyz @pabbeel What works are you thinking of?
2022-09-29 00:10:14 RT @pabbeel: Key open challenges @ylecun listed https://t.co/ARkHlblnJF
2022-09-29 00:09:49 Yup. https://t.co/o31SuNvyBd
2022-09-28 17:25:08 RT @NYU_Courant: Professor Yann LeCun (@ylecun) spoke at Berkeley's Electrical Engineering and Computer Science Colloquium yesterday. His e…
2022-09-28 15:59:53 Slides: https://t.co/tj2eAS3j3N Paper: https://t.co/7ZgRtM0rOw
2022-09-28 14:04:46 Video of my colloquium talk at Berkeley EECS yesterday."A path towards autonomous machine intelligence" https://t.co/EVobXFl1s8
2022-09-27 18:04:13 @xamat Hahaha!
2022-09-27 16:48:25 Indeed! https://t.co/IpG04fsbXN
2022-09-27 16:45:49 RT @ai_ngrosso: Convolutional structure learned from scratch in fully connected neural networks. A tale of non-gaussianity and cumulant ten…
2022-09-27 16:10:48 @GaryMarcus @Zergylord @rao2z @MITCoCoSci @guyvdb @HenaffMikael @alfcnz So, planning rocket trajectories is not the real world? Industrial robot trajectory planning is not the real world? Because MPC is what *every* *single* real-world motion planning system uses. The modern challenge is using *learned* world models that can deal with uncertainty.
2022-09-27 16:03:58 @juancervinouy @GaryMarcus You got that right. I prefer constructive contributions.
2022-09-27 15:57:24 @GaryMarcus @trading_noise Sorry Gary, I'm not getting back into this debate. It's a complete waste of time. There is none so deaf as he who will not hear.
2022-09-27 15:49:28 @GaryMarcus @Zergylord @rao2z @MITCoCoSci @guyvdb @HenaffMikael @alfcnz It's not speculation. It's classical Model Predictive Control! This kind of thing has been standard for 60 years for motion planning. And there are *tons* of recent papers on MPC with DL-trained models.
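Model Predictive Control, as invoked above, plans by rolling a (possibly learned) dynamics model forward over candidate action sequences and executing only the first action of the best one. A minimal random-shooting sketch, with a toy linear plant and a hypothetical quadratic cost (real MPC would use gradient-based or CEM optimization rather than pure random sampling):

```python
import numpy as np

def mpc_action(model, x0, horizon=10, n_samples=256, rng=None):
    """Receding-horizon control by random shooting: sample action sequences,
    roll the dynamics model forward, score each rollout with a running cost,
    and return the first action of the cheapest sequence."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_u0 = np.inf, None
    for _ in range(n_samples):
        u_seq = rng.uniform(-1.0, 1.0, size=horizon)
        x, cost = x0, 0.0
        for u in u_seq:
            x = model(x, u)                     # predicted next state
            cost += x ** 2 + 0.01 * u ** 2      # drive the state to 0, keep actions small
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0   # in a real loop: execute it, observe, and re-plan

# Toy linear plant x' = x + u, starting at x0 = 1: the planner should push x down.
u0 = mpc_action(lambda x, u: x + u, x0=1.0)
```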
2022-09-27 15:42:00 @GaryMarcus @trading_noise Dismissing your vague proposal for explicit (discrete) symbol manipulation mechanisms (because they would be incompatible with gradient-based learning) does not amount to dismissing the entire neurosymbolic literature.
2022-09-27 15:35:49 @GaryMarcus @trading_noise I have been dismissing:- your constant dismissal of DL, - your mischaracterization of my cognitive architecture as a dismissal of DL, - your caricature of me as summarily dismissing all structure.- your shifting definition of "symbolic" so you can declare victory.
2022-09-27 15:24:58 @GaryMarcus @trading_noise My opinions have not changed.Gary's understanding of my opinion may have changed.In any case, I'm not the one who has been dismissing a literature. I certainly never dismissed whatever it is that you call the neurosymbolic literature....
2022-09-27 15:14:58 @GaryMarcus @Zergylord @rao2z @MITCoCoSci @guyvdb @HenaffMikael @alfcnz I don't know if that constitutes enough "innate structure" for you to declare victory.It seems to me that you may do so regardless.I'm saying this as someone whose entire career has been built on proposing structure for learning systems.
2022-09-27 15:12:04 @GaryMarcus @Zergylord @rao2z @MITCoCoSci @guyvdb @HenaffMikael @alfcnz The cognitive architecture in my position paper does not rely on symbols (only vectors and differentiable function) but requires an "innate" planning/reasoning/optimization mechanism to search over action sequence (&
2022-09-27 15:06:04 @GaryMarcus Look, I'm interested in moving progress forward, which occurs through concrete proposals, implentations, empirical results, and theoretical results.I'm considerably less interested in debating the drift of concept definitions so that one party or another can declare victory.
2022-09-27 14:57:20 @AVMiceliBarone @GaryMarcus @yudapearl @TiernanRayTech That's why my position paper proposes to use a *single* world model and to configure it dynamically for the situation at hand.This enables the model to share common knowledge between situations.
2022-09-24 22:14:47 @__goldfinger @JagersbergKnut @mustafasuleymn You are conflating several different things: scientific advance, technology development, product implementation, and market penetration.
2022-09-24 22:08:57 @WickedViper23 @mustafasuleymn The idea of self-organization is not missing. In fact, it is at the core of modern neural nets. There were entire conferences on "self-organizing systems" going back to the early 1960s. https://t.co/IPlbpmrLTH
2022-09-24 22:03:06 @grbradsk @mustafasuleymn https://t.co/7ZgRtLIQWY
2022-09-24 21:59:55 RT @TiernanRayTech: If you simply believe today’s deep learning will scale to infinity, you would do well to consider the sage observations…
2022-09-24 21:49:28 RT @voixdunucleaire: Spécial #StandUpforNuclear ce samedi à #Strasbourg Place Kléber : totalité des déchets nucléaires HA, vitrifiés…
2022-09-24 21:39:21 @chrislengerich @0majors @hardmaru @JP_GHIBLI Cute.
2022-09-24 21:25:37 RT @kchonyc: https://t.co/5LJ7G8Izyy
2022-09-24 08:28:41 "Let's be environmentally conscious by shutting down a zero-emission nuclear power plant and replacing it with non-existent Russian gas." https://t.co/0ssP5rAJ3z
2022-09-24 08:24:23 @francoisfleuret Oh, I get it: you are talking about G.W. Bush, Dick Cheney, and their Neocon plan for a "New American century" whose first move was to gratuitously invade Iraq, right?
2022-09-24 08:08:59 @mustafasuleymn I am always surprised by the incomprehensibly large number of people who believe the myth that human-level AI will be achieved through one key idea, by one group of people, in one lab/company, that will stay ahead of the pack for years (and make trillions). It's preposterous.
2022-09-24 07:29:38 @lxbrun That's because it can go both ways, no?
2022-09-23 18:17:08 RT @scienceisstrat1: Decoupling growth and CO2 emissions is now a global phenomenon Cc: @Noahpinion @dwallacewells @ramez @JesseJenkins h…
2022-09-23 18:08:28 RT @maksymeristavi: my family was just forced to vote at gunpoint in russian cosplay of a “referendum” in southern ukraine:- they come to…
2022-09-23 15:15:33 @TheRandomMtrix @hardmaru That would be called retrodicting
2022-09-23 09:36:50 Fame! https://t.co/Ios0OO3Y6H
2022-09-23 09:33:41 @0majors @hardmaru @JP_GHIBLI New prompt suggestion: "Yann LeCun as Totoro"
2022-09-23 09:29:42 @hardmaru I look so young! It got the blue sweater right.
2022-09-23 07:00:58 ECML-PKDD organizing committee https://t.co/Gq2p8CHICL
2022-09-23 06:40:54 Giving a keynote at @ECMLPKDD in 20 minutes.
2022-09-22 18:36:53 @nimaone111 @alfcnz @Meta @MetaAI @ykilcher It is the Iranian regime that blocks WhatsApp, without cooperation from Meta. Iran already blocks FB. https://t.co/PqAMdrwecu
2022-09-22 18:31:55 @nimaone111 @alfcnz @Meta @MetaAI @ykilcher I would suggest that, before you accuse people of immoral behavior, you check whether what you observe is actually caused by what you think.For starters, Meta does not operate in China at all, precisely because it doesn't want to cooperate with the Chinese government.
2022-09-22 18:14:10 RT @TheOfficialACM: Livestreaming Now: #HLF22 panel "Communication in Crisis?" Tune in to hear moderator @evawolfangel , #ACMTuringAward re…
2022-09-22 17:54:32 Congratulations @koraykv !!! https://t.co/keh8IM5iZK
2022-09-22 17:49:57 RT @NYUDataScience: Grace W. Lindsay (@neurograce), joined CDS this fall as an Assistant Professor of Psychology and Data Science. Read mor…
2022-09-22 17:48:20 RT @gabrielpeyre: The proximal point algorithm is the simplest non-smooth optimization method. While often intractable, it is the basis for…
2022-09-22 17:44:47 RT @NewYorkStateAG: Our suit against Donald Trump and the Trump Org alleges a years-long financial fraud scheme. Here’s how Trump generat…
2022-09-22 16:50:39 RT @NanaYaaSally: Most anticipated session @HLForum with @shakir_za @ylecun @_beenkim Dina Machuve, Sanjeev Arora, Geoffrey Hinton, Raj Red…
2022-09-22 16:48:46 @demishassabis @DeepMind Congrats Demis!
2022-09-22 16:02:39 RT @ziv_ravid: This is a great opportunity! I highly recommend it!
2022-09-22 16:02:20 RT @EricTopol: @NatureMedicine Comprehensive analysis of #LongCovid neurologic outcomes at 1 year from >
2022-09-22 15:08:25 Hilarious. But I want to stress that I have *never* worn a jacket and tie in my life. Certainly not at Bell Labs. (Though I've been known to wear bowties in exceptional circumstances.) https://t.co/Ok71bMC59J
2022-09-22 15:07:32 @hardmaru Hilarious. But I want to stress that I have *never* worn a jacket and tie in my life. Certainly not at work. (Though I've been known to wear bowties in exceptional circumstances.)
2022-09-21 23:50:56 RT @anilananth: Is the public losing faith in science? Is 'faith' the right word? What can scientists and communicators do about it? I'll b…
2022-09-21 23:46:51 @davidchalmers42 Regardless, we can come up with a situation whose resolution requires a physical intuition never described in any text, but easily resolved by anyone with experience of the physical world.
2022-09-21 23:36:59 RT @MetaAI: Join us for virtual speaking sessions at AI@Scale next Weds Sept 28 as Meta AI researchers @ylecun, Ludovic Hauduc, Kaushik Vee…
2022-09-21 23:26:41 @davidchalmers42 Can an LLM figure this out: In front of us are six gears numbered 1 to 6, mounted on axles in a row. Each gear is engaged with the next gear. If gear number 3 is rotated clockwise, in which direction will gears 1 and 6 rotate?
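For what it's worth, the gear puzzle has a purely mechanical answer — meshed gears alternate rotation direction, so everything reduces to parity. A few lines of Python make the reasoning explicit (a sketch of the physical logic, not anything an LLM produced):

```python
# Meshed gears in a row alternate direction, so gear i turns the same
# way as the driven gear iff the number of meshes between them,
# |i - driven|, is even.
def gear_direction(i, driven=3, driven_dir="clockwise"):
    """Direction of gear i when gear `driven` rotates `driven_dir`."""
    other = "counterclockwise" if driven_dir == "clockwise" else "clockwise"
    return driven_dir if (i - driven) % 2 == 0 else other

print(gear_direction(1))  # clockwise (two meshes away from gear 3)
print(gear_direction(6))  # counterclockwise (three meshes away)
```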
2022-09-21 10:33:51 @DuongBinhNhu1 @HLForum It was nice to meet you and chat about cybernetics and the history of control, systems theory, pattern recognition, and ML.
2022-09-21 10:25:59 RT @TheOfficialACM: Communication in #Crisis ? Join moderator @evawolfangel , #ACMTuringAward recipient @ylecun , journalist @anilananth ,…
2022-09-20 22:12:13 RT @rodneyabrooks: This is an actual picture from Trump's rally in Youngstown OH on Sept 16, 2022. If you or the GOP can excuse this then…
2022-09-19 05:35:43 RT @AlecStapp: "a generous welfare state makes people not want to work"(united states is the line on the bottom) https://t.co/vqXxT2lYyD
2022-09-19 04:08:55 RT @ccanonne_: Answers and discussion for last week's quiz on lower bounds for small-depth circuits: "What can we compute with reasonable-…
2022-09-18 20:27:41 @ThomasW423 Who claimed that's what you hear from the Left?
2022-09-18 20:26:31 @davidwbrw Income redistribution doesn't equate to "oppression". It means less poverty, more access to education and healthcare, more social mobility. It may mean more taxes, but not necessarily: it's more a matter of what taxes are used for (e.g. education vs prisons, health vs defense...).
2022-09-18 19:54:59 @davidwbrw The USA is pretty horrible, actually. Not only is it somewhat regulated, it also has obscene wealth and income disparities. Being at the low end of this chart (below the main diagonal) is bad, not good.
2022-09-18 19:44:24 RT @Noahpinion: Since 2010, global per capita CO2 emissions have fallen, even as economic growth has continued steadily.Degrowth is snake…
2022-09-18 19:24:56 Some countries have high economic freedom *and* fiscal policies with large income transfers (from rich to poor). They are the ones that seem to do well. This invalidates usual narratives from the Right ("taxes kill freedom") &
2022-09-18 15:22:42 @mmbronstein @gordic_aleksa There are 2 main types of SSL architecture: generative (auto-encoders and variants) and joint embedding (e.g. Siamese nets, etc). Both are much older than 10 years, but their empirical success is less than 10 years old.
2022-09-18 15:16:43 @emot @github I found it again recently and haven't tried to power it up yet.
2022-09-18 15:04:05 @github Synertek VIM-1, 1978. 1 MHz 6502, 1 KB RAM https://t.co/NdUBjbNx1Y
2022-09-17 22:19:55 RT @erikbryn: @ericschmidt @elonmusk @CondoleezzaRice @scsp_ai @Ukraine Boston-NYC could be a 1 hour, smooth, frequent train ride from down…
2022-09-17 21:03:02 Me. https://t.co/9P7aOHfLKr
2022-09-17 17:49:41 RT @ssmonsays: Given the recent excitement about permutation symmetries of neural networks and linear mode connectivity, we are happy to sh…
2022-09-17 17:33:55 @wellingmax @RichardSSutton Some authoritarian idiot, regardless of the claimed ideology (fascism, communism, religious extremism, etc)
2022-09-17 15:13:27 @erikbryn And your arms a lot stronger.
2022-09-17 02:20:38 RT @ziv_ravid: Our work on learning informative priors for transfer learning was accepted to #NeurIPS2022.Once again, thank you to all the…
2022-09-17 02:20:10 @rasbt Denoising AEs are much older than 10 years. But yes, diffusion models are essentially multi-step denoising AEs.
2022-09-17 02:12:07 @adjiboussodieng @MakeItAQuote LOL
2022-09-16 23:00:09 RT @jburnmurdoch: NEW: income inequality in US &
2022-09-16 22:41:07 @neurograce Greetings from the 5th floor!
2022-09-16 18:54:06 Looks like people stopped wanting cherries on cakes. https://t.co/E1j4aRCrDb
2022-09-16 18:51:46 @arthur_spirling It's actually "détour", and it means "alternative route"
2022-09-16 18:46:36 @ziv_ravid True story.
2022-09-16 12:45:51 I should say that #3 includes graph neural nets, which I see as a major conceptual advance (albeit somewhat subsumed by transformers).
2022-09-16 12:44:38 @ZQtLk2Vl6pUrxMc Graph neural nets and transformers are examples of dynamic graph networks. You can think of them as NN modules connected through a graph that changes dynamically with the input data.
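One way to picture "NN modules connected through a graph" is a single message-passing step: each node averages its neighbors' features and pushes the result through a shared module. The NumPy toy below is purely illustrative — the graph, weights, and ReLU module are all made up, not any specific published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_step(H, A, W):
    """One dynamic-graph-network step: every node aggregates its
    neighbors' features (per adjacency matrix A, which may change
    with each input), then applies a shared NN module W + ReLU."""
    deg = A.sum(axis=1, keepdims=True)          # neighbor counts per node
    agg = (A / np.maximum(deg, 1.0)) @ H        # average over neighbors
    return np.maximum(agg @ W, 0.0)             # shared linear module + ReLU

H = rng.normal(size=(4, 8))                     # 4 nodes, 8 features each
W = rng.normal(size=(8, 8))                     # module shared by all nodes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)       # a 4-node path graph
H2 = message_passing_step(H, A, W)
print(H2.shape)                                 # (4, 8)
```

Because `A` is just data, a different input can come with a different graph while `W` stays shared — that is the "dynamic" part of the idea.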
2022-09-16 12:42:30 @EvoOzm The first papers on this are way older than 10 years.- 1993: https://t.co/elnVChgXOZ- 2005: https://t.co/opglbiRNbV- 2006: https://t.co/qKCMVunNiS
2022-09-15 22:02:14 @johncarlosbaez Friction is a possibility. But it should be possible to build a version of this where suction is the mechanism, and friction plays no role.
2022-09-15 21:46:39 @SilverStar_92 @davidchalmers42 I agree that "unbounded inference / thinking time" will make a comeback. I disagree that it will be in the form of recurrent architectures. I have another proposal here: https://t.co/7ZgRtLJoMw Hint: It's inference of latent variables.
2022-09-15 21:45:17 @hbouammar @davidchalmers42 Multi-step auto-encoders. Yeah, it's cool.
2022-09-15 21:44:47 @_krr12 @davidchalmers42 https://t.co/7ZgRtLJoMw ?
2022-09-13 02:09:36 RT @Jake_Browning00: A Spanish translation of the recent piece with @ylecun in @NoemaMag! Thanks to @gienini
2022-09-13 01:14:03 LOL https://t.co/tvhJcYYKqv
2022-09-12 22:07:32 RT @soumithchintala: Schrep has been one of the fiercest supporters and drivers of @PyTorch .Great thread from him with his perspective on…
2022-09-12 22:06:01 RT @schrep: A big moment for a key piece of open infrastructure powering the AI revolution. Introducing the #PyTorchFoundation! https://t.…
2022-09-12 20:26:12 @RMajdoddin Nope. Meta is still all in on PyTorch for internal R&
2022-09-12 20:24:59 @lc_rs3397 No.
2022-09-12 20:23:55 @GiorgioMantova @Meta The amount of engineering support for PyTorch within Meta is certainly not going down.
2022-09-12 20:22:43 @tannguyen2013 It ensures that continued support is not subject to resource allocation decisions in one company. With this structure, there will be support as long as there are users who care sufficiently.
2022-09-12 20:17:02 RT @_willfalcon: This is a MAJOR milestone for the AI community!! to say i’m super excited is an understatement. thanks for your leadersh…
2022-09-12 17:01:14 RT @MetaAI: Today, Mark Zuckerberg announced the launch of the #PyTorchFoundation under the @LinuxFoundation &
2022-09-12 17:00:15 RT @linuxfoundation: BIG NEWS: @Meta has transitioned @PyTorch to the Linux Foundation Since 2017, PyTorch has grown to become a leadin…
2022-09-12 16:52:46 @tannguyen2013 1. PyTorch serves a diverse community, which should influence directions of development. 2. An independent foundation ensures continued support, independently of the decisions, interests, or resources of a single company.
2022-09-12 16:50:41 @GiorgioMantova No. More resources from Meta, and way more resources from contributors other than Meta, now that PyTorch is a perennially open community project.
2022-09-12 16:29:16 PyTorch is now handled by an independent foundation, the #PyTorchFoundation, which is under the #LinuxFoundation umbrella. https://t.co/DnrdhWryKv
2022-09-12 00:40:10 Trajectory planning with 800 million neurons. https://t.co/mQ1Vm1zs3H
2022-09-12 00:36:02 @McAllesterDavid @ilyasut @jurafsky @manning @percyliang Our main argument in favor of the grounding hypothesis is that the structure of the world is way too rich and complex to be expressed by humans in linguistic form. Hence learning from pure human-produced text cannot possibly lead a machine to a deep understanding of the world.
2022-09-12 00:35:54 @McAllesterDavid @ilyasut @jurafsky @manning @percyliang I do agree that the emergence of discrete concepts and representations preceded the emergence of language, and perhaps enabled it.
2022-09-12 00:35:36 @McAllesterDavid @ilyasut @jurafsky @manning @percyliang - "They seem to implicitly assume that nonhuman intelligence is non-symbolic."We are *not* assuming that non-human intelligence is non-symbolic (whatever you mean by that). But we do claim that it's non linguistic.
2022-09-12 00:35:05 @McAllesterDavid @ilyasut @jurafsky @manning @percyliang Glad you wrote this, though I disagree with most of it. "the grounding hypothesis: no learning algorithm [...] can learn to understand using only a corpus of text. This is a claim about the limitations of (deep) learning." No, it's a claim about the limitations of language.
2022-09-12 00:18:07 Arms race. https://t.co/5ZWgIqVQ32
2022-09-12 00:13:44 RT @erikbryn: TL
2022-09-10 15:41:46 Multiple interpretations of an ambiguous percept must be associated with multiple values of an explanatory latent variable. By *latent*, I mean that they are not outputs but internal inputs. What are the mechanisms in the brain for exploring the set of plausible values? https://t.co/DEuy0MvdBS
2022-09-10 05:34:52 RT @oeaw: How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? Chief AI Scientist at…
2022-09-10 05:34:22 RT @ziv_ravid: Our talk about "Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior" is now online! @micahgoldblum @…
2022-09-08 03:42:34 RT @BdsLoick: And when there's none left, well, there's still more! The 2021 edition of the Introduction to Deep Learning course by @ylec…
2022-09-08 02:11:41 RT @ianbremmer: china passes united states in life expectancyshould be a headline in every us newspaper https://t.co/SSo2IBL0vQ
2022-09-06 19:47:34 RT @johncarlosbaez: You can switch from sums to integrals in the definition of entropy, but be careful - a bunch of things change! When…
2022-09-06 00:48:17 RT @GwilliamsL: happy to announce the curation and release of our big natural listening MEG dataset!paper link: https://t.co/hB6jCKOgAAd…
2022-09-06 00:47:56 RT @JeanRemiKing: Our latest MEG dataset of brain activity during natural speech processing is out
2022-09-05 16:06:30 RT @HLForum: The panel discussion will be moderated by Eva Wolfangel @evawolfangel and the panelists will be Yann LeCun @ylecun, Anil Anant…
2022-09-04 22:10:56 @Gazikomde @simbaforrest Contact Professor Maurice Tchuente, Chair of the Board of Directors of the Université de Yaoundé II.
2022-09-04 14:45:26 RT @simbaforrest: I am thrilled and honored to receive this recognition. Working at NYU has been a fabulous experience, allowing me to chas…
2022-09-03 18:57:06 @RLAonline @MetaAI @JeanRemiKing @honualx @c_caucheteux You know, in properly-run countries, which includes liberal democracies, there are laws against egregious invasions of privacy. Lots and lots of existing technologies can be used today for privacy invasion. Most are either banned, severely regulated, or about to be.
2022-09-03 18:52:05 RT @mukdal: Excellent talk yesterday by @ylecun on "From Machine Learning to Autonomous Intelligence", organized by the @PrincetonAIClub. T…
2022-09-03 18:32:20 RT @NoemaMag: “If there is one constant in the field of artificial intelligence it is exaggeration: there is always breathless hype &
2022-09-03 16:12:21 RT @rob_sheridan: Oh no, someone used a *checks notes* digital art tool to win the *checks notes* digital art contest at the state fair, th…
2022-09-01 23:34:09 RT @s_y_chung: First day as an assistant professor at NYUOur lab is now up and running fully &
2022-09-01 19:00:06 @RLAonline @MetaAI @JeanRemiKing @honualx @c_caucheteux Like what?
2022-09-01 18:56:47 @KrzakalaF Reminds me of the old Soviet joke: 4 prisoners in the gulag. Prisoner 1: why are you here? P2: I wrote a pamphlet defending Vasilyev. P3: I wrote a pamphlet attacking Vasilyev. P4: I am Vasilyev.
2022-09-01 11:58:27 One thing CVPR, ICLR, and NeurIPS have in common is that their publications are all open access. https://t.co/nSw5M8Zt0j
2022-09-01 11:44:44 RT @zdeborova: Currently CVPR=#4, ICLR=#9, NeurIPS=#10 (only a year ago CVPR=#5, ICLR=#17, NeurIPS=#21) ... what a gradient! Should check i…
2022-09-01 02:17:11 RT @MetaAI: (1/3) Historically, decoding speech from brain activity has mostly benefited from invasive methods. In this new model, Meta AI…
2022-09-01 00:06:58 @sainingxie @NYU_Courant @nyuniversity @CILVRatNYU Welcome Saining....again!
2022-08-31 14:27:51 Conspiracy theories claiming that everything at the very core of your career is nothing but a lie end up getting irritating. It is the authors of these conspiracy theories who produce lies, because they have no other way of attracting attention. https://t.co/eQwJvd68qU
2022-08-30 22:54:56 Generative art is blowing up. https://t.co/WegTswDUkW
2022-08-30 19:41:42 @RichardGarriott What were they thinking? That Trump was actually going to pay his bills? Haha!
2022-08-30 19:40:36 What was the hosting company thinking? That Trump was actually going to pay his bills? Everyone in NYC knew he was never gonna pay his bills. That will teach them! https://t.co/j6zRCKerUP
2022-08-30 05:27:49 RT @EricTopol: Hey United States, not a good past 12 months in the prevention of Covid fatalities @OurWorldInData https://t.co/RN7sbj7SgS
2022-08-30 05:22:40 RT @PartnershipAI: Coming from both industry and academia, Dr. Joelle Pineau is the Managing Director of @MetaAI and is deeply knowledgeabl…
2022-08-30 05:22:33 RT @PartnershipAI: To help us achieve our mission to advance the responsible development of AI, today we announce the appointment of three…
2022-08-30 03:02:31 RT @EPrinceton: Join the @PrincetonAIClub for its kickoff event for the coming academic year. Special guest speaker will be Yann LeCun (@yl…
2022-08-30 03:01:46 Flipping through #NIPS89 proceedings, finding modern trends. https://t.co/4766CnFWcg
2022-08-30 02:57:09 Real-time composition. https://t.co/6r47tBGb6V
2022-08-30 02:52:40 RT @fjord41: Diffusion for music synthesis!We trained a “notes2audio” pipeline to synthesize audio from multi-instrument MIDI notes.Lis…
2022-08-28 22:15:12 @lkartaun Germany on a good track? It's actually pretty awful. Thankfully Germany is reexamining its insane decision to phase down nuclear. Have a look at this: https://t.co/kXF1YbcQi6
2022-08-28 22:10:44 @I2eptileX The reason for France consuming much less fossil fuel per unit of GDP than Germany is most definitely France's higher percentage of nuclear energy.
2022-08-28 22:04:56 @DrElectronX This counts *consumed* fossil fuel, whether it's imported or not. So the graph shows the correct thing.
2022-08-28 21:03:11 @I2eptileX False. France's net export of electricity was 43.2 TWh in 2020 and 87.1 TWh in 2021. In the first half of 2022, France had a modest net import of 2.5 TWh, but that's an exception due to 30 nuclear plants being shut down for maintenance or throttled down for lack of cooling water.
2022-08-28 20:08:37 Billions of $ of GDP per exajoule of fossil fuel consumption for various countries. Q: What's up with France? A: nuclear. https://t.co/CVw2Pdzl0L
2022-08-27 19:09:47 @trekkinglemon @francoisfleuret Exactly.
2022-08-27 13:46:50 Hilarious. There are similar lists for pretty much every new cultural phenomenon, particularly when caused by a new communication technology: jazz, comics, TV, cinema, radio, novels, telephone, smartphone, internet, social networks.... https://t.co/bJRHilv125
2022-08-27 12:09:14 RT @BeschlossDC: With ample evidence, this is now an espionage investigation of a former President of the United States. Mindblowing.
2022-08-27 12:07:23 RT @BeschlossDC: We have never -- ever -- seen anything close to this from any President or ex-President in American history.
2022-08-27 12:05:06 ROFL! https://t.co/BSekrbLbvI
2022-08-27 12:00:51 As I've said for years. Nice to hear it from a solar power magnate. https://t.co/fR2m20gLqV
2022-08-27 11:43:32 I'll be the first one to point out the limitations of LLMs, but I agree that they do much more than merely storing the training data and regurgitating it with a bit of interpolation. https://t.co/QVH4ye1WWC
2022-08-26 21:29:14 @reevax_ Wrong. What actually happened is that I fell asleep while doomscrolling. My phone fell on my nose and retweeted this thing that I don't even remember seeing.
2022-08-26 01:46:38 RT @vardi: The toxic culture of rejection in computer sciencehttps://t.co/nXY4Ve58uM
2022-08-26 01:01:35 https://t.co/VCOvEmmHk0
2022-08-25 11:48:43 RT @shapedai: Are humans and animals as efficient as machine learning models?In this post we simplify Yann LeCun's latest paper "A Path T…
2022-08-25 11:31:09 RT @MetaAI: Researchers from @allen_ai just reproduced &
2022-08-25 11:24:21 Interesting paper on the reasons for the increasing dominance of American research universities over the 20th century. TL
2022-08-25 04:43:31 @l4rz In fact, an infinitesimal portion. What we can't capture, we call noise, entropy, or heat. Given a box filled with gas, the information we can know (volume, mass, pressure, temperature,...) is infinitesimal compared to the information in the positions and momenta of all the molecules.
2022-08-25 04:38:19 @zxul767 @chris_jwala Precisely.
2022-08-25 04:20:27 @NotTriggerAtAll @pmddomingos @Jake_Browning00 You definitely do *not* need symbols to plan. All of classical planning in optimal control is done by minimizing an objective function with respect to latent variables representing the action sequence.
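A minimal illustration of planning as optimization, in the spirit of the tweet above: given a known, differentiable 1-D world model, gradient descent on the action sequence itself finds a plan, with no symbols anywhere. All constants below are made up for the sketch:

```python
import numpy as np

# Known differentiable "world model": a 1-D point whose position is the
# running sum of its actions. Plan T actions by gradient descent on a
# cost; the action sequence u is the latent variable being optimized.
T, goal, effort, lr = 10, 5.0, 0.01, 0.05
u = np.zeros(T)                         # initial plan: do nothing

for _ in range(200):
    x = np.cumsum(u)                    # rollout of the world model
    # cost = (final position - goal)^2 + effort * sum(u^2)
    # d/du_t of the first term is 2*(x_T - goal), since x_T = sum(u)
    g = 2.0 * (x[-1] - goal) + 2.0 * effort * u
    u -= lr * g                         # gradient step on the plan

print(np.cumsum(u)[-1])                 # final position, settles near 5.0
```

This is the skeleton of MPC-style trajectory optimization: roll the model forward, differentiate the cost through it, and adjust the continuous plan.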
2022-08-25 04:18:31 @NotTriggerAtAll @pmddomingos @Jake_Browning00 <
2022-08-24 23:11:24 This is connected to an older thread about language: https://t.co/e56f7PHG92 https://t.co/56RXa7S86g
2022-08-24 20:01:05 @TalhaIrf No. They don't trust religion anymore.
2022-08-24 19:10:26 @soboleffspaces @Jake_Browning00 Individuals of non-social animal species have lots of knowledge about the world. They acquire it by themselves without resorting to any form of communication.
2022-08-24 19:01:11 @bradpwyble @gottfriedmath @Jake_Browning00 The point is that they start to understand how the world works before they do any intervention (beyond their own limbs).
2022-08-24 11:58:34 @gottfriedmath @Jake_Browning00 A one month old baby hardly has any agency, yet acquires enormous amounts of knowledge about the world, largely by observation.
2022-08-24 11:55:01 Interesting chart. STEM fields (particularly CS) have become more popular over the last decade. Humanities less popular. Religion has fallen off the train. https://t.co/ODwL9ATc77
2022-08-24 11:43:54 RT @togelius: This is very good and thoughtful. It makes me wonder how much certain philosophies that see everything as "a text" are to bla…
2022-08-23 21:39:38 @TheRealVeedrac @Jake_Browning00 Helen Keller had the sense of touch.
2022-08-23 21:33:08 @holmesjtg @Jake_Browning00 Better question: who is trying to get a machine to learn a model of the world similar to what a dog or a crow possesses?
2022-08-23 21:30:08 @GaryMarcus @terrible_archer @ErnestSDavis 1. What is missing from LLMs is the kind of world model that many animals possess. No linguistic knowledge whatsoever. 2. If linguistic knowledge is "inherently symbolic", then all neural language models are inherently symbolic and your calls for hybrid models are unnecessary.
2022-08-23 21:23:53 RT @boztank: A profound reminder of how much further we have to go in developing intelligent machines.
2022-08-19 20:17:55 @tyrell_turing @KordingLab I agree with you 100% Blake.
2022-08-19 07:00:28 RT @HarbRimah: Meet the top Speakers at the #NVIDIA GTC 2022 (Sept. 19-22)#GTC22 #AI #DEVCommunity #Metaverse #blockchain #DeepLearning…
2022-08-19 06:50:03 RT @TrungTPhan: ASML is the most important company you've never heard of.The $220B Dutch firm makes the machines that make semiconductors…
2022-08-18 22:36:40 RT @NVIDIAAIDev: What questions do you have for #AI pioneers and Turing Award winners Yoshua Bengio, @geoffreyhinton, and @ylecun?Shar…
2022-08-18 22:34:28 RT @PrincetonAIClub: Are you @Princeton and interested in #AI? Are you not part of Princeton but interested in AI? Come to our "Welcome bac…
2022-08-18 22:32:28 RT @erikbryn: Superpower: More inventors immigrated to America than to all other nations in the world combined between 2000-2010.https:/…
2022-08-18 11:32:56 @wellingmax Pessimism entices you to give up. I'm not giving up.
2022-08-16 22:18:03 RT @neiltyson: In 1945 we defeated violent fascist leaders who controlled the press &
2022-08-16 22:12:10 RT @glouppe: The more I talk about simulation-based inference, the more I realize that the concept of an intractable likelihood is complete…
2022-08-16 10:53:16 @sbstnschmtthd @SchmidhuberAI @ykilcher @ChristianPehle The idea of searching for the input of a neural net that produces a particular output (or an output that satisfies constraints) has been around since the late 1980s. This can be done by backprop to the input.
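The backprop-to-the-input idea is easy to sketch: freeze a tiny network's weights and run gradient descent on the input until the output matches a target. The NumPy toy below uses hand-picked weights purely for illustration:

```python
import numpy as np

# Tiny fixed network: 3 inputs -> 2 tanh units -> 1 output.
# Weights are frozen and hand-picked for the sketch; only the INPUT
# is optimized, by backpropagating the error all the way to x.
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, -1.0]])
W2 = np.array([[1.0],
               [1.0]])

def forward(x):
    return np.tanh(x @ W1) @ W2

target = 1.0
x = np.zeros((1, 3))                  # the input is the free variable
lr = 0.05
for _ in range(500):
    h = np.tanh(x @ W1)
    y = h @ W2
    dy = 2 * (y - target)             # d/dy of (y - target)^2
    dh = dy @ W2.T                    # backprop through the output layer
    dx = (dh * (1 - h ** 2)) @ W1.T   # ...and through the tanh layer
    x -= lr * dx                      # gradient step on the input

print(forward(x)[0, 0])               # ~1.0: an input producing the target
```

The same loop, with constraints added to the objective, is the classic recipe behind input reconstruction and "preimage search" with neural nets.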
2022-08-16 10:33:42 RT @gabrielpeyre: Oldies but goldies: K. Levenberg, A Method for the Solution of Certain Non-Linear Problems in Least Squares, 1944. Levenb…
2022-08-16 10:28:33 I choose optimism. https://t.co/cxKAChuHBV
2022-08-16 09:35:01 @reevax_ @ampanmdagaba The same point as making a unique ceramic bowl by hand and selling it $500 vs getting a machine-made bowl for $5.
2022-08-16 08:59:28 RT @benedictevans: IMO you can believe all of the following: Apple’s privacy sermons and claims not to ‘track’ you are hypocritical bullsh…
2022-08-16 08:47:57 RT @StanDehaene: Thank you Time Magazine for citing my research on « reading in the brain ».All the evidence shows that phonics teaching i…
2022-08-16 08:43:05 RT @MetaAI: Join us for the MyoChallenge! Develop controllers for a physiologically realistic musculoskeletal hand to solve two of the mo…
2022-08-15 16:59:43 Yes https://t.co/3RSUtAuG6g
2022-08-14 20:37:50 RT @johncarlosbaez: Now let me explain the link between energy, entropy and temperature!If a probability distribution p maximizes its ent…
2022-08-14 12:55:10 RT @GuillaumeLample: Excited to release our latest work: https://t.co/E7BdFH72El We present a new algorithm, HyperTree Proof Search (HTPS)…
2022-08-13 22:18:12 Hello EU? Wake up and invest in R&
2022-08-13 22:09:34 RT @gabrielpeyre: Brunn-Minkowski inequality is one of the fundamental inequalities in convex geometry, which generalizes the isoperimetric…
2022-08-13 22:08:50 RT @MetaAI: We’re excited to announce the Habitat Rearrangement Challenge at #NeurIPS2022! The competition evaluates how embodied agents…
2022-08-13 13:04:48 Periodic reminder. https://t.co/gKdGMQ3qx8
2022-08-13 12:51:50 @iScienceLuvr @StableDiffusion @geoffreyhinton @AndrewYNg Looks more like Teuvo Kohonen than like me. https://t.co/or0NWYetiC
2022-08-13 10:36:10 @alex_conneau But then you get a magic potion of superhuman strength.
2022-08-13 10:32:30 RT @MetaAI: 3D computer vision research just got easier!We’re releasing Implicitron, an extension of PyTorch3D that enables fast prototyp…
2022-08-13 10:15:18 LSSM FTW. https://t.co/dAHPSsiFbo
2022-08-13 10:11:45 @tdietterich @percyliang Though the "large" thing is not going to age well, unless "large" means "larger than what a normal academic lab can train"
2022-08-13 10:07:22 @tdietterich @percyliang Agreed.
2022-08-12 17:15:53 Habitat v0.2.2 https://t.co/HfzCb2D5Dm
2022-08-12 13:41:43 @tyrell_turing @anilananth I like your quote: “I think there’s no doubt that 90% of what the brain does is self-supervised learning.”
2022-08-12 13:39:34 RT @anilananth: “We are raising a generation of algorithms that are like undergrads [who] didn’t come to class the whole semester and then…
2022-08-12 13:35:36 RT @sethlazar: These days, there's a lot (too much) attention on philosophers highlighting 'longtermist' 'existential risks' supposedly pos…
2022-08-11 21:20:52 RT @AlecStapp: New data from @NFAPResearch:Immigrants have started more than half (319 of 582!) of America’s startups valued at $1 billio…
2022-08-11 11:08:39 RT @gabrielpeyre: Spherical harmonics are the equivalent on the sphere of the Fourier basis. Useful to perform low-frequency approximation…
2022-08-11 11:07:53 RT @MetaAI: (1/4) Can sim2robot transfer be improved by *decreasing* simulation fidelity? Surprisingly, yes! Research by FAIR and @gtcomp…
2022-08-11 07:49:02 RT @DhruvBatraDB: Come participate in the Habitat Rearrangement Challenge at @NeurIPSConf 2022.
2022-08-10 17:52:30 Nice. https://t.co/grNxP7n9dp
2022-08-09 07:07:11 RT @gabrielpeyre: Oldies but goldies: Pietro Perona and Jitendra Malik, Scale-space and edge detection using anisotropic diffusion, 1987. I…
2022-08-09 07:04:41 RT @PSH_Lewis: We’ve been working on better retrieval-augmented models &
2022-08-09 07:04:25 RT @EXGRV: Very excited to introduce Atlas, a new retrieval augmented language model which is competitive with larger models on few-shot ta…
2022-08-08 23:04:22 Atlas: a not-so-large language model (11B parameters) that beats the big guys at question answering and fact checking.The main difference is that it can retrieve facts from a corpus.Paper: https://t.co/WlV3rbuY30 https://t.co/7bsNXfqV6l
2022-08-08 22:49:27 RT @MetaAI: Models are increasingly data hungry, but do we really need all that data? @SuryaGanguli, @arimorcos and team show, in theory an…
2022-08-07 19:35:06 The Trump administration policy of separating children of undocumented immigrant families from their parents was deliberately brutal.Shame on everyone involved. https://t.co/T6wuntgnDy
2022-08-07 13:50:11 @ilkekaya @jackclarkSF Like what? How is Facebook "choosing to utilize AI" to cause harm?
2022-08-07 13:45:45 @LeCungil @ClaraSchmelck
2022-08-07 07:51:20 RT @jpineau1: This seems worthy of a first tweet! BB3 is the first 175B param, publicly available chatbot, with model weights, code, datase…
2022-08-07 07:00:02 @jackclarkSF Seems to me almost everything you complain about would apply to almost every technological revolution, no? Think the railroad, automobile, electricity, telephone, computer, internet... Large impact on society, driven by a small number of large, initially-unregulated corporations.
2022-08-06 19:51:09 Question: which architecture class is more robust for vision tasks, transformers or ConvNets? Answer: it's complicated. https://t.co/uCYvsMpw6g
2022-08-06 19:45:05 @EtienneKlein But conservation laws would predict the appearance of an anti-hoax.
2022-08-06 19:40:27 @schrep Solar panel responded: My favorite band is the Conduction band. It's dope.
2022-08-06 11:13:18 @BugRib @MetaAI Excellent question.
2022-08-06 11:12:07 @loo_situ @MetaAI Well, it just goes to show that, within the text data BlenderBot3 was trained on (which is a reflection of media and public opinion), Mark gets an undeserved bad rep, and I get an undeserved good one.
2022-08-06 11:00:15 The @MetaAI BlenderBot3 chatbot says I "truly understand the nature of reality itself". Unlike other chatbots, BlenderBot3 gets its facts from reliable sources, but I have serious doubts about this particular fact. https://t.co/jQ7T53mxq0
2022-08-06 08:19:07 RT @MetaAI: Meta Research Engineer Oran Gafni chats with @schrep about how Make-A-Scene differs from other models. By adding a simple sketc…
2022-08-06 01:58:59 RT @gabrielpeyre: Oldies but goldies: Hubel, D. H.
2022-08-06 01:56:27 RT @MetaAI: Help improve conversational AI safety for everyone by trying out the new BlenderBot3 demo and providing feedback: https://t.co/…
2022-07-27 06:41:09 @rouxph_22 They didn't have any regular Dirac combs left. So I got one that was Fourier transformed. But that turned out to be fine for my furrier hair.
2022-07-27 06:27:11 RT @wk057: Tesla really fires me up sometimes. I have a customer who's the ~3rd owner of a 2013 Model S 60. At some point years ago the…
2022-07-27 06:17:27 RT @TadeuszGiczan: One of those cases where the gut feeling turns out to be right. A poll in Austria found that most vaccinated Austrians b…
2022-07-27 06:09:19 Indeed. Then again, the Collobert&
2022-07-27 06:05:04 RT @MetaAI: Tired of manually annotating vast quantities of data to achieve good performance? Our Masked Siamese Networks (MSN) is a self-s…
2022-07-26 22:28:27 RT @MetaAI: We’ve seen an amazing response to OPT-175B, the first large language model of its kind to be made freely available to the resea…
2022-07-26 15:13:40 @umuti5ik From the other side it's an anti-sign: mirror image, opposite charge, and goes backwards in time.
2022-07-26 12:29:55 RT @fpa: What is "deep learning"? AI popularizer @DotCSV explains in this video the technique for which @geoffreyhinton, @ylecun, Y…
2022-07-26 08:27:39 RT @gabrielpeyre: A reminder that a quick and dirty translation of my book (initially written in French) "The Discrete Algebra of the Fouri…
2022-07-26 08:26:03 All official functions are actually official distributions.
2022-07-26 08:24:19 @trgokhale @rao2z The inhabitants come in two categories FIR and IIR.
2022-07-26 08:23:04 @pmddomingos Yes, the one that is even thinner than a single carbon nanotube, whose thickness is equal to the Planck length.
2022-07-26 07:12:35 @EdeSanVicente Well, zero.
2022-07-26 07:11:35 @ScienceStanley @tunguz A cubic kilometer, actually.
2022-07-26 07:10:43 @gbonnet78fr That would be square meters
2022-07-26 07:10:15 @JonathanKinlay Instead of tweeting this joke publicly, I should have kept it Fermi.
2022-07-26 07:08:55 @nlholdem As long as you're not in Fourier space.
2022-07-26 07:08:00 @trekkinglemon Regularized.
2022-07-26 07:07:42 @wellingmax The hole counterpart to this town is suitably named Padirac :https://t.co/wq3cqd3UQs
2022-07-25 21:13:02 This French town is infinitely thin. https://t.co/pEoZljcyrP
2022-07-21 16:11:30 ESMFold: Great new results from the Meta-FAIR Protein Team. Super-fast and accurate protein folding. https://t.co/HIDT2pwT7L
2022-07-21 16:05:11 @roydanroy @wrongu "Scaling learning algorithms towards AI", by Bengio &
2022-07-21 16:00:23 RT @NYUDataScience: Yann LeCun (@ylecun), CDS Advisory Committee Member and VP &
2022-07-21 15:57:38 RT @andrewgwils: We're honoured to receive the Outstanding Paper Award at #ICML2022 for our work on Bayesian model selection. This was a wo…
2022-07-21 15:43:39 RT @MetaAI: @shibamufu Make-A-Scene isn't available to the public yet, but if you have a text prompt and sketch you want to see in action,…
2022-07-21 01:26:36 The Theseus library for PyTorch allows one to insert modules that perform an optimization to compute their output, and to back-propagate gradients through it.From @MetaAI https://t.co/uFov5jVuq1
2022-07-20 19:49:27 RT @MetaAI: Congrats to our research team for winning an #ICML2022 Outstanding Paper Award for their paper on inverse folding from millions…
2022-07-20 19:28:48 RT @tobi: Can't stop thinking about this and go: "The reason we don't have fusion already is because we, as a civilization, never deci…
2022-07-20 17:15:42 @kchonyc They were sliced so thin that they only had one side.
2022-07-20 17:08:21 RT @KrzakalaF: « From machine learning to autonomous intelligence » by the one and only @ylecun in #leshouches2022 https://t.co/jdLgZuXc2p
2022-07-20 11:52:54 RT @bschoelkopf: Our 2012 paper ‘On causal and anticausal learning’ just received a Test of Time Honorable Mention at @icmlconf #ICML2022:…
2022-07-20 09:34:12 @YiMaTweets Mike is actually planning to speak about some work he did with Emmanuel on that topic.
2022-07-20 06:58:44 Michael Jordan at the Summer school on Statistical Physics &
2022-07-19 06:33:48 @pmddomingos @gchrupala The overwhelming problem in society at large is the latter. That's considerably more dangerous and could spell the end of liberal democracy in a few countries. What do you propose?
2022-07-19 06:17:52 RT @randall_balestr: Happy to be at #ICML2022! And happy to chat/brainstorm about SSL/splines/data-augmentation/... at the @MetaAI booth (T…
2022-07-18 20:44:53 @pmddomingos @gchrupala We can agree that illiberalism, whether from the woke left or the fascist right are bad.
2022-07-18 20:30:20 @gchrupala @pmddomingos This is a uniquely American phenomenon due to the drift of the American Right towards anti-science, anti-intellectualism, and now anti-democracy.
2022-07-18 17:39:11 @ArieHaziza @EtienneKlein @franceculture No. I am an optimist by nature.
2022-07-18 17:34:52 @aa73561 @deviparikh @_ScottEaton_ @MetaAI @Apple If Hollywood gets their hands on it they'll manage to dumb it down to only produce ancient fairy tales and superhero movies.
2022-07-18 17:19:56 @pmddomingos How is it surprising (or a problem) that a wide majority of highly-educated critical thinkers are politically progressive? Notable exceptions include people who grew up under authoritarian governments (communist or fascist dictatorships) &
2022-07-18 14:43:28 RT @dkaushik96: We blew our ability to lead in the development of and setting standards for the 5G technology… by not granting **ONE** Gree…
2022-07-18 13:07:07 @mattocrik @EtienneKlein @franceculture He was still writing pamphlets against relativity and mass-energy equivalence in 1935, long after numerous experimental verifications.
2022-07-18 13:04:41 @Reda_Action @Thabris51 @EtienneKlein @franceculture He believed in the correctness of the probabilistic formalism of quantum mechanics. But he did not think that this formalism provided a complete description of reality. On that point, he was right.
2022-07-18 07:42:37 @EtienneKlein @franceculture He's also the one who didn't believe in relativity, electrons, atomic theory, mass-energy equivalence, or electromagnetic waves, and who thought cosmic rays traveled 50 times faster than light. A brilliant engineer, but not a scientist!
2022-07-18 07:24:25 I'd say not just the US but all the liberal democracies in the world. https://t.co/EhYqqcB5VE
2022-07-18 06:20:13 @pmddomingos If there is a problem, it is not with ideological imbalance, but with creeping illiberalism and attacks against the values of the Enlightenment from the far left *and* the far right. Tenure is the last defense against that within Academia.
2022-07-17 21:39:01 OK, this issue doesn't have much to do with people who are trained in ML. It has more to do with domain scientists who *use* ML and haven't been exposed to common methodological pitfalls. More courses for ML people won't help. More training in ML for domain scientists will. https://t.co/M2fkoryjso
2022-07-17 21:26:19 @eggie5 Texas doesn't look too bad, perhaps because they have a decent public university system.
2022-07-17 17:50:09 @pmddomingos Must be the most ridiculous thing you've ever tweeted. I mean, many things are true without being "ideologically balanced", whatever you mean by that. In fact, that is the very reason academic tenure exists, as you well know.
2022-07-17 17:44:26 RT @BeschlossDC: In Sinclair Lewis's 1935 "It Can't Happen Here" (performed on stage) a demagogic President, feigning populism, arrests pol…
2022-07-17 13:52:12 RT @MetaAI: Don’t miss us at @icmlconf 2022 - stop by booth 611 and see our demos: Stories Told Through Translation, Animated Drawings and…
2022-07-17 13:40:44 RT @_ScottEaton_: Excited to share a few additional experiments from @MetaAI 's Make-A-Scene. The compositional control is a powerful augm…
2022-07-17 09:20:35 The US South East is feudal. https://t.co/q3qOACjsl3
2022-07-17 08:53:13 @holmesjtg @tyrell_turing A *lot* of things are much improved. E.g. preemptive hate speech take-down went from 30% to 96% in 4 years.
2022-07-17 08:50:00 RT @_ScottEaton_: Interestingly, if the text prompt is entirely unrelated to the categories prescribed in the label, Make-A-Scene uses the…
2022-07-17 08:49:55 RT @deviparikh: This is one of my favorite things about Make-A-Scene! The new dimension for creative exploration and element of surprise th…
2022-07-16 13:23:48 @fjmendez Santos-Dumont was first to do a completely self-powered controlled flight. Although this was in 1906 (3 years after the Wright Bros), the Wright Flyer needed to be catapulted at takeoff.
2022-07-16 13:16:38 @josip_loncaric Not mentioning Leonardo da Vinci.
2022-07-16 13:09:56 @sir_deenicus @FelixHill84 @lessc0de @geoffreyhinton Hard to evaluate.But I suspect the impact of ConvNet is greater. It reduces collisions by 40% in cars that have driving assistance systems (which almost every new car has).
2022-07-16 13:02:03 @bibbadibobbadi @tyrell_turing Do you know why?
2022-07-16 09:08:41 Too many studies that apply machine learning to science &
2022-07-16 08:48:00 RT @SuryaGanguli: This was a very cool workshop on loss landscape geometries in neural networks - all recordings of talks are available - i…
2022-07-16 02:31:47 RT @deviparikh: There was so much wonderful imagery from @_ScottEaton_, was very hard to pick what to showcase :)
2022-07-16 02:31:40 RT @MetaAI: “Make-A-Scene provides a level of control that’s been missing in other SOTA generative AI systems. Text prompting alone is cons…
2022-07-15 23:08:10 RT @MelMitchell1: Interesting take by @AlisonGopnik : Large language models are "cultural technologies" -- they are not themselves intelli…
2022-07-15 23:01:01 @sachinvsAI @rao2z My only success when it comes to weight loss is Optimal Brain Damage.
2022-07-15 22:56:29 RT @sfiscience: "[Large language models] lack memory and internal models of the world that are actually really important... There’s little…
2022-07-15 22:50:54 RT @JacobBor: The "missing Americans": early death in the United States. In a new pre-print, we quantify the number of deaths that would h…
2022-07-15 22:37:21 RT @brewster_kahle: What makes the Internet Archive special? millions of patrons, thousands of uploaders, hundreds of staff/helpers, and…
2022-07-15 20:47:42 @LarevueIA Ask @deviparikh
2022-07-15 20:46:13 @FelixHill84 @lessc0de @geoffreyhinton Also, I know a bit about physics, electronics, neuroscience. And for fun, I'll design and build small flying contraptions as well as electronic musical instruments.
2022-07-15 20:44:04 @FelixHill84 @lessc0de @geoffreyhinton Persistent but not monomaniac. I stopped working on DL for about 5 years (1996 to 2001). I worked on DjVu and ran a lab at AT&T
2022-07-15 20:29:08 RT @MetaAI: When media artist and director @refikanadol tried the Make-A-Scene generative AI research tool, he said he was able to prompt a…
2022-07-15 19:23:10 Haha. https://t.co/MvLnX09COK
2022-07-15 19:21:18 @tyrell_turing Facebook is better
2022-07-15 19:20:05 RT @AntoineBordes: Here's what I was able to do when I tried to artistically render the lovely cabin I was in a few days ago. Same layout,…
2022-07-15 12:47:37 @SC_Griffith The existence of infinite sets is an axiom of Zermelo-Fraenkel set theory. See axiom 7, the "axiom of infinity": https://t.co/TJch0HGhrJ You can very well decide not to use this axiom. A lot of math becomes simpler, but then a lot of math becomes impossible (e.g. differential calculus).
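For reference, the axiom of infinity mentioned above can be stated as follows (a standard formulation, using the successor operation x ∪ {x}; notation is mine, not from the tweet):

```latex
\exists I \,\bigl( \varnothing \in I \;\wedge\; \forall x \,( x \in I \rightarrow x \cup \{x\} \in I ) \bigr)
```

It asserts the existence of a set containing the empty set and closed under successor, hence containing a copy of the natural numbers.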
2022-07-15 12:34:17 A blog post about Make-A-Scene in which artists explain how they use it.https://t.co/ElRIbKVBH0 https://t.co/1AN7FMaDt1
2022-07-15 01:08:31 RT @schrep: Humans assisted by AI are going to be a huge driver of productivity and prosperity. So cool to see this applied to creative pu…
2022-07-15 01:05:06 @numberjuani True. Also true for scientists.
2022-07-15 00:54:58 I wish it could work for me. https://t.co/3eFX7ILfUV
2022-07-15 00:53:27 @JeffDean @rao2z I'm a lot better at descending gradients than at climbing them.
2022-07-15 00:50:25 @rao2z Weight loss? I should try it.
2022-07-15 00:48:27 RT @tomgoldsteincs: SSIM has become a common loss function in computer vision. It is used to train monocular depth models for self-driving…
2022-07-14 23:04:15 RT @MetaAI: When using Make-A-Scene’s sketch inputs, AI artist @soficrespo91 said she could iterate quickly across new ideas—be it a jellyf…
2022-07-14 23:04:03 Awesome creations by artist Sofia Crespo using Make-A-Scene. https://t.co/qf0US5tbTT
2022-07-14 23:03:26 RT @MetaAI: Using AI to augment human creativity is a powerful use of tech. But people should be able to shape the content a system generat…
2022-07-14 23:03:12 Make-A-Scene can be used by artists and children alike. https://t.co/segg3CFqFE
2022-07-14 23:02:33 RT @MetaAI: Excited to announce Make-A-Scene, our latest research tool Mark Zuckerberg just shared. Make-A-Scene is an exploratory concept…
2022-07-14 23:02:19 Make-A-Scene! An *interactive* and *controllable* image generation system that produces a nice picture from a text description and a rough sketch. Paper here: https://t.co/cp5aeI0H8j https://t.co/xyInuGILqs
2022-07-14 01:11:34 @EsmaBens2020 @QuaintTransfer @rao2z So we should stop using lithium to treat bipolar disorder? Because we have no idea how it works.
2022-07-13 01:18:55 RT @OpenCatalyst: We will be going over the recent OC22 dataset and models over a Zoom call tomorrow at 10 AM PT. There will also be time f…
2022-07-13 01:18:10 @Veronichkapinke @CallieDuke15 @jryandx @GenghiskhanJR Nor women from a man's rib...
2022-07-13 00:38:49 @erikbryn Already happening.
2022-07-13 00:37:30 @marielecun Tex Avery.
2022-07-12 21:25:09 @GenghiskhanJR But anti-abortion laws and judicial decisions are largely done by men.
2022-07-12 20:05:23 @PlatosDog_ Definitely has that steampunk charm.
2022-07-12 18:50:40 @themintsv Haha! good question.
2022-07-12 13:04:31 Any American women out there still voting Republican? https://t.co/sdFikPBkkw
2022-07-12 04:42:22 @norabelrose @FellowHominid Proposing an architecture with an Intrinsic Cost module that essentially determines and constrains the agent's entire behavior is a good start, no? I mean, there is no such thing in LLMs and such, so their behavior cannot be easily controlled nor constrained.
2022-07-12 04:31:28 @Alexey_CA I've personally built dozens of small flying airplanes where the wings were nothing more than two sheets of wood or foam. No control surfaces whatsoever on the wings. Two sheets because you need dihedral for passive roll stability.
2022-07-12 04:11:32 @AGIArchitect @wellingmax All from the consequences of the evacuation, and only one confirmed from radiation exposure. https://t.co/1YHS6ymmVL
2022-07-12 04:02:18 @balazskegl @vervaeke_john The question: "what is entropy if there is no knower?" is similar to: "what is motion if there is no observer at rest?"
2022-07-12 03:55:08 RT @techatfacebook: Tune in to ep. 11 Boz is joined by @ylecun, VP &
2022-07-12 03:53:33 @pmddomingos @GaryMarcus @SchmidhuberAI You and I can come up with tons of ideas that don't work. We rarely publish them. Reminds me of a Soviet joke: - How come this shop for foreigners sells shoes for 20 rubles? They cost 5 kopeks at the regular store! - I too would sell shoes for 5 kopeks if I didn't have any.
2022-07-12 03:42:04 @RLerallut That would be in the Intrinsic Cost module.
2022-07-12 03:35:34 @Alexey_CA <
2022-07-12 03:25:48 @norabelrose @FellowHominid Since you believe I'm so misaligned, you might want to think about what to put inside the Intrinsic Cost module I talk about in my paper.
2022-07-12 03:19:08 Galaxies doubled by gravitational lensing in this first image from the James Webb Space Telescope. As the French pun goes: one Einstein in the hand is worth two in the bush ("Einstein vaut mieux que deux tu l'auras"). https://t.co/TRef8FYEUE
2022-07-12 01:05:45 @TonyZador Definitely not ospreys
2022-07-11 17:05:00 @balazskegl The 2nd law applies to any knower. But because any knower can only "know" a small proportion of the information contained in a macroscopic system, the uncertainty about predictions of future states of the system must increase.
2022-07-11 16:04:01 @balazskegl No, it just means that one's ability to extract energy from a system depends on one's knowledge of the state of that system.So energy is also in the eye of the beholder.
2022-07-11 15:40:16 @FaccioAI The 1st powered "flight" was Ader's.But the Wright bros made the 1st sustained, controlled flights (though catapulted &
2022-07-11 15:29:31 RT @_orcaman: Today we had the honor of hosting @Ravid99216606 at our Data Science guild meetup, to hear a bit about the work and research he is doing at NYU with…
2022-07-11 12:40:20 Optimal Brain Damage redux. https://t.co/WA0taf5ula
2022-07-11 12:34:10 RT @raju: Can deep learning systems learn to manipulate symbols? The answers might change our understanding of how intelligence works and w…
2022-07-11 12:25:39 @balazskegl Entropy is in the eye of the beholder.Because probability distributions require models, and models are in the eye of the beholder.
2022-07-11 00:50:08 @srchvrs @MARGARETSCHEUNG @adad8m Who makes those claims?
2022-07-11 00:44:42 @OwainEvans_UK Boosting (Schapire, Bell Labs). Adaboost (Freund &
2022-07-10 13:04:19 @adad8m I don't think AI/ML is particularly forgetful when it comes to citations. Compared to other fields of engineering science? Compared to molecular biology? Medicine? Neuroscience?
2022-07-10 03:02:31 @msaquibsarfraz @GaryMarcus @pmddomingos @SchmidhuberAI Have you looked at those references? Do you find them relevant? But yes, I agree that the point is more the overall view.
2022-07-10 03:00:54 @pmddomingos @GaryMarcus @SchmidhuberAI Examples of hierarchical RL that work? Also, my proposal barely mentions RL. It is much more connected with optimal control, model-based planning, and system identification. And I cite a bunch of those, going back to the 1960s and 1970s.
2022-07-10 02:47:05 @adad8m Not as scary, nor self-destructive, as others obsessively insisting that their unreasonable notions of credit assignment should be adopted by everyone.
2022-07-10 01:16:34 @JonathanSumDL Yes, it's a joke.
2022-07-10 01:10:00 @FloRicx @guicho271828 @gregarityNow Yes, but those aren't necessary when training a deep neural net. Because in very high dimension, when your model is overparameterized, minima are highly degenerate, largely connected with each other, and easy to find through gradient descent.
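A minimal sketch of the claim above, in the simplest overparameterized setting: fitting 3 data points with 10 parameters by least squares. All constants (dimensions, learning rate, iteration count) are illustrative choices of mine, not from the tweet; the point is only that plain gradient descent from a random start reaches a global minimum.

```python
import random

# Overparameterized least squares: 3 data points, 10 parameters.
random.seed(0)
n, d = 3, 10
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def loss(w):
    # Mean squared error over the n data points
    return sum((predict(w, x) - t) ** 2 for x, t in zip(X, y)) / n

def grad(w):
    # Gradient of the mean squared error w.r.t. w
    g = [0.0] * d
    for x, t in zip(X, y):
        err = predict(w, x) - t
        for j in range(d):
            g[j] += 2.0 * err * x[j] / n
    return g

# Plain gradient descent from a random initialization
w = [random.gauss(0, 1) for _ in range(d)]
for _ in range(5000):
    w = [wi - 0.02 * gi for wi, gi in zip(w, grad(w))]

print(f"final training loss: {loss(w):.2e}")  # effectively zero
```

With d > n the minimizers form a (d-n)-dimensional connected set, so there is nothing for gradient descent to get stuck in.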
2022-07-10 01:03:52 @michael_rivard I'm sure you did. And a few thousand people did it before us.
2022-07-10 01:01:06 @VahidK Credit is not taken. Credit is given.
2022-07-09 23:30:50 @edelmann_domi The difference is that he doesn't joke.
2022-07-09 23:13:40 RT @SpirosMargaris: The #FutureOfAI with Guest #YannLeCun https://t.co/IDbLkgqz0t #fintech #AI #ArtificialIntelligence #MachineLearning…
2022-07-09 22:09:54 @Franksneto He first flew in 1906, 3 years after the Wright Bros. He thought he was first because, although the French had heard about the Wright Bros, they dismissed it as yet another American hype. They changed their mind when the Wright Bros demoed their Flyer in Paris a few years later.
2022-07-09 21:59:22 This pic is actually Ader's Avion III, which never flew. His first plane "Éole" took off, but flew uncontrollably at about 50 cm altitude for about 50 m, before crashing. He got a few things right (propeller) &
2022-07-09 21:47:53 @gregarityNow He was wrong.
2022-07-09 21:47:11 @gregarityNow He actually did, together with Oliver Selfridge in 1961. He called it "hill climbing" and pointed out that it was a bad approach because it could get stuck in local optima. https://t.co/U5TTE7qMSv
2022-07-09 21:35:16 In 1890, Clément Ader flew a steam-powered, bat-shaped airplane, 13 years before the Wright Brothers. He is largely forgotten, except by a few French aviation buffs. Why? Because his "Avion" flew uncontrollably for less than 50 m. Subsequent pioneers owed almost nothing to him. https://t.co/lWN8dPMZgz
2022-07-09 21:23:32 Back in the 1980s, I wrote f(x)=0. Shortly thereafter, I wrote x* = argmin f(x). Every new procedure or algorithm is obviously a special case of this general formulation. Should I be cited? 1/2
2022-07-09 21:10:48 @BartWronsk Almost all the driving assistance systems that are sold today are based on Deep Learning, on ConvNets to be precise. That starts with MobilEye, which dominates the market today.
2022-07-09 20:00:55 RT @McKinsey_MGI: Influential #AI researcher @ylecun has devised “an approach that he thinks will one day give machines the common sense th…
2022-07-09 17:44:19 @togelius https://t.co/wfur5MgKSz
2022-07-09 15:08:34 @GeorgeSFrankl @boztank If you really think humans learn through pain, I pity your children.
2022-07-09 15:03:43 @wellingmax @Melissahei Serious question: wouldn't the upcoming availability of AI weapons enable so-called "surgical" targeting, and thereby make the use of massively destructive weapons (like cluster munitions that cause civilian "collateral damage") more obviously war crimes?
2022-07-08 21:40:49 @boztank Aww, thank you Boz!
2022-07-08 21:13:35 @GaryMarcus @pmddomingos @SchmidhuberAI s/and only case/and in any case/
2022-07-08 21:06:03 @GaryMarcus @pmddomingos @SchmidhuberAI End-to-end differentiability is not new (and in any case, I was on board that one before Jürgen). What's new is the particular cognitive architecture I'm proposing, plus the other three items. If anyone has prior work for these 4 items, please give me pointers.
2022-07-08 18:47:01 A discussion about AI with *the* @boztank, CTO of Meta. We get into recent progress, promising avenues, and get a bit philosophical at times. https://t.co/HiNa04ykxp
2022-07-08 18:42:21 RT @MetaAI: How do we know if our model’s translations meet quality standards for all languages? Breadth of coverage and depth of evaluatio…
2022-07-08 18:39:07 Good article by @Melissahei about how the war in Ukraine is pushing countries to accelerate the use of AI in defense and weapon systems, and the challenges faced by companies trying to supply them. https://t.co/rWcczBENqo
2022-07-08 17:21:37 @joergneulist I certainly have no issue with that!
2022-07-08 17:20:58 RT @TheOfficialACM: Happy birthday to @ylecun ! LeCun received the 2018 #ACMTuringAward with Geoff Hinton and Yoshua Bengio for conceptual…
2022-07-08 17:20:21 @artistexyz Sure. How many systems that do not use deep learning (ConvNets, Transformers, etc.) are ranked in the top 100 for object detection and localization benchmarks? How many Bayesian nets?
2022-07-08 17:12:58 @jscix @___Cappy___ @chr1sa Would you ask that question about disk brakes, seat belts, air bags, ABS?
2022-07-08 17:08:28 RT @scienceisstrat1: Firearms and homicides in the world’s most developed countries@ChrisMurphyCT @dwallacewells @NickKristof @Noahpinion…
2022-07-08 17:05:53 Grrrr https://t.co/pQYl6YgJVT
2022-07-08 16:55:03 RT @tylerblack32: Really stunning data, putting to rest the notion that COVID doesn't affect the young.In 2021, COVID became the 4th lead…
2022-07-08 16:53:05 RT @physicsJ: This was one of my favs to make. It shows a close-up of the eight planets in our solar system, along with a couple of dwarf p…
2022-07-08 16:45:18 RT @MetaAI: Modeling 200 languages: Meta AI researchers have developed a Sparse Mixture-of-Experts model that has a shared and specialized…
2022-07-08 15:01:29 @pravengov All the top-performing modern ones are based on ConvNets
2022-07-08 14:59:08 @w_t_payne All the top-performing modern ones are. MobilEye/Intel dominates that market, and they use ConvNets.
2022-07-08 14:58:15 @TonyBelpaeme All the modern ones use deep learning. Simply because it's both simpler and more reliable.
2022-07-08 14:57:06 @BoyzZazen Yes, they are pretty much all based on camera input and ConvNets. Some also use other sensors (like radar) but camera systems are becoming more reliable, making those less useful (in fact, harmful).
2022-07-08 14:55:07 @EvoOzm I agree that such systems should be thoroughly tested to get certified. I completely disagree that they should be explainable. Regulating the application is fine. Regulating the underlying method is ridiculous.
2022-07-07 01:56:55 @AutoArtMachine Almost all of them are System 1. Those that use latent variable inference may be System 2. I talk about this in my piece: https://t.co/7ZgRtLIQWY
2022-07-07 01:49:53 RT @BillGates: Cheap, clean hydrogen would be a massive energy breakthrough.
2022-07-07 00:34:20 The CILVR Lab at NYU has a new website! CILVR stands for "Computational Intelligence, Learning, Vision, Robotics", an informal group of 14 core faculty and 6 affiliated faculty working on ML/AI. https://t.co/BuvAz5J43y
2022-07-06 23:52:05 ICYMI: a discussion between Nobel laureate Daniel Kahneman and me from last December about human cognition and machine intelligence. We discuss many topics, like the necessity of learning world models. Moderated by Alex Kantrowitz. https://t.co/2HgKggdyYA
2022-07-06 23:17:21 RT @MetaAI: Our researchers made a fun storybook demo that uses the latest AI advancements from the No Language Left Behind project to tran…
2022-07-06 23:13:26 RT @MetaAI: (1/4) Results from our No Language Left Behind (NLLB) project are not only advancing state-of-the-art in machine translations,…
2022-07-06 23:11:56 @EugeneVinitsky Farewell from FAIR, and an anticipated welcome to NYU!
2022-07-06 23:10:07 RT @MetaAI: NLLB-200 Model, Evaluation Dataset, and Paper, with improved translation quality for over 200 languages. Help advance AI t…
2022-07-06 23:07:10 RT @danijarh: The full training run of the A1 quadruped robot learning to walk from scratch in the real world in 1 hour! Made possible by t…
2022-07-06 23:01:35 Wikipedia is using @MetaAI's "No Language Left Behind" translation technology to make articles available in more languages. https://t.co/i274hJjK9s
2022-07-06 15:00:47 RT @MetaAI: (1/2) Billions of people can’t access online content in their native language, but machine translations could soon change that…
2022-07-06 14:59:59 RT @boztank: Huge kudos to our AI teams behind No Language Left Behind, a single AI model that can now translate 200 different languages, m…
2022-07-06 13:57:29 FAIR Labs is devoted to scientist-driven exploratory research. FAIR Accel is focused on larger research projects that require more coordination and engineering support, such as NLLB. 4/n, n=4.
2022-07-06 13:56:55 NLLB is one of several projects being pursued within FAIR Accel. What is FAIR Accel? FAIR is organized in two research groups: FAIR Labs, led by Joelle Pineau, and FAIR Accel, led by Antoine Bordes. 3/n
2022-07-06 13:56:10 Automatic language translation connects people while allowing them to use the language they are most comfortable with. Paper: https://t.co/SCvsCrh6OF Older blog post: https://t.co/dmXjEPHnhc Press articles: The Verge: https://t.co/p5uxM7Jrdo CNET: https://t.co/8e7vcg6ux9 2/n
2022-07-06 13:55:10 "No Language Left Behind." An open-source language translation system from FAIR capable of translating 200 languages between each other. 50 billion parameters. The code and models are made available today as part of the Fairseq package. Github: https://t.co/1m581UAfpo 1/n https://t.co/NCfTzyJWqA
2022-07-06 06:02:00 RT @AvecRoussel: https://t.co/RTkbo1j7HE
2022-07-06 00:57:53 Fun joke video from the NIPS 2007 workshop banquet. Just when Deep Learning was a nascent cult, a mere 6 years before becoming the dominant religion. Dave Blei wins the prize for the most thoroughly brainwashed disciple. https://t.co/241yjLthov
2022-07-06 00:42:41 @QRJ211 @erikbryn @Harr0uet I think your stereotypes are incorrect. My undergraduate electrical engineering education at @ESIEEPARIS was rigorous but also gave me lots of opportunities for research projects that weren't part of a strict curriculum. And this was in 1978-1983.
2022-07-05 20:37:54 @egrefen @BorisJohnson @Conservatives The boy is neither buoyed nor bouyed but booed.
2022-07-05 18:47:58 RT @newsbeagle: My article about new research from @MetaAI with comments from @ylecun and Yoshua Bengio #AI
2022-07-05 16:20:20 @mraginsky I'm not disagreeing.
2022-07-05 14:11:30 @roydanroy @lorisdanto 4. No Emacs-style editor
2022-07-05 11:52:15 @mraginsky Also the only one for which log_2(number_of_participants) <
2022-07-05 04:56:59 RT @Ph_Etienne: Tonight, I was happy to celebrate the Fourth of July with my American friends! We honor together the values of liberty and…
2022-07-05 02:42:55 RT @KohitijKar: Can large-scale neural data directly update models beyond qualitative insights like "recurrence”? We jointly optimize ANNs…
2022-07-05 00:34:04 RT @Thom_Hartmann: Only in America do schoolchildren get trained in mass shootings. Only in America, the victims of mass shootings end up b…
2022-07-05 00:28:15 RT @bramsonboudreau: Yann LeCun (@ylecun) tells @techreview: "This idea that we're going to just scale up the current large language models…
2022-07-05 00:22:54 @pmddomingos Sorry, I meant: 2. Kolmogorov-Arnold representation theorem: "the only true multivariate function is the sum, since every other function can be written using univariate functions and summing" https://t.co/MdUHGgOpp8
2022-07-05 00:03:34 @erikbryn @Harr0uet I don't think the American undergraduate education system is particularly good. Neither are the Master's programs, compared to, say, much of Europe. The doctoral schools are good. Research is what attracts top talents from abroad.
2022-07-04 23:18:31 @pmddomingos 1. In a discretized phase space, any dynamics is the multiplication of a state occupancy vector by a transition matrix (permutation matrix for deterministic, stochastic for probabilistic, unitary for quantum). 2. Kolmogorov theorem: any fn can be written h(x,y) = f(g1(x),g2(y))
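A toy illustration of point 1 above. The 3-state space, the particular matrices, and the column-stochastic convention (entry T[i][j] = probability of moving to state i from state j) are my own illustrative choices:

```python
# Dynamics on a discretized 3-state phase space, expressed as
# multiplication of a state-occupancy vector by a transition matrix.

def apply(T, p):
    """Multiply transition matrix T by occupancy vector p (T[i][j]: j -> i)."""
    return [sum(T[i][j] * p[j] for j in range(len(p))) for i in range(len(T))]

# Deterministic dynamics: a permutation matrix cycling 0 -> 1 -> 2 -> 0
P = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]

p = [1, 0, 0]    # one-hot occupancy: system starts in state 0
p = apply(P, p)  # now in state 1
p = apply(P, p)  # now in state 2
print(p)         # [0, 0, 1]

# Probabilistic dynamics: a stochastic matrix (each column sums to 1)
S = [[0.9, 0.2, 0.0],
     [0.1, 0.7, 0.3],
     [0.0, 0.1, 0.7]]
q = apply(S, [1.0, 0.0, 0.0])
print(q)         # [0.9, 0.1, 0.0] -- occupancy stays a probability vector
```

For the unitary (quantum) case the same picture holds with complex amplitudes in place of probabilities.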
2022-07-04 15:39:41 RT @richsignorelli: If you want to be more informed about what is going on with our country, study Germany in the early to mid 1930s.
2022-07-04 15:32:20 RT @zdeborova: So excited about #Leshouches2022 starting today. First week with lectures by @marc_mezard Rémi Monasson, Nati Srebro, @SaraA…
2022-07-04 15:31:44 RT @erikbryn: One of the most important charts in the world.
2022-07-04 15:26:05 RT @Isabelle_Ryl: July 11, from 8:30am to 6:00pm at @cdf1530 in Paris: Workshop in honor of Jean-Paul Laumond @LaasCNRS #prAIrie Program&
2022-07-04 15:16:46 @pmddomingos This is simultaneously true, irrelevant in practice, &
2022-07-04 14:32:40 @mlarocca @Nature Dude, this talks about carbon emissions *reduction*, not absolute carbon emissions. Obviously, countries like France and Sweden that already rely on nuclear have low emissions and are not going to reduce them nearly as much as, say, Germany, Poland, and the US.
2022-07-04 14:29:10 @mlarocca Brazil has a favorable geography that allows it to produce 64% of its electricity from hydro. Québec, Costa Rica, and a few other regions can rely primarily on hydro because of a favorable geography.
2022-07-04 13:15:33 @sjogren_rickard We all agree: maxing out on renewables is a Good Thing. If we had a scalable, low-cost, efficient way of storing energy, we could do everything with renewables. But we don't. So we need a low-emission energy source for when there is no sun nor wind. The only option is nuclear.
2022-07-03 17:51:39 RT @Thom_Hartmann: 1/ Dear Republicans: We Tried Your Way and It Does Not Work (a thread):
2022-07-03 13:58:39 @AnsDome @AjantisIlvastar I don't. Access to user data within FB is very tightly controlled and monitored.
2022-07-03 13:56:03 @AjantisIlvastar No, that's just false. The US government has had access to *some* telephone networks. But it does not have access to private user data in social networks and other online services, unless a request comes with a good reason and a warrant.
2022-07-03 13:50:57 @togelius Sure, but there are things you can't just invent. Particularly when it comes to sauces.
2022-07-03 13:49:29 Very clear chart: grams of CO2 emissions per kWh of electricity produced for various countries. Probably the only problem for which "going nuclear" was the right thing to do. https://t.co/Cw4lalhHXn
2022-07-03 13:42:12 @francoisfleuret All these years spent studying, taking exams, doing research, submitting papers, getting trashed by Reviewer #2, that takes incredible dedication, energy, and abnegation, no?
2022-07-03 13:38:19 RT @soumithchintala: In-person conferences are amazing because people are co-ordinated in a way that online conferences aren't. Everyone c…
2022-07-03 13:35:50 @morungos @pmddomingos Learning representations efficiently requires multiple layers. SVMs have a single layer and are therefore shallow, not deep. Now, in principle, you can represent any function with a shallow network. But you can't do it efficiently enough to make it practical.
2022-07-02 22:48:06 @pmddomingos It's the old joke: "okay, it works in practice. But does it work in theory?"
2022-07-02 22:46:32 @Gatti27 @HaraldLesch_ High-activity long-life nuclear wastes produced in France amount to less than 10 tons/year. The total produced since the 1950s occupies 3,600 cubic meters. Storage would be simple, if it weren't for political issues: store it underground at a few hundred meters depth in stable rock.
2022-07-02 22:33:03 RT @tribelaw: This is the full version of the #DemocracyDoomsday thread. Read it. But make sure you’re sitting down first. https://t.co/u…
2022-07-02 21:06:51 @marktsimelzon @pmddomingos We don't know, but all the alternatives we can think of are much too inefficient.
2022-07-02 20:57:55 @AjantisIlvastar Ask yourself why neither FB, Instagram, Google, nor Twitter operate in China. Two reasons: 1. The Chinese government wants crippled versions of these services so people can't use them to organize themselves. 2. The Chinese government wants unfettered access to private user data.
2022-07-02 20:33:02 Woops. https://t.co/GMqxXOQsAe
2022-07-02 20:15:18 Not a moment too soon. The reason behind the traditional opinion of Germans against "Atomkraft": Soviet propaganda. The reason behind the change of opinion: climate change, and Russian gas (or lack thereof). https://t.co/I6xhE54EQL
2022-07-02 19:56:47 IEEE Spectrum writes about progress in Self-Supervised Learning at Meta-FAIR, particularly the recent work on Masked Auto-Encoders with transformer architectures. https://t.co/powdDA901u
2022-07-02 19:33:18 @working_good @GaryMarcus @erikbryn @percyliang @ilyasut @fchollet @JeffDean @DigEconLab My number (from memory) was for something else. But my comparison with a transatlantic flight had the right order of magnitude. Tweet fixed.
2022-07-02 19:31:09 @working_good @GaryMarcus @erikbryn @percyliang @ilyasut @fchollet @JeffDean @DigEconLab 1. The FAIR paper on OPT-175B explicitly talks about this. 2. In the grand scheme of things, it's peanuts. Training OPT-175B produces 75 tons of CO2, just a bit over the emissions of a single transatlantic flight by an airliner. There are 1,700 transatlantic flights per day.
2022-07-02 19:17:51 @pmddomingos But then, what alternative do you have to gradient-based optimization for learning? Gradient-free optimization? Inefficient! Something other than optimization? What could that be?
2022-07-02 19:03:35 LeNet-5 implemented in Minecraft using Redstone dust.Very meta. https://t.co/yIlE97Z301
2022-07-02 18:04:52 @pmddomingos @metacognoscenti Markov Logic Networks are a special type of factor graph in which the energy (neg log-likelihood) is a weighted sum of first-order logic formulae. The trainable parameters are those weights. Hence it's *shallow* learning.
2022-07-01 22:35:53 @jcllobet @RobertTLange @alfcnz There used to be an agreement between NYC graduate schools that made it possible for grad students to take classes at other schools. Not sure whether Cornell Tech is part of it though.
2022-07-01 22:29:47 A paper by @bendee983 at @VentureBeat going through the main point of my recent paper on autonomous machine intelligence. https://t.co/rtPsBmvG05
2022-07-01 13:47:24 @markcannon5 I will admit that it's harder to understand than the difference between "it's" and "its".
2022-07-01 12:04:09 AI has "hit a wall." As it turned out, the wall was a stepping stone. https://t.co/oNq8CuPMuX
2022-07-01 09:49:54 RT @SuryaGanguli: 1/ Is scale all you need for AGI? (unlikely). But our new paper "Beyond neural scaling laws: beating power law scaling via da…
2022-07-01 09:48:50 RT @MargaretAtwood: https://t.co/tzvU7MH6rM
2022-07-01 09:39:08 RT @MetaAI: (1/4) Meta AI is sharing research on advancements in 3 audio-visual models that understand the world around us and transform au…
2022-07-01 09:26:44 RT @MetaAI: Congratulations to our very own Meta AI researcher @imisra_ for being named one of @TechReview’s #35InnovatorsUnder35! Ishan ha…
2022-07-01 08:44:27 @nandofioretto Well, I'm not listed under NYU in CSrankings!
2022-06-30 17:25:57 Congrats to Ishan Misra from Meta-FAIR for making the MIT Tech Review 35 under 35 list under the AI &
2022-06-30 16:57:44 According to Google Scholar's latest ranking of publication venues by h5-index, ICLR is #9 in all of science, a mere 9 years after its creation, just in front of NeurIPS. https://t.co/WzNDqAIZc8
2022-06-30 10:31:22 RT @ClaraJeffery: 1/ Now flying through France at approx 200 mph. Total cost for me and kid to get from Paris to Rome, 177 €. America, we…
2022-06-30 05:39:32 @GaryMarcus @ErnestSDavis @techreview Comments on my paper are now enabled on OpenReview. Knock yourself out! https://t.co/7ZgRtLIQWY
2022-06-30 05:36:27 Update: comments on the paper are now enabled on OpenReview. https://t.co/7ZgRtLIQWY
2022-06-30 05:35:01 @DaniloJRezende @TacoCohen @wellingmax @jhhalverson @KrippendorfSven Very nice and fascinating work!
2022-06-30 05:25:18 @danijarh @AleEscontrela @philippswu @Ken_Goldberg @pabbeel Very nice work.World models FTW!
2022-06-30 05:23:22 Nice approach. https://t.co/mKIx7otRzC
2022-06-29 19:34:41 RT @NYUDataScience: Yann LeCun (@ylecun) was recently featured in the MIT Technology Review (@techreview). The article discusses “his bold…
2022-06-29 19:31:27 RT @fpa: We now know all the 2022 #PrincessofAsturiasAwards Laureates. Learn about their work via the following link: https://t.co/Qx3qybIY…
2022-06-29 19:31:24 RT @fpa: Ya conocemos a todos los galardonados con los #PremiosPrincesadeAsturias 2022. Descubre su labor en el siguiente enlace: https://t…
2022-06-28 14:39:16 RT @YiMaTweets: I completely agree that much of learning (especially perception) is not directly driven by any specific external reward/ta…
2022-06-28 14:38:07 @MarcCoutanche [the neuroscience that inspired this is about as modern as the neuroscience that inspired convolutional nets 34 years ago]
2022-06-28 14:35:37 @MarcCoutanche I think I'll take that as a compliment
2022-06-28 14:06:29 @bubblemx Fits better with "almost everything is optimal control"
2022-06-28 14:04:46 @EnricRM12 In the trainable parameters of the various modules.
2022-06-28 13:59:55 RT @CNRS_Villejuif: Join us online tonight for @ylecun's talk https://t.co/WnVEOMrPDO #intelligenceartifici…
2022-06-28 13:55:03 @balazskegl @Plinz That would be defined by the cost module.
2022-06-28 12:31:59 @livcomp Good question.
2022-06-28 08:31:37 RT @BeschlossDC: Fascism (Merriam-Webster):"A political philosophy, movement or regime...that exalts nation and often race above the indiv…
2022-06-28 08:30:01 RT @Gregdt1: Disinformation... a thread. I've seen this map circulating since yesterday on various channels. It's supposed to demonstrate just how…
2022-06-28 08:25:45 RT @Zachary_DeVito: We're developing a new take on named tensors by adding dimensions objects to PyTorch. No need to figure out how gather…
2022-06-28 06:57:33 @fractalfoxnode Yes. The inference procedure that minimizes the cost with respect to the actions and/or latent variables can be seen as minimizing a free energy if it produces a distribution over latents, as opposed to a single point estimate.
2022-06-28 06:55:06 @stenichele Indeed! Which is why it's called a position paper. And I'm not planning to submit it anywhere.
2022-06-28 06:53:02 @AniseFarshid The cost module computes a scalar cost. Gradients of the cost are backpropagated through the modules that feed into it.
2022-06-28 06:50:29 @yooceii The actor gets gradients backpropagated from the cost, through the world model, down to the actor.
2022-06-28 06:49:13 @NickRMorgan @interintel Gradients flow backwards from the cost module. Connections to the configurator have been left out for clarity: pretty much every other module feeds into it.
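That gradient flow can be sketched numerically. The module names and linear maps below are hypothetical stand-ins, not the architecture of the position paper: a scalar cost is computed from the world model's prediction, and the chain rule carries its gradient through the world model down to the actor's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the modules: both are plain linear maps.
W_actor = rng.standard_normal((4, 4))   # actor: action = W_actor @ state
W_world = rng.standard_normal((4, 4))   # world model: next = W_world @ (state + action)

s = rng.standard_normal(4)
a = W_actor @ s
nxt = W_world @ (s + a)
cost = nxt @ nxt                        # scalar cost of the predicted state

# Backpropagation by hand: the gradient flows from the cost,
# through the world model, down to the actor's parameters.
g_next = 2 * nxt                        # d cost / d next_state
g_sum = W_world.T @ g_next              # d cost / d (state + action)
g_W_actor = np.outer(g_sum, s)          # d cost / d W_actor

# Sanity check against a finite-difference estimate on one entry.
eps = 1e-6
W2 = W_actor.copy()
W2[0, 0] += eps
nxt2 = W_world @ (s + W2 @ s)
fd = (nxt2 @ nxt2 - cost) / eps
assert abs(fd - g_W_actor[0, 0]) < 1e-3
```

The point is only the direction of the arrows: the actor never sees the cost directly; it gets its training signal through the world model.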
2022-06-27 22:41:25 Nice thread on the perception of neural nets by the AI community over 4 decades. https://t.co/bmdxIqBfu7
2022-06-27 22:29:47 RT @DimiGeorgoulas: Well, I finished the NYU 2021 Deep learning course and to celebrate, I wrote a small review while I'm sharing my notes.…
2022-06-27 20:46:39 @GaryMarcus @ErnestSDavis @techreview Here. It's on OpenReview so you can comment/criticize.https://t.co/7ZgRtLIQWY
2022-06-27 20:18:07 @andrewgwils Fantastic! (Not that there ever was any doubt)
2022-06-25 03:25:22 RT @BeschlossDC: Thomas opinion makes it very clear that rights to contraception and marriage equality are in immediate danger.
2022-06-25 03:19:10 RT @ariannahuff: So, to sum up the Supreme Court’s week: life begins at conception and ends in a mass shooting.
2022-06-25 01:35:59 @pandaym Europe.
2022-06-25 01:30:22 @mhdamrollahi Yes, 2022, in English.
2022-06-25 01:24:16 RT @ChrisMurphyCT: Let’s be 100% clear. If Republicans win control of the House, Senate and White House two years from now, they will pass…
2022-06-25 01:24:02 RT @BeschlossDC: Women’s Strike for Equality, August 1970: https://t.co/R5hBKCwVw5
2022-06-25 01:19:22 RT @DannyDeVito: Supreme Court my ass
2022-06-25 01:03:08 RT @BeschlossDC: Knowingly plotting to destroy our democracy is about the most heinous thing any President of the United States could do.
2022-06-24 14:48:45 @jeublanc @Melissahei @YLecu @strwbilly @LorijnSZ @rhodricusack @NatMachIntell It's a very nice overview paper. Indeed, my proposal is aligned with many of the conditions you state in the paper. Just in time for me to cite it!
2022-06-24 14:36:14 @RhadamisteX Yes.
2022-06-24 14:30:31 RT @jeublanc: @Melissahei @YLecu @strwbilly A lot of @ylecun's proposal (not starting with full input details, attention biases/selective f…
2022-06-24 14:20:51 Tuesday evening at 20:00. Institut Jacques Monod in Paris. https://t.co/8odvahRKX0
2022-06-24 14:17:45 RT @MetaFrance: What will the next innovation in #AI be? Watch @ylecun's interview at @VivaTech 2022, where the VP &
2022-06-24 14:11:02 RT @peteryugray: A great overview of some big ideas from @ylecun about how we can get to human-level AI (spoiler: it’ll take more than just…
2022-06-24 13:58:52 RT @niallfirth: Scoop from @Melissahei and @strwbilly today. @ylecun has a bold new vision for the future of AI - but it raises plenty of q…
2022-06-24 13:38:34 Very nice article by @Melissahei at MIT Tech Review about an upcoming position paper of mine on a possible path towards autonomous AI, machine common sense, etc. Available soon at an ArXiv near you. https://t.co/MoPvZXcNNU
2022-06-24 13:34:39 @nearcyan About half. They are preventable deaths that would happen at a much lower rate were guns not so readily available. Even if you discount suicides, the rate of death by firearm in the US is still enormously higher than in all other OECD countries.
2022-06-24 13:24:03 RT @BeschlossDC: Hail the members of the House January 6 Committee, who may, at this moment, be in the process of helping to save our democ…
2022-06-24 13:20:13 A map of the rate of deaths by firearm in the US. Correlated with gun availability and the prevalence of Christianity. Which one is causally related to deaths by firearm? https://t.co/CrsYDytvjr
2022-06-23 23:31:48 @grbradsk @machine_quest Haha!
2022-06-23 17:59:52 RT @tyrell_turing: Very interesting! Fits with other work finding that time-limited humans are susceptible to adversarial images.Speculat…
2022-06-23 12:33:09 @RespectToX 10^13 synapses, to be exact. Only a factor of 10 larger than the largest current models (assuming 1 synapse is assimilable to 1 parameter). https://t.co/uyLvkzeqB3
2022-06-23 12:27:07 @dataghees @francoisfleuret The essence of FAIR's modus operandi is openness and reproducibility. Pretty much everything from FAIR is open-sourced. Why? Because, as good as we are, we don't think we have a monopoly on good ideas.
2022-06-23 12:18:59 @francoisfleuret Profitable != Profit-hungry. Profitable companies can afford to think long term and maximize positive long-term impact by being generous. Profit-hungry companies just want to maximize short-term profit at the expense of long-term positive impact.
2022-06-23 12:14:22 @balazskegl My upcoming position paper on a path towards autonomous intelligence.
2022-06-23 12:09:45 @yecchs Exactly.
2022-06-23 12:06:52 @machine_quest Obviously, the ability to move doors in the right place is encoded in every cat's DNA
2022-06-23 11:38:09 Pretty good world model and pretty good planning abilities. Not bad for 900 million neurons. https://t.co/9jvn807f0X
2022-06-23 11:28:50 @balazskegl We agree on that. The book is probably Brachman &
2022-06-23 02:05:41 OPT-66B is available. Unrestricted, open source. https://t.co/t6Xt2zdZO5
2022-06-22 16:51:43 RT @LeopolisDream: #NYU Deep Learning with @ylecun and @alfcnz, excellent visual explanations and grounded knowledge https://t.co/iGIUV8BW…
2022-06-22 16:35:38 RT @MetaAI: Stop by our booth 1019 at @CVPR from 6/21 to 6/23 to try our self-supervised learning demos, our avatar puppeteering and experi…
2022-06-22 16:33:38 An open-source, *trainable*, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2. https://t.co/vt1zVXu1Gx
2022-06-22 05:08:39 Large Language Models can't plan. https://t.co/EY1Im0u077
2022-06-22 04:44:29 RT @AlecStapp: "But where will we put all the waste?" seems like an extremely overrated criticism of nuclear energy given these facts: http…
2022-06-22 04:33:28 @maartengm Hence the "satirical"
2022-06-22 03:59:14 RT @KevinBankston: Big Responsible AI news today: in response to concerns around fairness in ads delivery, @Meta’s building a new machine l…
2022-06-22 03:49:31 We've had the Chinese Room Argument. We've had the Chinese Matrix Product Argument. Now we have the Chinese Ion Channels Satirical Argument. Pretty soon, they'll claim sentience isn't an emergent property of computational devices but requires some sort of mysterious life force. https://t.co/eM7K1zLiL7
2022-06-22 00:27:54 RT @say_cem: Recommended reading! By @ylecun &
2022-06-21 12:36:36 RT @koenfucius: Does symbolic manipulation need to be hard-coded, or can it be learned? The answers might change our understanding of how i…
2022-06-21 12:28:16 @wayneholmes 1. If my employer(s) constrained my thinking on the best path to progress in AI, I would change the employer, not the thinking. 2. You have the wrong idea about my motivations, and about my employer's intentions and modus operandi. 3. It's called Meta.
2022-06-21 12:21:12 RT @chriswolfvision: A nice read by @ylecun and Jacob Browning on symbolic AI vs. Deep Learning vs. Hybrid Models. I enjoyed its laid-back…
2022-06-20 22:44:36 RT @holmesjtg: Thanks to @ylecun and Jake Browning for helping us navigate the debate about the future of cognition in AI systems.https://…
2022-06-20 22:38:29 @vitalfunctions Agreed.
2022-06-20 16:31:15 @ElySpears Sure. But then you'll be exposed to a haystack of nonsense and will have to find the "best-quality arguments" needle by yourself. Perhaps you have the time and breadth of abilities to do that for each one of your fields of interest. I don't.
2022-06-20 16:24:32 The relevant quantity is the relationship between your number of tweets and your number of peer-reviewed papers. Or perhaps between your number of social media followers and your h-index on Google Scholar, as famously suggested in the past. https://t.co/uudUSYzb3i
2022-06-20 16:18:34 @pedropbg @boredyannlecun If I were facetious I'd say: "so neural nets with capsules finally became practical!" But that would be mean
2022-06-20 16:14:02 To avoid excessive hype and naysaying about AI, one merely needs to listen to the right people. You know, like *actual* AI scientists without a huge axe to grind (financial or philosophical). https://t.co/LNxM6ZZAOi
2022-06-20 16:08:16 @boredyannlecun OK man, you are hired to write my serious tweets. I'll just write the fun stuff.
2022-06-20 15:26:25 Same plan they have to reduce global warming: once in power, deny it even exists. https://t.co/0F8xbEgzR2
2022-06-20 10:13:08 Scientist at a startup = demos, product development, fighting for survival. Scientist in the engineering division of a large firm = product development. Scientist at an industry research lab = research, technology transfer. Your mileage may vary. https://t.co/ooraT34cM6
2022-06-20 10:05:42 Fun exercise from @ziv_ravid! https://t.co/GruwA5yarg
2022-06-20 09:43:24 RT @bobehayes: Can #deeplearning systems learn to manipulate symbols? The answers might change our understanding of how intelligence works…
2022-06-20 09:39:28 @JoseMPortilla That would be some sort of LispTorch, if I had my way
2022-06-20 09:31:38 RT @HLForum: Congratulations to @geoffreyhinton, @ylecun, Yoshua Bengio and also @demishassabis on receiving the 2022 Princess of Asturias…
2022-06-20 09:11:00 RT @ianbremmer: us: left govt, high inflation uk: right govt, high inflation germany: centrist govt, high inflation italy: everyone in go…
2022-06-19 07:43:52 @mart1oeil @OlivierBabeau @MetaAI Meta's operations are already carbon-neutral in terms of CO2 emissions. It's all detailed here: https://t.co/O89XYa0bHL
2022-06-19 07:41:28 @zachy_jones It's a game of leapfrogging. And research trends precede industry trends. Torch (Lua-based) came first. Then TensorFlow appeared and became dominant. Then PyTorch appeared and became dominant in research. Now JAX is becoming dominant for research at Google and a few other places.
2022-06-19 07:22:45 Would be great if US banks used IBAN, too, in addition to contactless card payment and chip&
2022-06-19 07:19:31 RT @benedictevans: @billt It helps to think of American as a much richer version of Brazil rather than a dysfunctional version of any Europ…
2022-06-19 07:14:29 The great competition between Deep Learning frameworks enters a new phase. Now that Google's TensorFlow has lost to Meta's PyTorch, Google is internally switching to JAX. https://t.co/nLHldPXBTW
2022-06-19 07:08:27 RT @OlivierBabeau: OK, no political tweets because of the pre-election quiet period. So here's just a fun fact, for amusement and to get up a little less…
2022-06-19 07:05:11 @togelius As in "interpolating associative memory"
2022-06-18 20:24:22 RT @deviparikh: Excited to be part of this workshop, and to give my first in-person talk in so long! Come check it out — if nothing else, I…
2022-06-18 20:18:59 @davidwhogg Until Putin invades Poland or Xi invades Taiwan. Then you might very well become part of the solution
2022-06-18 15:58:44 RT @NoelSharkey: AI hype and symbolic reasoning. Interesting article by @ylecun on the current limitation of DL to learn symbol processing.…
2022-06-18 09:33:53 @wellingmax The most recent one of those two accidents killed exactly 1 person.
2022-06-18 08:23:33 Nice study. TL
2022-06-18 08:12:47 @EtienneKlein From one contrary left-hander to another: you have all my sympathy.
2022-06-18 06:50:54 Worth repeating. https://t.co/xSfEp6lhTt
2022-06-17 20:49:12 @kareem_carr Wait! Larry and Sergey dropped out of *PhD* programs. Ballmer dropped out of an *MBA* (he has a BA from Harvard). And Elon Musk graduated from U Penn. They are not college dropouts. Also, Larry and Sergey eventually defended their PhDs, IIRC.
2022-06-17 20:41:22 @AlexMartin Nope, it's the "slightly conscious" one.
2022-06-17 20:39:41 @olujoe_1 @Twitter It's me, don't worry.
2022-06-17 20:38:52 @GaryMarcus @csabaveres I suppose that depends what you mean by "symbolically"
2022-06-17 20:34:33 @csabaveres @GaryMarcus That's a pretty ridiculous argument, sort of like "heads I win, tails you lose". The point is to get DL systems to learn to reason (symbolically or not) in ways that are compatible with gradient-based learning.
2022-06-17 20:27:43 RT @MetaAI: As impressive as #AI developments may be today, how does it compare to humans? At #VivaTech, @ylecun asserts: “Today, AI is ver…
2022-06-17 20:25:51 RT @MetaAI: Meta is thrilled to be a Platinum Sponsor of @CVPR. Join us in New Orleans 6/19-6/24 and meet our researchers presenting papers…
2022-06-17 20:22:02 Hilarity ensues from the complete disconnection of large language models from the underlying reality of the Real World. https://t.co/SFH3XY2wFh
2022-06-17 18:08:10 RT @USEmbassyFrance: Congratulations to the new foreign associates of the @AcadSciences, and in particular to the 4 scientists: @A_N…
2022-06-17 18:06:55 RT @bymaddyness: We talked artificial intelligence with @ylecun in the aisles of #Vivatech. How is #AI used today…
2022-06-17 18:05:26 @OriolVinyalsML @geoffreyhinton @demishassabis Many thanks.
2022-06-17 18:04:53 RT @MetaAI: We invite you to join this year's Open Catalyst Challenge and develop #AI techniques to accelerate catalyst discovery for renew…
2022-06-17 14:56:48 @boazbaraktcs I won't correct you because you are absolutely right.
2022-06-17 14:56:12 RT @boazbaraktcs: Good article. There's old debate not just about symbolic vs. neural computation, but also about "hard coding" vs learning…
2022-06-17 14:55:17 @Jake_Browning00 @GaryMarcus Well, a piece by @Jake_Browning00 and me, but mostly by Jake.
2022-06-17 13:35:09 RT @SpainMMG: Four artificial intelligence pioneers honoured with Spain's Princess of Asturias Award for Technical and Scientific Research.…
2022-06-17 13:34:07 @noranta4 Yes. Search strategies, e.g. for planning, will probably need to be hardcoded for the foreseeable future.
2022-06-17 13:32:40 @csabaveres They can write short and approximately correct Python programs by exploiting statistical regularities in code. But writing longer (and correct) programs will require some hierarchical planning abilities that current systems simply can't do.
2022-06-17 13:28:29 A real pleasure to discuss the recent progress and future impact of AI with @ericschmidt. A dialog masterfully moderated by @cedric_o. https://t.co/qjyxpDnUrc
2022-06-17 13:26:12 RT @VPantaloni: Yann LeCun @ylecun and Timothy Gowers @wtgowers at the @AcadSciences https://t.co/g1KZ8UK1MK
2022-06-17 13:25:49 RT @Madleen_Bultez: Top talk of the day: @ylecun @ericschmidt invited to share their vision of AI in the future and in Europe. Accent…
2022-06-17 13:24:49 A paper of ours in Noema about some philosophical questions surrounding AI research and its recent progress. https://t.co/pK3GxJa4Ny
2022-06-16 18:49:26 A piece by Jake Browning and me (mostly by Jake) in the philosophy magazine Noema about AI and human intelligence, and walls not being hit by the former on the way to the latter. https://t.co/I7Vlxtwy0J
2022-06-16 15:54:03 @wtgowers Ouch! Sounds like this could have happened at the Académie events.
2022-06-16 14:20:04 RT @MetaFrance: What research avenues could multiply the power of #AI tenfold? For @ylecun and #FAIR, "open research is the best…
2022-06-16 14:19:53 RT @MetaFrance: As impressive as the progress of #AI may be, how does it compare to humans? At #VivaTech, @ylecun reminds us…
2022-06-16 14:16:04 RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&
2022-06-16 09:00:01 A short profile in today's Les Échos. https://t.co/HJZU1lDMZl
2022-06-16 07:42:47 Very honored to be receiving the Princess of Asturias Award for Scientific Research, along with my dear friends and colleagues @demishassabis, @geoffreyhinton, and Yoshua Bengio.https://t.co/xbVT97s5Qo https://t.co/qV7bt14p66
2022-06-15 22:12:17 Amusing. A few illustrated quotes from my talk at the congress of the Société Informatique de France. https://t.co/4eAbXBY7cc
2022-06-15 22:06:02 RT @rtvenoticias: Geoffrey Hinton, Yann LeCun, Yoshua Bengio and Demis Hassabis, Princess of Asturias Award for Scientific Research and…
2022-06-15 22:05:03 RT @cristiancanton: Congrats to @ylecun and others for getting the Princess of Asturias award, one of the highest honors you can achieve in…
2022-06-15 22:03:59 RT @fpa: Do you know the careers of @GeoffreyHinton, @Ylecun, Yoshua Bengio and @DemisHassabis, Princess of Asturias Award for Research…
2022-06-15 22:03:41 RT @fpa: #ÚLTIMAHORA: Geoffrey Hinton, Yann LeCun, Yoshua Bengio and Demis Hassabis have been awarded the Princess of Asturias Award…
2022-06-15 21:25:59 Yoshua Bengio, @geoffreyhinton, @demishassabis, and I will be sharing Spain's Princess of Asturias Award at the end of October. I'm very honored by the award. https://t.co/lTDiFyhdLr
2022-06-15 11:23:21 RT @ptiberry: No, Google's artificial intelligence has not reached the stage of self-awareness, contrary to the claims of a…
2022-06-15 11:22:05 My acceptance speech at the Académie des Sciences is available on YouTube. It's short: 6 minutes 30 seconds. https://t.co/0Tf5RBlHJl
2022-06-14 15:06:43 RT @MilesCranmer: "Curriculum" for the first @FlatironInst ML x Science Summer School, which has been amazing so far. Lectures (ongoing) to…
2022-06-14 14:48:55 RT @AcadSciences: Only a few minutes left before the induction session for our new foreign associates! To follow at 14:50…
2022-06-14 14:48:00 RT @MetaFrance: At @VivaTech, @ylecun, VP &
2022-06-14 14:41:28 RT @AcadSciences: Follow live with us, right now, the ceremony in honor of our 16 new foreign associates: "The future of…
2022-06-14 14:40:40 RT @AcadSciences: Behind the scenes | The foreign associates of the @AcadSciences elected in 2021, gathered just now in front of the palais @I…
2022-06-13 16:16:29 Hilarious. https://t.co/ZXRXaoMN0v
2022-06-13 16:14:54 RT @Markzandi: Understanding what is behind the painfully high CPI inflation is key to understanding where it is headed and when. This tabl…
2022-06-13 07:11:46 Words of wisdom from @tdietterich https://t.co/tZVNs0rSMB
2022-06-12 15:11:46 @wamageed @lexfridman @JonHaidt The studies listed in the literature review are very mixed at best. This clearly does not constitute "overwhelming evidence". That said: there is no doubt that badly run social networks can have the negative effects you mention. The modern FB is designed to avoid them.
2022-06-12 08:01:50 @ItalyHighTech @GaryMarcus @elonmusk @Twitter Large transformers taking down hate speech and bullying caused the trolls to leave FB. 96.1% of all hate speech is taken down automatically, up from around 30% only a few years ago. We can thank Self-Supervised Learning and transformers for that.
2022-06-11 17:40:51 At Vivatech, June 16 at 14:15. https://t.co/7Jh4SwI1vF
2022-06-11 17:34:47 RT @paulkrugman: I'm almost at the end of a long European trip — mostly visiting friends and going to a few conferences, not having fancy c…
2022-06-11 17:28:58 @GaryMarcus @elonmusk @Twitter It's called post-2018 Facebook. And it's free.
2022-06-11 17:26:35 RT @phalpern: 'I am happy because I want nothing from anyone. I do not care for money. Titles or distinctions mean nothing. I do not crave…
2022-06-11 17:24:07 RT @AcadSciences: Don't forget our rendezvous Tuesday at 14:50 with @MartinHairer @NicolaSpaldinW WERNSDORFER @EvaStukenbrock @ylecun @A_N…
2022-06-11 14:23:12 A ranking of the most innovative French startups, produced by Le Point. https://t.co/gWhwJpXc4d
2022-06-11 13:00:26 @kchonyc I still remember when some engineering conferences insisted that you had to use a Microsoft Word template
2022-06-10 23:02:56 RT @CodeZ: The best deep learning course I have seen so far. Thank you @ylecun and @alfcnz. #AI #DeepLearning #100DaysOfCode https://t.c…
2022-06-10 22:00:34 RT @tyrell_turing: The Twitter-sphere sometimes portrays things as hopelessly broken in science. They're not. Let's remind ourselves of all…
2022-06-09 19:18:40 And now for a short and entertaining interlude... https://t.co/CH6CkCFrXM
2022-06-08 14:06:58 RT @DavidSimplot: In 2022, I have the pleasure of sponsoring the magazine @SophiaMetroMag! In the contents of issue 37: Toward a de-compartmentalization cr…
2022-06-08 14:05:53 @paulkrugman New York City is among the safest places in America. Among 3,143 counties in the US, Manhattan is the 11th safest, and Queens the 6th safest.
2022-06-08 01:44:10 RT @JoeBiden: In 1994, Congress passed a bipartisan assault weapons ban. Nine categories of semi-automatic weapons were included, like AK-4…
2022-06-08 01:07:41 RT @Nereide: #OTD in 1954, Alan Turing passed away: let's listen to the nice interview in English that Yann LeCun (@ylecun), pioneer in #AI…
2022-06-08 01:02:08 @adad8m You and me both!
2022-06-08 00:56:57 RT @McFaul: Americans don't need AR15s. We have plenty of other kinds of guns. This gun kills our children and scares our police officers.…
2022-06-07 07:54:41 @adad8m I'm really talking about the general class of gradient-based optimization, not strictly GD. I'm including preconditioning, stochastic optimization, 2nd order acceleration methods, and other techniques, many of which are used in deep learning.
2022-06-07 07:50:29 @1101011010nn @RWJE_BA Those are all gradient-free, hence slow. What performance do you get, and how long does it take to train something on ImageNet with these?
2022-06-07 07:43:06 @karpathy No. https://t.co/0HQzL2yLhi
2022-06-07 07:39:12 Complete version in English of an interview I did with RAI Radio3 Scienza. https://t.co/tyD3UC2Nq2
2022-06-07 07:35:37 No, brains don't build generative models at the pixel level. They learn abstract representations that *eliminate* noise, unpredictable stuff, and irrelevant information. The salvation is in Joint Embedding Predictive Architectures (JEPA). https://t.co/42ApHRbge9 https://t.co/g3oPzIliG2
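A toy numpy sketch of the joint-embedding idea. Everything here is illustrative, not the actual JEPA architecture: the encoder is linear and hand-set rather than learned, and the data is synthetic. Two views share abstract content, one view carries unpredictable noise, and prediction happens in representation space, so the noise never has to be reconstructed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two views of the same underlying content z; view y adds unpredictable noise.
z = rng.standard_normal((1000, 2))                  # shared abstract content
x = np.hstack([z, np.zeros((1000, 3))])             # view 1: content only
y = np.hstack([z, rng.standard_normal((1000, 3))])  # view 2: content + noise

# Hand-set linear encoder that keeps only the content dimensions
# (in a real JEPA this encoder would be learned, not hard-coded).
enc = np.zeros((5, 2))
enc[:2, :2] = np.eye(2)

sx, sy = x @ enc, y @ enc    # embeddings of the two views
# Predictor: least-squares regression from s_x to s_y, in embedding space.
W, *_ = np.linalg.lstsq(sx, sy, rcond=None)
err = np.mean((sx @ W - sy) ** 2)   # ~0: the noise dims were discarded by the encoder
```

A pixel-level generative model would be forced to predict the 3 noise dimensions of y and pay an irreducible error for them; predicting in embedding space sidesteps that entirely.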
2022-06-06 15:04:33 @patrickmesana @GaryMarcus It's not a rhetorical question.What if:(a) there is a better way to do ML optimization than gradient-based methods (very unlikely, but who knows).(b) there are other ways to do ML than by optimizing an objective function (unknown, so far).
2022-06-06 15:01:51 @GaryMarcus So "DL is part of the solution, but we need new components for reasoning"?Welcome to my world!
2022-06-06 14:52:22 @MFischetti The applied math / optimization community has come around on this question several years ago.They now realize that asymptotic convergence rates are irrelevant for ML and that the large scale of DL systems makes 2nd order methods impractical.https://t.co/F3p2NXjA4z
2022-06-06 14:50:32 @MFischetti As Léon Bottou and others have shown, there is no point in finding a perfect minimum on the training set.This will likely lead to bad generalization.That's one reason SGD works so well: it gets close enough to a minimum very quickly.https://t.co/ns2FhCYITq
2022-06-06 14:43:27 @aarbrk But that's the point: with large/overparameterized deep learning architectures, local minima don't seem to be a problem. Local minima are highly degenerate (flat in many dimensions) and largely connected with each other.
2022-06-06 14:40:37 @puffybsd Or perhaps "Gradient Indecent"
2022-06-06 14:39:53 @nileshbruh69 Try training something on ImageNet with simulated annealing, and come back with the result
2022-06-06 14:35:42 @mraginsky Not mentioning Yakov Tsypkin.
2022-06-06 14:30:56 @GaryMarcus 1. Gradient-based optimization is an ingredient for learning, not a complete set of components for human-level AI. 2. What could possibly "come next" to *replace* gradient-based opt for learning? Do you believe that: (a) gradient-free opt is bad? or (b) optimization is bad?
2022-06-06 12:48:10 Awesome work from @JeanRemiKing and team shows that activities in the layer hierarchy of a transformer trained with Self-Supervised Learning on speech and audio correlate well with activities in the hierarchy of areas of the human auditory cortex. https://t.co/C7tyRSbvos
2022-06-06 12:43:16 RT @mariashriver: On this day in 1968, before many of you might even have been born, my uncle, Robert F. Kennedy was gunned down while runn…
2022-06-05 20:27:01 RT @zeitzoff: A recent article by @JonHaidt claims that social media is behind most of our problems. https://t.co/TspXAfMHMI It seems st…
2022-06-05 19:09:09 RT @RepSwalwell: We are voting to protect kids from the next school shooting. Every GOP member is opposed. So I asked my GOP colleagues, "W…
2022-06-05 15:25:58 A great list of academic studies about the effects of social media on political dysfunction: polarization, echo chambers, emotional amplification, incitement to violence, trust in institutions, populism... https://t.co/8wNPoHC6u1
2022-06-05 15:10:24 My friend Léon Bottou had to write a simple piece of self-contained code to prove to people that a 3-line *stochastic* gradient method could beat sophisticated methods by orders of magnitude *even* for convex problems (SVM, CRF). https://t.co/ZRKbaEPEPP
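The 3-line stochastic gradient core the tweet describes can be sketched as follows. This is an illustrative toy (1-D least squares with made-up data and learning rate), not Bottou's actual SGD/SVM code: sample one example, compute the gradient of its loss, take a small step.

```python
import random

# Toy data from a noise-free linear model y = 3*x, so SGD should recover w = 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

# The "3-line" stochastic gradient core: pick one sample, compute the
# gradient of the squared loss (w*x - y)^2 with respect to w, step downhill.
w, lr = 0.0, 0.05
for _ in range(2000):
    x, y = random.choice(data)
    w -= lr * 2.0 * (w * x - y) * x

print(round(w, 2))  # -> 3.0
```

No batching, no line search, no second-order information: for this kind of objective, the cheap noisy steps get close to the minimum very quickly, which is the point being made upthread.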
2022-06-05 15:01:07 @RWJE_BA Do you have an alternative to gradient-based optimization for learning?
2022-06-05 14:56:59 @kntz @lexfridman @JonHaidt What you may observe on Twitter does not necessarily apply to Facebook.
2022-06-05 14:55:49 @wamageed @lexfridman @JonHaidt The "common narrative" I mention is one you find in the media, not in scientific publications. Unlike climate change, for which there is a quasi-unanimous consensus among scientists, there is no consensus on the impact of social networks on society among social scientists.
2022-06-05 14:52:54 @thejuicywitcher @lexfridman @JonHaidt None of the scientists whose names I mention are "on the payroll" of the industry.
2022-06-05 14:24:09 @RWJE_BA I submit that GD is part of the solution to human-level AI.
2022-06-05 14:21:50 @irfanbulu I *do* hang out with physicists!
2022-06-05 14:16:22 At the last NIPS held in Denver in 2000, a very prominent ML researcher asked at dinner "what is the most important thing we've learned in ML?" My answer: "the power of gradient descent." His dumbfounded facial expression revealed how stupid he found my answer to be.
2022-06-05 14:11:19 I've been trying to convince many of my more theory-oriented colleagues of the unbelievable power of gradient descent for close to 4 decades. 1/2 https://t.co/T4oobT8P5w
2022-06-04 20:37:50 @lexfridman @JonHaidt Bibliography on the issue of the impact of social media on society. https://t.co/8wNPoHC6u1
2022-06-04 20:35:24 @lexfridman @JonHaidt Haidt vs Gail vs Gentzkow vs Tucker. TL
2022-06-04 19:46:00 RT @awnihannun: A short thread on forward and reverse mode autograd: A great way to internalize the complexity difference between forward…
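The complexity difference the retweeted thread alludes to can be made concrete with a minimal dual-number sketch: forward mode needs one pass per input coordinate to assemble the gradient of a scalar function, while reverse mode (what deep-learning frameworks use) recovers the whole gradient in a single backward sweep. The function and numbers below are illustrative only:

```python
class Dual:
    """Forward-mode AD value: carries a number and one directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        # Product rule on the derivative part.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(xs):
    # Illustrative scalar function: f(x) = x0*x1 + x1*x2
    return xs[0] * xs[1] + xs[1] * xs[2]

point = [2.0, 3.0, 4.0]
# Forward mode: one pass per input coordinate to build the full gradient,
# so cost scales with the number of inputs. Reverse mode would obtain the
# same gradient of this scalar output in a single backward pass.
grad = []
for i in range(len(point)):
    duals = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(point)]
    grad.append(f(duals).dot)

print(grad)  # -> [3.0, 6.0, 3.0], i.e. [x1, x0 + x2, x1] at (2, 3, 4)
```

This asymmetry is why reverse mode wins for ML, where the output is a single scalar loss and the inputs are millions of parameters.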
2022-06-04 14:03:08 @oliverlibaw There is a ray of hope at the end of the tunnel, though.
2022-06-04 13:40:10 @oliverlibaw And you are whaling in agony?
2022-06-04 13:30:29 @chris_jwala You don't seem to have read the article.
2022-06-04 13:01:57 Interesting piece on the debate in academia about the effects of social media on society, polarization... Haidt vs Gail vs Gentzkow vs Tucker. TL
2022-06-03 22:36:12 I got a whiff of sole-crushing fishy tench from those floundering puns. https://t.co/9G21XaRl1V
2022-06-03 21:38:48 @Reda_Action Tradition has it that French citizens residing abroad are elected as foreign members. It turns out to be an even more exclusive club than the French membership. For example, the majority of the foreign computer scientists are Turing Award laureates. https://t.co/nhJMnsVzH0
2022-06-03 21:31:55 @HappyAar 92% of Americans are in favor of background checks. A majority is in favor of banning military-style rifles (e.g. AR-15 and such). https://t.co/kxDgLoQR1r
2022-06-03 19:45:07 A revolting piece of statistics. https://t.co/MZTcqfkJiV
2022-06-03 19:22:46 On June 15, at the congress marking the 10th anniversary of the Société Informatique de France. https://t.co/KK31xcAKYY
2022-06-03 15:09:39 @svogel @iamtrask s/though/through/
2022-06-03 13:03:17 @iamtrask "Advancing the state of the art in AI though open research for the benefit of all"
2022-06-03 13:01:50 @mierrashid @iamtrask That's the mission statement for the broader Meta AI organization, not FAIR specifically.
2022-06-03 12:54:08 RT @pierrepinna: Freely-available @nyuniversity course on #DeepLearning to check out from @ylecun and @alfcnz, including videos, slides, an…
2022-06-03 12:51:48 Thursday, June 16. https://t.co/OBEihAppU8
2022-06-03 12:49:50 @csimons84682057 @lxbrun @NablaTech No.
2022-06-03 12:42:50 @ch3njus That would have been very meta.
2022-06-03 12:33:02 RT @AlecStapp: China is going to build >
2022-06-03 12:25:14 @SMehrizi @davidwhogg Lots of issues and limitations. If there weren't any issues or questions, I wouldn't need to be doing research on it, would I? And I certainly wouldn't start *every* *single* talk with a list of things DL can't do and obstacles to progress.
2022-06-02 20:51:15 RT @AcadSciences: 5⃣ "The future of #AI", by @ylecun, prof. @nyuniversity, Chief #AI Scientist @Meta, elected in the Sciences section #mécani…
2022-06-02 20:50:49 Reception ceremony for the new foreign members of the Académie des Sciences, June 14 at 2:50pm. https://t.co/FIw3XEvovL
2022-06-02 20:05:22 Schrep comments on the organizational changes of AI R&
2022-06-02 20:02:06 Nabla rolls out its AI/ML-powered personalized medicine platform, making its SDK and API available to other healthcare companies. Bravo @lxbrun and the @NablaTech team. https://t.co/JGROTFVKYI
2022-06-02 19:58:55 RT @lxbrun: A very thoughtful article from @riptari about what we do at @NablaTech and why.https://t.co/ro8CZePU7I
2022-06-02 19:12:06 @assadollahi No, not really. VR is part of Meta Reality Labs. But Meta RL is much more than VR. Meta RL works on new technologies to connect people to the digital world and to each other. Think of it as the next computing platform, which includes the Metaverse. AI is a key component of that.
2022-06-02 19:09:46 @alperenobot There is no simple recipe. But FAIR has had a huge impact on Meta's operations. This was largely the result of techniques originally developed at FAIR that were turned into deployable tech by Applied R&
2022-06-02 18:07:38 AI has become so central to operations that Meta AI groups working on product-oriented projects will now be part of the corresponding product groups. 3/N, N=3
2022-06-02 18:07:11 - FAIR is still managed by Joëlle Pineau and Antoine Bordes. Joëlle, Antoine, and I co-lead FAIR. They do the hard work. I help them with strategy. - FAIR now stands for "Fundamental AI Research"! 2/N
2022-06-02 18:06:35 Big changes for AI R&
2022-06-02 18:05:18 @an_open_mind @MetaAI Thanks for all you've done, Jérôme!
2022-06-02 15:14:19 @davidwhogg Answer: no. https://t.co/yLpUqghU6Y
2022-06-02 12:29:17 RT @MetaAI: Large language models can memorize examples in their training data, but this phenomenon is not yet well understood. In a new pa…
2022-06-02 12:22:40 @AnsDome @alfcnz We ended up using some custom variation of VAE: https://t.co/OYxDicpwKN (ICLR 2019). Now we are experimenting with Joint Embedding Predictive Architectures: https://t.co/42ApHRbge9
2022-06-02 12:15:29 @schrep @sama I have the perfect name for these groups: information bubbles.
2022-06-02 12:08:21 RT @ASlavitt: The US today faces the ultimate indictment. We are a country not worthy of its own children.
2022-06-02 12:00:09 RT @deviparikh: (Late to the game, but) Inspired by @xtinuccia's template for @Artchild_io. CC @Toni_Marinara. https://t.co/3YzOWwVKI3
2022-06-02 11:55:34 @lxbrun Amazing journey!
2022-06-02 11:54:10 An amazing journey! Keep blazing trails, @lxbrun and the @NablaTech team. https://t.co/B4tAXrm7sE
2022-05-31 14:47:56 RT @FabriceFrossard: A prestigious lineup of speakers: Bernard Arnault, @Cheydema (Orange), Corinne de Bilbao (Microsoft), @ryros…
2022-05-31 14:46:36 Speaking at @VivaTech in a couple of weeks. https://t.co/NdLqDjSRZZ
2022-05-30 20:05:07 RT @AcadSciences: #BonneNouvelle, the decree for the election of our new foreign associates has been published in the Journal Officiel: https://t.co/ChK63tQB7e…
2022-05-30 19:58:53 RT @therobotbrains: @SomeRobot1 @ylecun @ylecun is also not our guest this week but we got it - you can’t miss this conversation! https://t…
2022-05-29 20:48:48 @vo_d_p @JFPuget @francoisfleuret @rasbt Along with many other things.
2022-05-29 20:47:03 RT @DavidSimplot: Watch the #Replay of @ylecun's honorary doctorate ceremony at @Univ_CotedAzur, with his lecture "Dans les pron…
2022-05-29 20:45:59 RT @dpkingma: Around the same time, I started reading up on all of @ylecun's papers and online videos. Loved his talk "Who is Afraid of Non…
2022-05-29 18:59:44 @JFPuget @francoisfleuret @rasbt Perhaps. But the nice thing about open basic research is that the entire world profits from it. Not just its main sponsor. Want an example? Convolutional nets.
2022-05-29 18:38:23 RT @PyTorchPractice: Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch https://t.co/r9fQAqWzU6 #deeplearning #mac…
2022-05-29 14:34:08 @Gage_W_T @MeenaArjune Aurora: AR-15; Uvalde: AR-15; Buffalo: AR-15; Boulder: AR-15; Midland: AR-15; Orlando: AR-15; Parkland: AR-15; Las Vegas: AR-15; Sandy Hook: AR-15; Waffle House: AR-15; San Bernardino: AR-15; Poway synagogue: AR-15; Sutherland Springs: AR-15. See a pattern?
2022-05-29 14:31:12 @JFPuget @francoisfleuret @rasbt There are umbrella agreements between FAIR and a number of universities that allow academic and FAIR researchers to collaborate and allow academic researchers (including PhD students) to use the FAIR computing infrastructure. It's not that different.
2022-05-29 14:21:14 RT @vardi: :-( https://t.co/AnVV99LSPA
2022-05-29 14:20:33 RT @cosmo_shirley: Very excited about our recent paper on using machine learning to estimate the Galactic accelerations! Led by @JakeNiba…
2022-05-29 12:52:08 RT @BeschlossDC: Nixon on his secret tapes, May 1972: "I don't know why any individual should have a right to have a revolver in his house…
2022-05-29 12:37:33 @francoisfleuret @rasbt So you would be fine with rules that would inevitably *slow down the progress of our field* just for the purpose of counting points more fairly? Isn't that unethical? If physicists had similar rules: No LHC, no Hubble, no JWST, no DoE supercomputer for climate modeling.
2022-05-29 12:18:01 @BionicDad55 @jmsykes15 @meganinlisbon Western European countries are immensely less religious than the US, yet their people don't kill each other at nearly the rate Americans do. Difference? No guns.
2022-05-29 12:13:39 @GeorgeSFrankl I'll give you a logical reason: Aurora: AR-15; Uvalde: AR-15; Buffalo: AR-15; Boulder: AR-15; Midland: AR-15; Orlando: AR-15; Parkland: AR-15; Las Vegas: AR-15; Sandy Hook: AR-15; Waffle House: AR-15; San Bernardino: AR-15; Poway synagogue: AR-15; Sutherland Springs: AR-15. See a pattern?
2022-05-29 05:39:14 @MeenaArjune Does the fact that anyone can buy an AR-15 make you feel safer or not? So you protect yourself with an AR-15?
2022-05-29 05:35:45 @maxpayne123477 These folks would be safer if there were fewer guns. Particularly if there were no AR15s.
2022-05-29 05:32:22 @provilkov Domestic authoritarians are actually *supported* by gun advocates (and vice versa, as we just saw), ready to believe Great Lies and take arms against fellow citizens who are in favor of democracy. As for foreign authoritarians, there is an army for that.
2022-05-29 05:26:32 @menomnon Who says I can't invent colorful expressions? In this case, by interpolation between lily-livered and yellow-bellied.
2022-05-29 05:20:47 RT @MetaAI: Read about new developments in deep learning with authors and researchers Daniel A. Roberts (@danintheory), Sho Yaida (@Shoyaid…
2022-05-28 20:54:11 @ronbrachman I've been interested in the question of common sense for quite a while.
2022-05-28 20:51:40 @seanmcbride That's a fantasy. Historically, regimes that devolved into oppressive dictatorships were first elected, often with strong support from people who are ideologically aligned with the idea that violence and gun ownership can solve problems. You know, fascists.
2022-05-28 20:37:19 @starkweatherdg I would leave for a more civilized region or country long before that. Providing safety for residents is the first mission of a functioning government.
2022-05-28 20:33:03 @Veronickapinke Deaths by firearms are *much* rarer in every developed liberal democracy than in the US. The exceptional unicorn is the US.
2022-05-28 20:30:37 @oliviernovel I'm talking about the average citizen, not the army or the police.
2022-05-28 20:29:25 @ducha_aiki Ordinary US citizens, obviously.
2022-05-28 20:28:44 @GeorgeSFrankl I believe that a proper system of laws, law enforcement, and justice can protect your daughter considerably more efficiently than a gun. Current gun laws are insufficient. If you want a gun, go right ahead. But why insist that anyone should be able to get an AR15?
2022-05-28 20:22:13 @bitenthusiastic Almost no one in developed liberal democracies owns guns. They don't seem to need them for self defense or any other purpose. Almost no one, except a number of yellow-livered Americans.
2022-05-28 20:18:26 @CalvinLow5 I'm talking about ordinary citizens of a purported liberal democracy. Of course, law enforcement and the military should have weapons. The Swiss have to keep their military rifle at home, but can't have ammo.
2022-05-28 20:15:49 @MoreeSpinne Australia banned many kinds of firearms in 1996 and established a gun buyback program that took hundreds of thousands of guns out of circulation. Firearm homicides and suicides were greatly reduced as a consequence. https://t.co/gTly2z7mpF https://t.co/kU0q3lt66Q
2022-05-28 20:01:19 @seanmcbride Obviously, an armed conflict is an armed conflict. And at least some proportion of law enforcement officers should be armed. We are talking about citizens in what is meant to be a liberal democracy here.
2022-05-28 19:59:25 @Veronickapinke How about living in a place where the laws, the justice system, and the police allow citizens to live in peace without fear? Most liberal democracies in the developed world provide that. What about the US?
2022-05-28 19:41:31 This https://t.co/PjyHXZs1jS
2022-05-28 19:37:43 Guns are for paranoid, yellow-livered cowards.
2022-05-28 19:24:43 @ronbrachman I'm preparing a long paper about this architecture. The short version, in the form of a blog post, is here: https://t.co/42ApHRbge9
2022-05-27 10:11:08 @anandcheam Black-Scholes was very well understood. That didn't stop it from being dangerous.
2022-05-27 10:07:17 @kchonyc @yoavgo Messenger pigeons.
2022-05-27 10:04:40 Precisely. https://t.co/ESqUk3eNJS
2022-05-27 10:00:36 @neurograce @nyuniversity @NYUPsych @NYUDataScience Welcome to NYU, Grace!
2022-05-26 06:28:56 As always... https://t.co/KFveJMvmQX
2022-05-26 00:41:58 RT @Radio3scienza: Here they are, together, the two protagonists of today's episode: @TomasoPoggio and @ylecun, fathers of artificial intelligence…
2022-05-26 00:38:33 @BethCarey12 @PhilosophorumQ Before you can do science, you need to hypothesize a model. That requires creative intuition. The science comes after. Right now, we are debating which class of hypotheses is most likely to be useful.
2022-05-26 00:34:01 @gottfriedmath Humans are pretty bad at causal inference. If they were so good at it, they wouldn't assign the cause of unexplained or random phenomena to imaginary deities, and religion would not exist.
2022-05-25 16:18:58 RT @tribelaw: 50 GOP Senators could vote for H.R. 8, universal background checks, TODAY. Every Senator who backs McCONNELL in blocking a vo…
2022-05-25 16:04:30 RT @paulkrugman: In a different time zone, so I woke up to the Uvalde news. As usual, the people making such things possible are demanding…
2022-05-25 15:35:11 @PhilosophorumQ My claim is merely that gradient-based learning is part of the solution.
2022-05-25 14:52:45 Contrary to claims that I somehow "dismiss" the idea of reasoning in DL systems, I've long listed 3 main challenges to AI in my talks of the last several years, one of which is "learning to reason, in ways that are compatible with gradient-based learning" https://t.co/Oxc7dR8jF4 https://t.co/VoCdb8h3mP
2022-05-25 14:45:52 RT @ConfindustriaUd: Thursday, 26th MayConfindustria Udine Academy will host @ylecun, Chief #AI Scientist at #Meta: the scientific min…
2022-05-25 14:45:18 RT @s_scardapane: *Contrastive and Non-Contrastive SSL Recover Global and Local Spectral Embedding Methods*by @randall_balestr @ylecun I…
2022-05-25 03:05:06 RT @Radio3scienza: They are two true giants of artificial intelligence. Do you recognize them? @TomasoPoggio and @ylecun, both transplanted to the…
2022-05-25 03:01:32 RT @arXiv_Daily: Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methodshttps://t.co/…
2022-05-25 02:53:20 @_iBassam You can call evolution by any name you like. But it's still evolution.
2022-05-24 20:53:04 @absudabsu Octopuses don't have caretakers, and they get pretty smart pretty quickly.
2022-05-24 20:52:20 @_amirrahnama It's a bit like asking: "is there work in biochemistry that suggests that the biochemical process in living cells has some parallels with thermodynamics?" Well, yes! There is no way to escape that.
2022-05-24 20:47:25 The connection between SSL and spectral embedding methods. SSL uses a similarity graph, and so does spectral embedding (SE). Contrastive methods (e.g. SimCLR) are analogous to global SE. Non-contrastive methods (e.g. VICReg) are analogous to local SE. https://t.co/PqeDRTvW0N
2022-05-24 20:29:02 Pretty much exactly what I have said for several years: how to make reasoning compatible with gradient-based learning. https://t.co/gB2OS1FS8N
2022-05-24 16:21:32 @bjamalbhutta With very sparse rewards, you can't learn much in 21,900 hours.
2022-05-24 16:17:05 @kauga1241 Simulated environment.
2022-05-24 16:14:36 @ajaydiv I don't think Linda's work conflicts with what I stated.
2022-05-24 16:10:16 @EncodeThis Yes, I'm aware of Linda Smith's work and have talked to her.
2022-05-24 13:17:22 @AngeloDalli Sure. We don't have good smell sensors though.
2022-05-24 13:11:11 @ISusmelj It's an advantage, but not a huge one given how people with one functional eye develop normally.
2022-05-24 13:08:45 @TonyZador It's a trade-off between adaptability and learning time. More adaptable organisms will rely more on learning and less on genetic hardwiring. But also, I believe a lot of "hardwiring" does not hardwire behavior but hardwires surrogate objectives that the cortex optimizes.
2022-05-24 13:05:10 @crude2refined It's still 800 million frames, even if the frames contain less information.
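Rough arithmetic behind the 800-million-frames figure, using the 21,900 waking hours mentioned upthread. The effective frame rate of ~10 per second is my own assumption for illustration, not a number from the thread:

```python
# Back-of-the-envelope check: how many "frames" fit in 21,900 waking hours?
hours = 21_900                # waking-hours figure from the thread above
fps = 10                      # assumed effective frame rate (illustrative)
frames = hours * 3600 * fps   # hours -> seconds -> frames
print(f"{frames:,}")          # -> 788,400,000, on the order of 800 million
```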
2022-05-24 13:03:51 @KostasPenn True. But I'm talking about the learning cycle time.
2022-05-24 13:03:00 @liuyao12 Not as much as you think. The entire optic nerve only has 1 million fibers. So, it's like a pair of 1000x1000 images. Except that the resolution is high at the center and decreases with eccentricity.
2022-05-24 13:01:01 @BlindDou TL
2022-05-24 12:59:09 @facts_first_ Okay, how about octopuses? They get really smart within a few months and never meet their parents.
2022-05-24 12:56:50 @FlipNothling The human brain's power consumption is more like 20 or 25 Watts. With current fabrication technology, electronics would require at least 100,000 times that much for a similar computational power. That has nothing to do with whether you use artificial neural nets or not.
2022-10-29 07:50:59 @franciscoortin @ComputingOviedo @demishassabis @fpa A pleasure to hear about your research!
2022-10-29 07:49:53 RT @invest_asturias: #ArtificialIntelligence | #AI pioneers @ylecun and @demishassabis receive the 2022 Princess of Asturias Award for Tech…
2022-10-29 07:44:45 Picture gallery of the Princess of Asturias Awards ceremony. What an incredible event! https://t.co/PlAdVRKClX
2022-10-29 06:49:18 Many thanks to Princess Leonor, and to the Foundation of the Princess of Asturias Awards. https://t.co/rlnIoL7b73
2022-10-29 06:22:56 @JozsefSzalma [reference needed]
2022-10-29 06:20:01 @tdietterich At the risk of sounding like a nativist, I suspect that the motivation that causes this behavior is evolved rather than planned, learned, and derived from some higher-level notion of empathy.
2022-10-29 05:51:18 Elon is putting himself into an untenable situation: conflicts of interest between content moderation on Twitter and Tesla's business in various countries. From FB's former head of security. https://t.co/eqFUAVmT22
2022-10-28 15:35:10 @alscor1966 Hiding behind a bowtie to the left of King Felipe.
2022-10-28 15:32:23 RT @martarroyo: Yann Le Cun (@ylecun). Passionate about the concept of intelligence since childhood, he discovered the perceptron after reading a book…
2022-10-28 15:31:21 @jamesbuchanan27 Lots. Starting with Memory Networks, end-to-end Memory Networks, key-value Memory Networks, all from FAIR.
2022-10-28 15:29:24 @xuanhao_cao @alscor1966 In this case, it is the royal family honoring us!
2022-10-28 15:27:04 @FelixHill84 As long as the regime is a liberal democracy... Doesn't hurt that the royals seem like very nice people. Adam Michnik, another laureate, said "when in Poland, I'm a Jacobin revolutionary. When in Spain, I'm a monarchist."
2022-10-28 15:02:18 RT @fpa: Audience of the King and Queen, the Princess of Asturias and the Infanta Sofía with the laureates of the 2022 #PremiosPrincesadeAsturias. @…
2022-10-28 14:53:28 @AlexKontorovich Obviously.
2022-10-28 14:52:47 An audience of the royal family of Spain with the laureates of the Princess of Asturias Award. https://t.co/HhUptU6hwO
2022-10-28 09:51:38 RT @AstroCKragh: Graph Networks show huge potential for physics. But, in astrophysics, are there any *true* graph structures? YES! Causal…
2022-10-28 09:36:45 Effective altruism: limulus version. Doesn't require too many neurons, apparently. https://t.co/csRSRmFwaG
2022-10-28 09:24:28 RT @paulkrugman: That would be the 1950s in which the top tax rate was 91 percent and a third of private-sector workers were union members
2022-10-28 07:30:11 RT @MetaAI: Hey #ECCV2022, whether it was for: Demos of Project Aria. Our presentation on Make-A-Scene. Or maybe you just stopped…
2022-10-28 07:25:52 @loiannog @BotJunkie @ieeeras @2022Iros @nyutandon @nyuniversity Congratulations !
2022-10-27 23:06:06 RT @giacaglia: Amazing to see how many GPUs each organization uses. This seems to be a good proxy of how much they are adopting neural nets…
2022-10-27 22:31:25 @robbensinger I think I'm more to the right, near the Y axis.
2022-10-27 17:28:37 @SahilAk27054390 @demishassabis Sadly, Geoff and Yoshua couldn't make the trip.
2022-10-27 17:24:22 s/to/two/
2022-10-27 17:19:02 @brandondamos Fame!
2022-10-27 16:32:13 RT @fpa: The scene is set and everything’s ready for the King, Queen, Princess of Asturias and Infanta Sofía to receive the 2022 Princess o…
2022-10-27 16:25:50 RT @fpa: Everything is ready for the audience that the King and Queen, the Princess of Asturias and the Infanta Sofía will offer on Friday to the laur…
2022-10-23 13:25:39 RT @DavidDeutschOxf: Why isn't there a White Mirror show that guesses what may happen when the technology that improves our lives goes on t…
2022-10-23 13:23:36 @pmddomingos Errr, also Western Europe never adopted the whole "dictatorship of the proletariat" thing, and largely stuck with liberal social democracy once they tried it (unlike much of the US).
2022-10-22 17:30:09 @TonyZador There is a direct line from Hubel &
2022-10-22 17:28:25 @pfau @KordingLab You are wrong. Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-20 21:32:44 "Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution" The NeuroAI manifesto: Neuroscience has long been an important driver of progress in AI. To accelerate progress in AI, we must invest in fundamental research in NeuroAI. https://t.co/JbjeNIhnB7 https://t.co/CiNLUb8tf7
2022-10-20 21:24:43 RT @MetaAI: @SiVola @ylecun Thanks to @HuggingFace, you can try demos for both: Hokkien: https://t.co/RICjW1Aacd SpeechMatrix: https://t.co…
2022-10-20 16:37:25 Wednesday 27, in Oviedo. https://t.co/11orpd4v5d
2022-10-20 14:09:16 RT @gabrielpeyre: The celebrated Iterative Soft Thresholding (ISTA) algorithm to solve the LASSO is a special case of the Forward-Backward…
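The ISTA update mentioned in the retweet is short enough to sketch: a gradient step on the smooth least-squares term, followed by soft-thresholding, which is the proximal step for the ℓ1 penalty. A pure-Python toy with an identity design matrix, where the fixed point is just elementwise soft-thresholding of b; sizes, data, and step size are illustrative:

```python
def soft(v, thr):
    """Soft-thresholding: the proximal operator of thr * ||.||_1."""
    return [max(abs(vi) - thr, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def ista(A, b, lam, t, iters=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft thresholding.

    t is the step size (should be <= 1 / ||A^T A||)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - b, then gradient g = A^T r of the smooth term.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by the proximal (shrinkage) step.
        x = soft([x[j] - t * g[j] for j in range(n)], t * lam)
    return x

# Identity design matrix: the solution is elementwise soft-thresholding of b.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [2.0, 0.1]
x_hat = ista(A, b, lam=0.5, t=1.0)
print(x_hat)  # -> [1.5, 0.0]
```

Note how the small entry of b is shrunk exactly to zero: the proximal step is what produces sparsity, which is the point of framing ISTA as a Forward-Backward splitting.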
2022-10-20 09:32:18 RT @pierrepinna: #MachineLearning @ylecun's Version of Autonomous Machine Intelligence https://t.co/CCDq1qejYO Giving the ability to mac…
2022-10-20 06:24:42 VentureBeat writes about Meta AI's Universal Speech Translator project. https://t.co/p0T9RQQGaL
2022-10-20 06:23:01 RT @boztank: One step closer to the universal translator! Thousands of languages around the world have no standard written form, and today…
2022-10-20 00:53:24 @arnauddsm @adjiboussodieng When 500 years old you are, look as good will you not! https://t.co/ZcgFUr3IQH
2022-10-19 23:54:56 @adjiboussodieng So, if you are 60, that makes me
2022-10-19 21:29:45 RT @BennieMols: 10 years after the breakthrough of Deep Learning, the 9th @HLForum organized a panel discussion (a.o. 3 Turing Award winner…
2022-10-19 20:38:31 RT @jeremyhsu: AI can outplay expert human players in a simplified version of the board game Diplomacy. But the cooperative aspect makes i…
2022-10-19 20:12:08 RT @LerrelPinto: Almost unlabeled data is the "secret sauce" for today's ML, but how do we use uncurated datasets in robot learning? Con…
2022-10-19 20:08:18 Dataset for speech-to-speech translation. https://t.co/ZS4NedARQN
2022-10-19 17:21:39 English <
2022-10-19 15:13:18 @kevin_zakka They got into the habit of calling successive generations of AI hardware after famous natural sites and national parks. They didn't ask any of the numerous French/Spanish speakers at FAIR about that one. Innocent multilingual hash code collision!
2022-10-19 13:11:03 @rasbt Precisely. Constructive debates, which may include harsh critiques, are fine. In fact, they are necessary! But "gotcha seeking" is a waste of time for the target, for the source, and for the community.
2022-10-19 13:04:50 RT @yuxiangw_cs: They say deep learning is just curve fitting. But how good is it in curve fitting exactly? Are DNNs as good as, or even be…
2022-10-19 12:50:09 @JeanRemiKing @theASSC @NicPes @StanDehaene Félicitations Jean-Rémi!
2022-10-19 12:38:30 (exceptionally, the talk will actually start at 11:15am EST)
2022-10-19 12:35:54 Giving a talk today at 11:00am EST in the van Vreeswijk Theoretical Neuroscience Seminar series (VVTNS). https://t.co/PwAUjIOtWL
2022-10-30 08:17:35 RT @schrep: So many high quality founders building non-zero sum answers to the climate crisis at #toughtechsummit. Food ground in waste…
2022-10-30 08:17:22 "True remote presence [through the metaverse] is a game changer for climate" https://t.co/lacTkf8Lg8
2022-10-30 08:08:42 @ZainulA40877140 What is happening in the field is the *opposite* of the "exclusion of other methods [than ConvNets]". There is a *huge* amount of architectural exploration. We can debate whether there is enough originality in that, but there are huge incentives to devise new architectures.
2022-10-30 08:04:40 @cristiancanton @MetaAI Gràcies Cristian!
2022-10-30 07:58:46 Four of the coauthors are senior members of FAIR: Bottou, LeCun, Vincent, and Weston. https://t.co/q2tD9M1x3L
2022-10-30 07:44:33 @MilesCranmer @ChengSoonOng @earnmyturns @bschoelkopf @smolix @jaseweston 4 of the coauthors are at FAIR: Bottou, LeCun, Vincent, and Weston. That tells you something.
2022-10-30 07:41:52 @talrid23 Not just academia, research in general. It makes sense too: research is about exploring new things. Criteria for papers are different from criteria for practical products. Also, mixing Conv at the bottom layers with transformer modules at the top makes sense to me (DETR-like).
2022-10-30 01:36:28 @natwitte You got it backwards. This is royalty honoring Science.
2022-10-30 01:35:15 @themintsv France, Ireland, Italy, Germany, and lots of others in the Eastern part of the EU.
2022-10-29 16:21:37 A new flavor of ConvNet crushes various flavors of transformers (as well as state-space models) for sequence modeling with long-range dependencies. https://t.co/EVvYsHGnp8
2022-10-29 11:43:18 @courchayj1 Those policies are not designed by a single person, nor by engineers, but by a large body of people with very diverse backgrounds (human rights, law, politics, social science...). Additionally, FB has an Independent Oversight Board to arbitrate content policy disputes.
2022-10-29 11:40:41 @courchayj1 FB has asked governments of liberal democracies to define what constitutes acceptable and unacceptable content online, because it doesn't see itself as having the legitimacy to do so. The response has been largely nonexistent. Hence FB had to establish its own content policies.
2022-10-29 11:35:50 @courchayj1 Perhaps someone whose mission in life is to connect people with each other.
2022-10-29 11:28:50 @courchayj1 Avoiding dictatorship includes preventing authoritarian forces from corrupting the democratic process by spewing misinformation on social media. "In order to maintain a tolerant society, the society must retain the right to be intolerant of intolerance." https://t.co/9NnIwTcBPx
2022-10-29 11:23:17 @andrei_no_no Illegal content in the EU includes hate speech, neonazi propaganda, Holocaust denial, &
2022-10-29 07:50:59 @franciscoortin @ComputingOviedo @demishassabis @fpa A pleasure to hear about your research!
2022-10-29 07:49:53 RT @invest_asturias: #ArtificialIntelligence | #AI pioneers @ylecun and @demishassabis receive the 2022 Princess of Asturias Award for Tech…
2022-10-29 07:44:45 Picture gallery of the Princess of Asturias Awards ceremony.What an incredible event! https://t.co/PlAdVRKClX
2022-10-29 06:49:18 Many thanks to Princess Leonor, and to the Foundation of the Princess of Asturias Awards. https://t.co/rlnIoL7b73
2022-10-29 06:22:56 @JozsefSzalma [reference needed]
2022-10-29 06:20:01 @tdietterich At the risk of sounding like a nativist, I suspect that the motivation that causes this behavior is evolved rather than planned, learned, and derived from some higher-level notion of empathy.
2022-10-29 05:51:18 Elon is putting himself into an untenable situation, conflicts of interest between content moderation on Twitter and Tesla business in various countries.From FB's former head of security. https://t.co/eqFUAVmT22
2022-10-28 15:35:10 @alscor1966 Hiding being a bowtie to the left of King Felipe.
2022-10-28 15:32:23 RT @martarroyo: Yann Le Cun (@ylecun)Apasionado del concepto de inteligencia desde pequeño, descubrió el preceptrón trás leer un libro…
2022-10-28 15:31:21 @jamesbuchanan27 Lots. Starting with Memory Networks, end-to-end Memory Networks, key-value Memory Networks, all from FAIR.
2022-10-28 15:29:24 @xuanhao_cao @alscor1966 In this case, it is the royal family honoring us!
2022-10-28 15:27:04 @FelixHill84 As long as the regime is a liberal democracy....Doesn't hurt that the royals seem like very nice people.Adam Michnik, another laureate, said "when in Poland, I'm a Jacobin revolutionary. When in Spain, I'm a monarchist."
2022-10-28 15:02:18 RT @fpa: Audience of the King and Queen, the Princess of Asturias, and the Infanta Sofía with the 2022 #PremiosPrincesadeAsturias laureates. @…
2022-10-28 14:53:28 @AlexKontorovich Obviously.
2022-10-28 14:52:47 An audience of the royal family of Spain with the laureates of the Princess of Asturias Award. https://t.co/HhUptU6hwO
2022-10-28 09:51:38 RT @AstroCKragh: Graph Networks show huge potential for physics. But, in astrophysics, are there any *true* graph structures?YES! Causal…
2022-10-28 09:36:45 Effective altruism: limulus version. Doesn't require too many neurons, apparently. https://t.co/csRSRmFwaG
2022-10-28 09:24:28 RT @paulkrugman: That would be the 1950s in which the top tax rate was 91 percent and a third of private-sector workers were union members
2022-10-28 07:30:11 RT @MetaAI: Hey #ECCV2022, whether it was for: Demos of Project Aria. Our presentation on Make-A-Scene. Or maybe you just stopped…
2022-10-28 07:25:52 @loiannog @BotJunkie @ieeeras @2022Iros @nyutandon @nyuniversity Congratulations !
2022-10-27 23:06:06 RT @giacaglia: Amazing to see how many GPUs each organization uses. This seems to be a good proxy of how much they are adopting neural nets…
2022-10-27 22:31:25 @robbensinger I think I'm more to the right, near the Y axis.
2022-10-27 17:28:37 @SahilAk27054390 @demishassabis Sadly Geoff and Yoshua couldn't make the trip.
2022-10-27 17:24:22 s/to/two/
2022-10-27 17:19:02 @brandondamos Fame!
2022-10-27 16:32:13 RT @fpa: The scene is set and everything’s ready for the King, Queen, Princess of Asturias and Infanta Sofía to receive the 2022 Princess o…
2022-10-27 16:25:50 RT @fpa: Everything is ready for the audience that the King and Queen, the Princess of Asturias, and the Infanta Sofía will hold on Friday for the laure…
2022-10-23 13:25:39 RT @DavidDeutschOxf: Why isn't there a White Mirror show that guesses what may happen when the technology that improves our lives goes on t…
2022-10-23 13:23:36 @pmddomingos Errr, also Western Europe never adopted the whole "dictatorship of the proletariat" thing, and largely stuck with liberal social democracy once they tried it (unlike much of the US).
2022-10-22 17:30:09 @TonyZador There is a direct line from Hubel &
2022-10-22 17:28:25 @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-20 21:32:44 "Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution". The NeuroAI manifesto: Neuroscience has long been an important driver of progress in AI. To accelerate progress in AI, we must invest in fundamental research in NeuroAI. https://t.co/JbjeNIhnB7 https://t.co/CiNLUb8tf7
2022-10-20 21:24:43 RT @MetaAI: @SiVola @ylecun Thanks to @HuggingFace, you can try demos for both:Hokkien: https://t.co/RICjW1AacdSpeechMatrix: https://t.co…
2022-10-20 16:37:25 Wednesday 27, in Oviedo. https://t.co/11orpd4v5d
2022-10-20 14:09:16 RT @gabrielpeyre: The celebrated Iterative Soft Thresholding (ISTA) algorithm to solve the LASSO is a special case of the Forward-Backward…
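The retweet above notes that ISTA is a special case of forward-backward splitting: a gradient (forward) step on the smooth quadratic term followed by a proximal (backward) step for the l1 penalty. A minimal sketch, not from the original thread; the function name and parameters are illustrative:

```python
import numpy as np

def ista(A, b, lam, step=None, n_iter=200):
    """Iterative Soft-Thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration takes a gradient step on the smooth term (forward),
    then applies soft-thresholding, the proximal operator of the l1 norm
    (backward).
    """
    if step is None:
        # 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)        # gradient of the quadratic term
        z = x - step * g             # forward (gradient) step
        # backward (prox) step: soft-thresholding shrinks toward zero
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

FISTA adds a momentum term on top of exactly this iteration to accelerate the O(1/k) rate to O(1/k^2).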
2022-10-20 09:32:18 RT @pierrepinna: #MachineLearning@ylecun’s Version of Autonomous Machine Intelligencehttps://t.co/CCDq1qejYOGiving the ability to mac…
2022-10-20 06:24:42 VentureBeat writes about Meta AI's Universal Speech Translator project.https://t.co/p0T9RQQGaL
2022-10-20 06:23:01 RT @boztank: One step closer to the universal translator! Thousands of languages around the world have no standard written form, and today…
2022-10-20 00:53:24 @arnauddsm @adjiboussodieng When 500 years old you are, look as good will you not! https://t.co/ZcgFUr3IQH
2022-10-19 23:54:56 @adjiboussodieng So, if you are 60, that makes me
2022-10-19 21:29:45 RT @BennieMols: 10 years after the breakthrough of Deep Learning, the 9th @HLForum organized a panel discussion (a.o. 3 Turing Award winner…
2022-10-19 20:38:31 RT @jeremyhsu: AI can outplay expert human players in a simplified version of the board game Diplomacy.But the cooperative aspect makes i…
2022-10-19 20:12:08 RT @LerrelPinto: Almost unlabeled data is the “secret sauce” for today's ML, but how do we use uncurated datasets in robot learning?Con…
2022-10-19 20:08:18 Dataset for speech-to-speech translation. https://t.co/ZS4NedARQN
2022-10-19 17:21:39 English <
2022-10-19 15:13:18 @kevin_zakka They got into the habit of calling successive generations of AI hardware after famous natural sites and national parks. They didn't ask any of the numerous French/Spanish speakers at FAIR about that one. Innocent multilingual hash code collision!
2022-10-19 13:11:03 @rasbt Precisely. Constructive debates, which may include harsh critiques, are fine. In fact, they are necessary! But "gotcha seeking" is a waste of time for the target, for the source, and for the community.
2022-10-19 13:04:50 RT @yuxiangw_cs: They say deep learning is just curve fitting. But how good is it in curve fitting exactly? Are DNNs as good as, or even be…
2022-10-19 12:50:09 @JeanRemiKing @theASSC @NicPes @StanDehaene Congratulations Jean-Rémi!
2022-10-19 12:38:30 (exceptionally, the talk will actually start at 11:15am EST)
2022-10-19 12:35:54 Giving a talk today at 11:00am EST in the van Vreeswijk Theoretical Neuroscience Seminar series (VVTNS).https://t.co/PwAUjIOtWL
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it writes? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promotes fallacies like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to come up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction : https://t.co/9NoM8Xhaop
2022-11-15 21:14:59 Apple simply grabbed part of the advertising market for themselves under the guise of protecting their users' privacy. "Privacy is protected if *we* collect the data, not if Meta or Google does it" https://t.co/hMxuDrWjQn
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous C-tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree. Original ideas that turn out to have a long shelf life first appear with results on toy problems. Only later do they get scaled up and shown to work on real problems. That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers. Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation, and number of input and output channels.
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics. Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless. In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors). I said that discussions about whether priors were necessary or not are pointless. Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent. So, asymptotically, it doesn't matter which distance measure you use. Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations. Infinite data contains the results of all possible experiments/interventions. With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
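The Fukunaga & Hostetler retweet above refers to the mean-shift procedure: each point repeatedly moves to the kernel-weighted mean of the data around it, which amounts to following an estimate of the density gradient up to a mode. A minimal illustrative sketch, not from the original thread; names and parameters are assumptions:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Mean-shift mode seeking (in the spirit of Fukunaga & Hostetler, 1975).

    Each query point moves to the Gaussian-kernel-weighted mean of the data,
    i.e. it climbs the estimated density toward a mode. Points that end up
    at the same mode belong to the same cluster.
    """
    x = points.copy()
    for _ in range(n_iter):
        for i in range(len(x)):
            # squared distances from the current position to all data points
            d2 = np.sum((points - x[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
            x[i] = w @ points / w.sum()             # shift to the weighted mean
    return x
```

The bandwidth plays the role of the kernel-density-estimate scale: too small and every point becomes its own mode, too large and distinct clusters merge.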
2022-11-18 22:10:08 RT @MetaAI: Meet MultiRay, Meta’s new platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an…
2022-11-18 21:56:07 @mrgreene1977 You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs
2022-11-18 21:53:05 @Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?
2022-11-18 21:38:39 @mrgreene1977 You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica. The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.
2022-11-18 19:35:22 @mrgreene1977 In what scenarios would this type of generated text actually be harmful?
2022-11-18 19:28:47 @mrgreene1977 You might also want to look at Page 27 of that paper, Section 6, entitled "Toxicity and Bias".
2022-11-18 19:24:18 @mrgreene1977 You literally have no clue what's in the Galactica dataset and are making incorrect assumptions of incompetence. The training set consists of scientific papers and reference materials. You should have had a look at the paper (Appendix A, page 42): https://t.co/ZajoGZonB3 https://t.co/oLJ7ENYIt7
2022-11-18 17:59:13 @Abebab So Galactica is automatically bad because it comes from a "powerful, wealthy" and [according to you] irresponsible corp? We are talking about a *free and open source* demo put together by a small team of *real* people who are distraught by the attacks on their work.
2022-11-18 16:03:52 @yoavgo @GaryMarcus @rasbt @manes @Abebab Also registering on ArXiv requires *some* vetting that rules out troll farms.
2022-11-18 15:57:35 @Pestopublic I've known @Michael_J_Black for decades. I like him and respect him for his work. But I think he is just wrong on this point.
2022-11-18 15:53:54 @AVMiceliBarone @GaryMarcus @yoavgo @rasbt @manes @Abebab Serious scientific journalism involves asking uninvolved third-party experts about the correctness and importance of a new piece of work, *even* if the work has gone through a credible peer review process.
2022-11-18 15:47:37 @Abebab No claim has been walked back. But the team who built Galactica was so distraught by the vitriol on Twitter that they decided to take it down. So, progress towards a system that "stands up to scrutiny" has paused. Is that good?
2022-11-18 15:43:54 Good question. https://t.co/fUZ2JNkfeM
2022-11-18 15:40:41 @boompig Casual misuse for amusement is fine. But one might think serious scientists would be inclined to test (&
2022-11-18 15:32:07 @rasbt @manes @GaryMarcus @Abebab I was the editor for https://t.co/W0chxpFxHd on ArXiv for many years and the current president of the ICLR foundation. I'm pretty familiar with the issues. One can already flood ArXiv with generated nonsense. Galactica in itself will not make this better or worse.
2022-11-18 15:26:40 @SergeThill @Grady_Booch @RWerpachowski The only thing I can say is that you completely misinterpreted the description. Galactica *is* an assistant. As with any tool, you are in charge, in control, and *responsible* for what is produced with its assistance.
2022-11-18 15:02:12 @Grady_Booch @RWerpachowski I used this example on purpose. People will misuse tools and do stupid and dangerous things with them. Yet those driving assistance and collision avoidance systems, overall, reduce collisions by 40% and save lives. Banning them would be dangerous and unethical.
2022-11-18 14:39:13 @Abebab Sure, that's the point of demos. But does the discovery of a flaw need to be accompanied by vitriolic accusations of dangerousness and lack of ethics? The real question is: once perfected, would such a system facilitate the production of scientific content? Would you use it?
2022-11-18 14:30:30 @Grady_Booch @RWerpachowski The point is those systems provide driving assistance but shouldn't be used to drive your car while you sleep in the backseat. Similarly, "writing assistance" shouldn't be used to generate text on random topics without a human keeping their hands on the keyboard at all times.
2022-11-18 14:22:16 @loybeek @RWerpachowski @Grady_Booch @ArthurD3791 So, what you are saying is that we should ban knives because, although they are extremely useful, they also present a risk that people will misuse them?
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it write? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promote fallacies like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to co e up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction : https://t.co/9NoM8Xhaop
2022-11-15 21:14:59 Apple simply grabbed part of the advertising market for themselves under the guise of protecting their users privacy. "Privacy is protected if *we* collect the data, not if Meta or Google does it" https://t.co/hMxuDrWjQn
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous C-tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree.Original ideas that turn out to have a long shelf life first appear with results on toy problems.Only later do they get scaled up and shown to work on real problems.That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers.Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no-longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation and number of input and output channels.
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics. Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless. In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors). I said that discussions about whether priors were necessary or not are pointless. Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent. So, asymptotically, it doesn't matter which distance measure you use. Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations. Infinite data contains the results of all possible experiments/interventions. With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
2022-11-20 02:52:48 @thesasho @guyi @Grady_Booch @Abebab Writing bogus papers, or having them written automatically, will make you a bogus researcher with no future.
2022-11-20 02:50:26 @JacquesThibs Thanks.
2022-11-20 02:06:39 @AlphaSignalAI Thanks for not remaining quiet.
2022-11-20 02:05:15 @Caleb_Speak @andrewthesmart @mrgreene1977 Large Language Models have been widely available for 4 years. What harm have they caused? Have they actually been used to cause any of the catastrophe scenarios that have been listed on these threads?
2022-11-20 01:57:50 @MarielzaTalks @mrgreene1977 Describe a scenario in which Galactica would be used to do so and cause "tremendous harm" more "efficiently" than without it.
2022-11-20 01:49:29 @tomtaroo @Abebab Last time I checked, I wasn't a company. How's that for sarcasm?
2022-11-20 00:35:07 @GaryMarcus @drng @Grady_Booch @Jeff_Aronson @Abebab @mrgreene1977 Garbage in, garbage out.
2022-11-20 00:33:49 @leonpalafox @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Then Galactica would be useless and quickly forgotten. But I'm quite sure it will turn out to be useful. The sad thing is that none of the critics actually tried to use it for that purpose.
2022-11-20 00:31:08 @drng @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Inference based on what evidence?
2022-11-20 00:28:16 @kelvindotchan It would be easier to attempt that with one of the standard open source LLMs (tons of them available from @huggingface). LLMs have been widely available for 4 years. Number of LLM-powered WMDs so far: zero.
2022-11-20 00:23:33 @gwynsoul @Michael_J_Black As much as I like and respect Michael, I totally disagree with him on that point.
2022-11-20 00:22:50 @LeonDerczynski No, we don't.
2022-11-20 00:22:19 @wissam_antoun Exactly.
2022-11-20 00:21:59 @Yann_Le_Du The creators of Galactica were distraught by the vitriol and negativity on Twitter. They couldn't take it any longer. They genuinely believe they produced something very valuable, and so do I. I have thick skin and can take the blows for them
2022-11-20 00:12:53 @guyi @Grady_Booch @Abebab Galactica's main purpose is to predict what you are about to type while writing a scientific paper. It just needs to be accurate enough often enough to save you time and effort when writing the paper.
2022-11-20 00:09:54 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab It doesn't need to "stick to reality" to be both useful and harmless. It just needs to predict what you might be about to write and be accurate enough often enough to help you write your paper and save you time and effort.
2022-11-20 00:02:15 @srijankedia @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian The bottleneck is not in the production but in the difficulty of disseminating the content widely, attracting the attention of the public, and getting people to believe it. Content generated automatically with little human oversight has zero chance of making it.
2022-11-19 23:58:01 @mpimemes @saltig_ai Tell us what you think happened with Cambridge Analytica.
2022-11-19 23:54:32 @jppesky @GaryMarcus @mrgreene1977 Yes, because I'm clearly clueless about how scientific information gets disseminated. And I'm equally clueless about the potential uses of LMs, having worked with them for only 13 years. By the way, LLMs have been widely available for about 4 years. What harm have they caused?
2022-11-19 23:39:45 @saltig_ai Remember 4 years ago how LLMs &
2022-11-18 22:10:08 RT @MetaAI: Meet MultiRay, Meta’s new platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an…
2022-11-18 21:56:07 @mrgreene1977 You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs
2022-11-18 21:53:05 @Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?
2022-11-18 21:38:39 @mrgreene1977 You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica. The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.
2022-11-18 19:35:22 @mrgreene1977 In what scenarios would this type of generated text actually be harmful?
2022-11-18 19:28:47 @mrgreene1977 You might also want to look at Page 27 of that paper, Section 6, entitled "Toxicity and Bias".
2022-11-18 19:24:18 @mrgreene1977 You literally have no clue what's in the Galactica dataset and are making incorrect assumptions of incompetence. The training set consists of scientific papers and reference materials. You should have had a look at the paper (Appendix A, page 42): https://t.co/ZajoGZonB3 https://t.co/oLJ7ENYIt7
2022-11-18 17:59:13 @Abebab So Galactica is automatically bad because it comes from a "powerful, wealthy" and [according to you] irresponsible corp? We are talking about a *free and open source* demo put together by a small team of *real* people who are distraught by the attacks on their work.
2022-11-18 16:03:52 @yoavgo @GaryMarcus @rasbt @manes @Abebab Also registering on ArXiv requires *some* vetting that rules out troll farms.
2022-11-18 15:57:35 @Pestopublic I've known @Michael_J_Black for decades. I like him and respect him for his work. But I think he is just wrong on this point.
2022-11-18 15:53:54 @AVMiceliBarone @GaryMarcus @yoavgo @rasbt @manes @Abebab Serious scientific journalism involves asking uninvolved third-party experts about the correctness and importance of a new piece of work, *even* if the work has gone through a credible peer review process.
2022-11-18 15:47:37 @Abebab No claim has been walked back. But the team who built Galactica was so distraught by the vitriol on Twitter that they decided to take it down. So, progress towards a system that "stands up to scrutiny" has paused. Is that good?
2022-11-18 15:43:54 Good question. https://t.co/fUZ2JNkfeM
2022-11-18 15:40:41 @boompig Casual misuse for amusement is fine. But one might think serious scientists would be inclined to test (&
2022-11-18 15:32:07 @rasbt @manes @GaryMarcus @Abebab I was the editor for https://t.co/W0chxpFxHd on ArXiv for many years and the current president of the ICLR foundation. I'm pretty familiar with the issues. One can already flood ArXiv with generated nonsense. Galactica in itself will not make this better or worse.
2022-11-18 15:26:40 @SergeThill @Grady_Booch @RWerpachowski The only thing I can say is that you completely misinterpreted the description. Galactica *is* an assistant. As with any tool, you are in charge, in control, and *responsible* for what is produced with its assistance.
2022-11-18 15:02:12 @Grady_Booch @RWerpachowski I used this example on purpose. People will misuse tools and do stupid and dangerous things with them. Yet those driving assistance and collision avoidance systems, overall, reduce collisions by 40% and save lives. Banning them would be dangerous and unethical.
2022-11-18 14:39:13 @Abebab Sure, that's the point of demos. But does the discovery of a flaw need to be accompanied by vitriolic accusations of dangerousness and lack of ethics? The real question is: once perfected, would such a system facilitate the production of scientific content? Would you use it?
2022-11-18 14:30:30 @Grady_Booch @RWerpachowski The point is those systems provide driving assistance but shouldn't be used to drive your car while you sleep in the backseat. Similarly, "writing assistance" shouldn't be used to generate text on random topics without a human keeping their hands on the keyboard at all times.
2022-11-18 14:22:16 @loybeek @RWerpachowski @Grady_Booch @ArthurD3791 So, what you are saying is that we should ban knives because, although they are extremely useful, they also present a risk that people will misuse them?
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 The Galactica demo is offline for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it writes? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promotes fallacies like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to come up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction : https://t.co/9NoM8Xhaop
2022-11-20 23:46:14 @yoavgo @ykilcher What? What 3rd party?
2022-11-20 23:45:36 @ykilcher You just wait. Or don't and download the open source release.
2022-11-20 23:01:59 @untitled01ipynb @rsalakhu @GaryMarcus Well, Gary is the one doing the attention-seeking trolling. I sometimes countertroll, but only very rarely.
2022-11-20 22:59:19 @horstao @rsalakhu Don't worry, Galactica is very much alive. And thank you for the kind words
2022-11-20 22:29:57 @LeonDerczynski @Aspie96 @artistexyz Only if you misuse it. Garbage in, garbage out.
2022-11-20 22:28:02 @krivokuca @dela3499 @lexfridman He has been banned from Twitter and FB since January!
2022-11-20 22:14:51 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab I guess you'll be able to judge for yourself at some point.
2022-11-20 20:24:50 @oliverobst @rsalakhu It is actually the case. Also a high level of openness and an adherence to the "release early, release often" mantra popularized by the open source movement. Certainly night and day compared with Apple, where you can't even ask your office neighbors what they work on.
2022-11-20 20:19:30 @rsalakhu You don't understand, Russ. This is a kind of weekend hobby for me. But you're right: I can't imagine this kind of thing happening at Apple. I mean, until recently Apple employees weren't even allowed to show their affiliation on their name tags at conferences.
2022-11-20 20:12:01 @dela3499 @lexfridman It is quite obvious that Trump-style authoritarianism has *not* been "countered by rational arguments and kept in check by public opinion". We are way past that stage. Trump keeps denying the validity of elections and other factual truths, and has attempted a coup.
2022-11-20 19:57:48 @saplaksnis @kelvindotchan @huggingface Has that actually happened? If LLM made that so easy, it should have happened by now.
2022-11-20 19:55:13 @artistexyz @LeonDerczynski Find a single instance of me ridiculing causal inference. I'll wait. Incidentally, a number of people at FAIR have worked, and still work, on causal inference, including my old friend and colleague Léon Bottou.
2022-11-20 19:33:59 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Do you think the demo would have been released if it were just a time waster?
2022-11-20 17:56:55 @vokaysh @AlphaSignalAI @omarsar0 True on Twitter. Not on other social media in my experience.
2022-11-20 17:56:20 @KarlXOblique @AlphaSignalAI @omarsar0 You seem awfully fond of promptly throwing the bath water without checking if there is a baby in it.
2022-11-20 17:53:53 @ASteckley @CriticalAI @GaryMarcus @TonyZador @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness. Galactica is an LLM trained on scientific writing and equipped with toxicity filters. It is reasonable to expect it to be even less "dangerous" than your average LLM.
2022-11-20 17:48:26 @krivokuca Public rational discourse completely broke down under Trump. Obviously, the public discourse antidote already failed to "keep them in check by public opinion". Time for more drastic measures.
2022-11-20 17:45:18 @LudwigArisleib I submit that an authoritarian leader who (1) ignores the result of a fair election, (2) attempts to stay in power by force, (3) ignores all truth and rational argument (and encourages his followers to do the same), absolutely *does* fit Popper's criteria.
2022-11-20 17:36:32 @IanFelipeSays @lexfridman I look at who wants to preserve the principles of liberal democracy. Authoritarianism, summarily ignoring the results of an election, and attempting to stay in power by force, is obviously not on that side.
2022-11-20 17:34:33 @Kubilay_1453 @lexfridman Which side wanted to replace what remains a liberal democracy (albeit a flawed one) with an authoritarian leader who ignored the result of a fair election and attempted to remain in power by force?
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous C-tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree.Original ideas that turn out to have a long shelf life first appear with results on toy problems.Only later do they get scaled up and shown to work on real problems.That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers.Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no-longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation and number of input and output channels.
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics.Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless.In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors).I said that discussions about whether priors were necessary or not are pointless.Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent.So, asymptotically, it doesn't matter which distance measure you use.Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations.Infinite data contains the results of all possible experiments/interventions.With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
2022-11-22 04:42:27 @Miles_Brundage s/conversion/conversation/
2022-11-22 04:38:26 @csabaveres @saltig_ai It might very well be. But if we are talking about the US, the UK or Australia, I would join former Australia PM Kevin Rudd and blame Rupert Murdoch. LLM effects are way below the noise floor, if they exist at all. https://t.co/difREgo0Ct
2022-11-22 04:19:37 @Miles_Brundage Anyway, thanks for holding a rational conversion without turning it into a shouting match with accusations of ill intent or stupidity.
2022-11-22 04:12:31 @ColHilbertSpace @GaelVaroquaux Working at the largest social network company in the world, I'm quite familiar with the concept of "tsunami of BS" Thankfully, it's being held back by a *lot more* than the mere difficulty of writing authoritative sounding prose. At least on FB.
2022-11-22 04:06:16 @Miles_Brundage The point is, if LLMs could so easily be used flood the world with harmful disinformation, it would have happened already. Lots of bots spew misinformation on line. But so far, they have been little more than an annoyance. They are taken down on FB &
2022-11-22 03:59:09 @Miles_Brundage For Galactica specifically, look at section 6 entitled "toxicity &
2022-11-22 03:53:29 @Miles_Brundage Now there have been *enormous* benefits to the use of large-scale transformers (different from generative LLMs, but the same underlying tech) particularly in language translation and multilingual content moderation.
2022-11-22 03:50:33 @Miles_Brundage LLMs have now been widely available for 4 years. That's plenty of time to observe any deleterious side effect. What are they? I'm asking about *actual* harm, not hypothetical/potential harm.
2022-11-22 03:48:01 @Miles_Brundage My comment was about LLMs in general, not Galactica in particular. You have studied their effects and published about it. After a short waiting period, your employer decided to release LLMs for general use. So you must have concluded that the benefits greatly outweigh the risks.
2022-11-22 01:51:22 @MereSophistry @cgarciae88 How about a 60-page technical paper? https://t.co/ZajoGZoVqB
2022-11-22 01:34:45 @falsalem76 Indeed they are.
2022-11-21 21:22:11 @jessyseonoob Even that would only attract undeserved attention to him
2022-11-21 21:17:38 @3DTOPO @cgarciae88 Just curious, what's your experience in writing scientific papers?
2022-11-21 21:01:08 One of those dudes is appealing to journalists, claiming that I'm refusing to answer "the critical question", which is oh so unethical and revealing of my moral turpitude and the purported incompetence of my employer! But I simply refuse to answer *any* of *his* provocations.
2022-11-21 20:36:17 @LilithByTheSea There are literally 100s of very talented people at Meta working on ML for content moderation. In fact, there is a whole division called "integrity" working on this + security &
2022-11-21 20:29:43 @WickedViper23 It's a weekend hobby
2022-11-21 19:41:22 There are people asking me the same question multiple times on Twitter. They want their followers to believe that I don't respond because I don't have good answers. I have answers. But I don't engage because it always turns out to be a giant waste of time. Trolls gonna troll.
2022-11-21 19:34:23 @LilithByTheSea @johann_p I think you grossly underestimate the benefits, and vastly overestimate the risks.
2022-11-21 19:18:01 @chris_jwala @bartholmberg @pmddomingos I'm just about as atheist as Sam Harris, and as rationalist as Steven Pinker (though I disagree with him on nativism).
2022-11-21 19:13:42 @LilithByTheSea @yoavgo While this type of misinformation is dangerous, LLMs have had no role in producing it. LLMs have been around for 4 years, but their use for such nefarious purpose is entirely hypothetical. In fact, large-scale NLP systems have played a big role in *suppressing* it.
2022-11-21 14:19:46 Any opinion on this? https://t.co/GIDvn5s5IX
2022-11-20 23:46:14 @yoavgo @ykilcher What? What 3rd party?
2022-11-20 23:45:36 @ykilcher You just wait. Or don't and download the open source release.
2022-11-20 23:01:59 @untitled01ipynb @rsalakhu @GaryMarcus Well, Gary is the one doing the attention-seeking trolling. I sometimes countertroll, but only very rarely.
2022-11-20 22:59:19 @horstao @rsalakhu Don't worry, Galactica is very much alive. And thank you for the kind words
2022-11-20 22:29:57 @LeonDerczynski @Aspie96 @artistexyz Only if you misuse it. Garbage in, garbage out.
2022-11-20 22:28:02 @krivokuca @dela3499 @lexfridman He has been banned from Twitter and FB since January!
2022-11-20 22:14:51 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab I guess you'll be able to judge for yourself at some point.
2022-11-20 20:24:50 @oliverobst @rsalakhu It is actually the case. Also a high level of openness and an adherence to the "release early, release often" mantra popularized by the open source movement. Certainly night and day compared with Apple, where you can't even ask your office neighbors what they work on
2022-11-20 20:19:30 @rsalakhu You don't understand, Russ. This is a kind of weekend hobby for me. But you're right: I can't imagine this kind of thing happening at Apple. I mean, until recently Apple employees weren't even allowed to show their affiliation on their name tags at conferences
2022-11-20 20:12:01 @dela3499 @lexfridman It is quite obvious that Trump-style authoritarianism has *not* been "countered by rational arguments and kept in check by public opinion". We are way past that stage. Trump keeps denying the validity of elections and other factual truths, and has attempted a coup.
2022-11-20 19:57:48 @saplaksnis @kelvindotchan @huggingface Has that actually happened? If LLM made that so easy, it should have happened by now.
2022-11-20 19:55:13 @artistexyz @LeonDerczynski Find a single instance of me ridiculing causal inference. I'll wait. Incidentally, a number of people at FAIR have worked, and still work, on causal inference, including my old friend and colleague Léon Bottou.
2022-11-20 19:33:59 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Do you think the demo would have been released if it were just a time waster?
2022-11-20 17:56:55 @vokaysh @AlphaSignalAI @omarsar0 True on Twitter. Not on other social media in my experience.
2022-11-20 17:56:20 @KarlXOblique @AlphaSignalAI @omarsar0 You seem awfully fond of promptly throwing the bath water without checking if there is a baby in it.
2022-11-20 17:53:53 @ASteckley @CriticalAI @GaryMarcus @TonyZador @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness. Galactica is an LLM trained on scientific writing and equipped with toxicity filters. It is reasonable to expect it to be even less "dangerous" than your average LLM.
2022-11-20 17:48:26 @krivokuca Public rational discourse completely broke down under Trump. Obviously, the public discourse antidote already failed to "keep them in check by public opinion". Time for more drastic measures.
2022-11-20 17:45:18 @LudwigArisleib I submit that an authoritarian leader who (1) ignores the result of a fair election, (2) attempts to stay in power by force, (3) ignores all truth and rational argument (and encourages his followers to do the same), absolutely *does* fit Popper's criteria.
2022-11-20 17:36:32 @IanFelipeSays @lexfridman I look at who wants to preserve the principles of liberal democracy. Authoritarianism, summarily ignoring the results of an election, and attempting to stay in power by force, is obviously not on that side.
2022-11-20 17:34:33 @Kubilay_1453 @lexfridman Which side wanted to replace what remains a liberal democracy (albeit a flawed one) by an authoritarian leader that ignores the result of a fair election and attempted to remain in power by force?
2022-11-20 02:52:48 @thesasho @guyi @Grady_Booch @Abebab Writing bogus papers, or having them written automatically, will make you a bogus researcher with no future.
2022-11-20 02:50:26 @JacquesThibs Thanks.
2022-11-20 02:06:39 @AlphaSignalAI Thanks for not remaining quiet.
2022-11-20 02:05:15 @Caleb_Speak @andrewthesmart @mrgreene1977 Large Language Models have been widely available for 4 years. What harm have they caused? Have they actually been used to cause any of the catastrophe scenarios that have been listed on these threads?
2022-11-20 01:57:50 @MarielzaTalks @mrgreene1977 Describe a scenario in which Galactica would be used to do so and cause "tremendous harm" more "efficiently" than without it.
2022-11-20 01:49:29 @tomtaroo @Abebab Last time I checked, I wasn't a company. How's that for sarcasm?
2022-11-20 00:35:07 @GaryMarcus @drng @Grady_Booch @Jeff_Aronson @Abebab @mrgreene1977 Garbage in, garbage out.
2022-11-20 00:33:49 @leonpalafox @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Then Galactica would be useless and quickly forgotten. But I'm quite sure it will turn out to be useful. The sad thing is that none of the critics actually tried to use it for that purpose.
2022-11-20 00:31:08 @drng @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Inference based on what evidence?
2022-11-20 00:28:16 @kelvindotchan It would be easier to attempt that with one of the standard open source LLMs (tons of them available from @huggingface). LLMs have been widely available for 4 years. Number of LLM-powered WMDs so far: zero.
2022-11-20 00:23:33 @gwynsoul @Michael_J_Black As much as I like and respect Michael, I totally disagree with him on that point.
2022-11-20 00:22:50 @LeonDerczynski No, we don't.
2022-11-20 00:22:19 @wissam_antoun Exactly.
2022-11-20 00:21:59 @Yann_Le_Du The creators of Galactica were distraught by the vitriol and negativity on Twitter. They couldn't take it any longer. They genuinely believe they produced something very valuable, and so do I. I have thick skin and can take the blows for them
2022-11-20 00:12:53 @guyi @Grady_Booch @Abebab Galactica's main purpose is to predict what you are about to type while writing a scientific paper. It just needs to be accurate enough often enough to save you time and effort when writing the paper.
2022-11-20 00:09:54 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab It doesn't need to "stick to reality" to be both useful and harmless. It just needs to predict what you might be about to write and be accurate enough often enough to help you write your paper and save you time and effort.
2022-11-20 00:02:15 @srijankedia @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian The bottleneck is not in the production but in the difficulty of disseminating the content widely, attracting the attention of the public, and getting people to believe it. Content generated automatically with little human oversight has zero chance of making it.
2022-11-19 23:58:01 @mpimemes @saltig_ai Tell us what you think happened with Cambridge Analytica.
2022-11-19 23:54:32 @jppesky @GaryMarcus @mrgreene1977 Yes, because I'm clearly clueless about how scientific information gets disseminated. And I'm equally clueless about the potential uses of LMs, having worked with them for only 13 years. By the way, LLMs have been widely available for about 4 years. What harm have they caused?
2022-11-19 23:39:45 @saltig_ai Remember 4 years ago how LLMs &
2022-11-18 22:10:08 RT @MetaAI: Meet MultiRay, Meta’s new platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an…
2022-11-18 21:56:07 @mrgreene1977 You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs
2022-11-18 21:53:05 @Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?
2022-11-18 21:38:39 @mrgreene1977 You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica. The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.
2022-11-18 19:35:22 @mrgreene1977 In what scenarios would this type of generated text actually be harmful?
2022-11-18 19:28:47 @mrgreene1977 You might also want to look at Page 27 of that paper, Section 6, entitled "Toxicity and Bias".
2022-11-18 19:24:18 @mrgreene1977 You literally have no clue what's in the Galactica dataset and are making incorrect assumptions of incompetence. The training set consists of scientific papers and reference materials. You should have had a look at the paper (Appendix A, page 42): https://t.co/ZajoGZonB3 https://t.co/oLJ7ENYIt7
2022-11-18 17:59:13 @Abebab So Galactica is automatically bad because it comes from a "powerful, wealthy and [according to you] irresponsible" corp? We are talking about a *free and open source* demo put together by a small team of *real* people who are distraught by the attacks on their work.
2022-11-18 16:03:52 @yoavgo @GaryMarcus @rasbt @manes @Abebab Also registering on ArXiv requires *some* vetting that rules out troll farms.
2022-11-18 15:57:35 @Pestopublic I've known @Michael_J_Black for decades. I like him and respect him for his work. But I think he is just wrong on this point.
2022-11-18 15:53:54 @AVMiceliBarone @GaryMarcus @yoavgo @rasbt @manes @Abebab Serious scientific journalism involves asking uninvolved third-party experts about the correctness and importance of a new piece of work, *even* if the work has gone through a credible peer review process.
2022-11-18 15:47:37 @Abebab No claim has been walked back. But the team who built Galactica was so distraught by the vitriol on Twitter that they decided to take it down. So, progress towards a system that "stands up to scrutiny" has paused. Is that good?
2022-11-18 15:43:54 Good question. https://t.co/fUZ2JNkfeM
2022-11-18 15:40:41 @boompig Casual misuse for amusement is fine. But one might think serious scientists would be inclined to test (&
2022-11-18 15:32:07 @rasbt @manes @GaryMarcus @Abebab I was the editor for https://t.co/W0chxpFxHd on ArXiv for many years and the current president of the ICLR foundation. I'm pretty familiar with the issues. One can already flood ArXiv with generated non-sense. Galactica in itself will not make this better or worse.
2022-11-18 15:26:40 @SergeThill @Grady_Booch @RWerpachowski The only thing I can say is that you completely misinterpreted the description. Galactica *is* an assistant. As with any tool, you are in charge, in control, and *responsible* for what is produced with its assistance.
2022-11-18 15:02:12 @Grady_Booch @RWerpachowski I used this example on purpose. People will misuse tools and do stupid and dangerous things with them. Yet those driving assistance and collision avoidance systems, overall, reduce collisions by 40% and save lives. Banning them would be dangerous and unethical.
2022-11-18 14:39:13 @Abebab Sure, that's the point of demos. But does the discovery of a flaw need to be accompanied by vitriolic accusations of dangerousness and lack of ethics? The real question is: once perfected, would such a system facilitate the production of scientific content? Would you use it?
2022-11-18 14:30:30 @Grady_Booch @RWerpachowski The point is those systems provide driving assistance but shouldn't be used to drive your car while you sleep on the backseat. Similarly, "writing assistance" shouldn't be used to generate text on random topics without a human keeping their hands on the keyboard at all times.
2022-11-18 14:22:16 @loybeek @RWerpachowski @Grady_Booch @ArthurD3791 So, what you are saying is that we should ban knives because, although they are extremely useful, they also present a risk that people will misuse them?
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 Galactica demo is offline for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it writes? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promotes fallacies, like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to come up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction : https://t.co/9NoM8Xhaop
2022-11-15 21:14:59 Apple simply grabbed part of the advertising market for themselves under the guise of protecting their users' privacy. "Privacy is protected if *we* collect the data, not if Meta or Google does it" https://t.co/hMxuDrWjQn
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous Celtic tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree. Original ideas that turn out to have a long shelf life first appear with results on toy problems. Only later do they get scaled up and shown to work on real problems. That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers. Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation, and number of input and output channels.
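The reason a diagram box labelled with just those few numbers suffices is that they fully determine the layer's geometry via the standard convolution shape arithmetic. A minimal sketch (the helper name `conv_out_size` is illustrative, not from any paper):

```python
def conv_out_size(n, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size along one dimension of a convolution, given
    exactly the hyper-parameters an abstract diagram box lists."""
    effective_kernel = dilation * (kernel - 1) + 1  # dilation spreads the taps
    return (n + 2 * padding - effective_kernel) // stride + 1

# A box reading "Conv 3x3, stride 2, pad 1, 64->128 channels" fully
# determines the shape transformation: 224 -> 112 spatially, 128 channels.
print(conv_out_size(224, kernel=3, stride=2, padding=1))  # 112
```

Everything else about the layer is implied, so later diagrams could drop the explanatory detail.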
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics. Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless. In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors). I said that discussions about whether priors were necessary or not are pointless. Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent. So, asymptotically, it doesn't matter which distance measure you use. Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations. Infinite data contains the results of all possible experiments/interventions. With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
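The (truncated) RT above refers to the mean-shift algorithm, which iteratively moves a point toward the kernel-weighted average of the data, following the estimated gradient of the density toward a mode. A toy one-dimensional sketch with a Gaussian kernel (function names are illustrative, not from the Fukunaga-Hostetler paper):

```python
import math

def mean_shift_step(x, points, bandwidth=1.0):
    """One mean-shift update: move x to the kernel-weighted mean of the
    data, i.e. a step along the estimated density gradient."""
    weights = [math.exp(-((p - x) ** 2) / (2 * bandwidth ** 2)) for p in points]
    return sum(w * p for w, p in zip(weights, points)) / sum(weights)

def mean_shift(x, points, bandwidth=1.0, iters=50):
    for _ in range(iters):
        x = mean_shift_step(x, points, bandwidth)
    return x

# Points clustered around 0 and around 10; a start near 9 climbs to the
# mode of the right-hand cluster (the distant cluster gets ~zero weight).
data = [-0.5, 0.0, 0.4, 9.6, 10.0, 10.3]
mode = mean_shift(9.0, data, bandwidth=1.0)  # converges near 10
```

The same update, run from every data point, is what turns mean-shift into a clustering method.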
2022-11-22 23:29:20 @tdietterich @leonpalafox @wightmanr @EMostaque @paperswithcode @MetaAI Indeed. In fact, since Galactica was trained on scientific papers, it's likely to be more benign than other LLMs. FYI: OPT-175b weights aren't available, but the smaller OPT weights are.
2022-11-22 23:21:43 A video of CICERO in action, and a link to the Science paper in this thread by the CICERO team lead @polynoamial . https://t.co/l7JAqJNJUh
2022-11-22 23:17:42 RT @polynoamial: 3 years ago my teammates and I set out toward a goal that seemed like science fiction: to build an AI that could strategic…
2022-11-22 20:55:15 Paper here https://t.co/krV5zaxh6p
2022-11-22 17:05:40 @aimatej At least the language is grounded in an underlying reality.
2022-11-22 17:04:06 @jarkko_kuoppala As the video points out, it turns out that lying is not a good strategy to win at Diplomacy.
2022-11-22 15:56:18 @chickenbreast68 @Grady_Booch @thesasho @guyi @Abebab Exactly.
2022-11-22 15:53:10 Correction "top 10%", not "top 10".
2022-11-22 15:49:30 CICERO plays the strategy game Diplomacy at human level. It is able to use language to build relationships with humans and collaborate with them to achieve a goal. https://t.co/eg3OIsXzpx
2022-11-22 15:45:21 Big AI milestone today: CICERO, an AI agent that can negotiate and cooperate with people. It is the first AI system to achieve human-level performance in the popular strategy game Diplomacy. Cicero ranked in the top 10 of participants on https://t.co/dC0sCIWAc8 https://t.co/Kb0InCVMX4
2022-11-22 13:59:12 @TheRandomMtrix @GaelVaroquaux And China uses ConvNets for massive face recognition. But would those things have been sufficient reasons to keep ConvNets and transformers under wraps? (Assuming that was even possible) Our best protection against abusive use of technology is the strength of democratic institutions
2022-11-22 13:36:32 @TheRandomMtrix @GaelVaroquaux China and Russia simply block FB &
2022-11-22 13:32:26 @davidmanheim @GaelVaroquaux The only difference between a BERT-style pre-trained transformer and an LLM is which part of the input you mask during training. For an LLM, you just mask the last word.
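The masking difference described above can be made concrete with a toy data-preparation sketch: a BERT-style objective masks random positions and reconstructs them from both sides, while an LLM-style objective "masks" everything after the current position and predicts the next token. Helper names are hypothetical, and real pipelines add details (e.g. random token replacement) omitted here:

```python
import random

def mlm_inputs(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """BERT-style: hide a random subset of positions; the model must
    reconstruct them using context from BOTH sides."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < p:
            inputs.append(mask_token)
            targets.append(tok)   # loss only at masked positions
        else:
            inputs.append(tok)
            targets.append(None)  # no loss here
    return inputs, targets

def causal_lm_pairs(tokens):
    """LLM-style: at each position the 'mask' is simply everything to the
    right; the model predicts the last word of each growing prefix."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

toks = ["the", "cat", "sat", "on", "the", "mat"]
pairs = causal_lm_pairs(toks)
# pairs[2] is (["the", "cat", "sat"], "on"): strictly left-to-right context.
```

Same transformer underneath; only which tokens are hidden during training differs.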
2022-11-22 04:42:27 @Miles_Brundage s/conversion/conversation/
2022-11-22 04:38:26 @csabaveres @saltig_ai It might very well be. But if we are talking about the US, the UK or Australia, I would join former Australia PM Kevin Rudd and blame Rupert Murdoch. LLM effects are way below the noise floor, if they exist at all. https://t.co/difREgo0Ct
2022-11-22 04:19:37 @Miles_Brundage Anyway, thanks for holding a rational conversion without turning it into a shouting match with accusations of ill intent or stupidity.
2022-11-22 04:12:31 @ColHilbertSpace @GaelVaroquaux Working at the largest social network company in the world, I'm quite familiar with the concept of "tsunami of BS". Thankfully, it's being held back by a *lot more* than the mere difficulty of writing authoritative-sounding prose. At least on FB.
2022-11-22 04:06:16 @Miles_Brundage The point is, if LLMs could so easily be used to flood the world with harmful disinformation, it would have happened already. Lots of bots spew misinformation online. But so far, they have been little more than an annoyance. They are taken down on FB &
2022-11-22 03:59:09 @Miles_Brundage For Galactica specifically, look at section 6 entitled "toxicity &
2022-11-22 03:53:29 @Miles_Brundage Now there have been *enormous* benefits to the use of large-scale transformers (different from generative LLMs, but the same underlying tech) particularly in language translation and multilingual content moderation.
2022-11-22 03:50:33 @Miles_Brundage LLMs have now been widely available for 4 years. That's plenty of time to observe any deleterious side effect. What are they? I'm asking about *actual* harm, not hypothetical/potential harm.
2022-11-22 03:48:01 @Miles_Brundage My comment was about LLMs in general, not Galactica in particular. You have studied their effects and published about it. After a short waiting period, your employer decided to release LLMs for general use. So you must have concluded that the benefits greatly outweigh the risks.
2022-11-22 01:51:22 @MereSophistry @cgarciae88 How about a 60-page technical paper? https://t.co/ZajoGZoVqB
2022-11-22 01:34:45 @falsalem76 Indeed they are.
2022-11-21 21:22:11 @jessyseonoob Even that would only attract undeserved attention to him
2022-11-21 21:17:38 @3DTOPO @cgarciae88 Just curious, what's your experience in writing scientific papers?
2022-11-21 21:01:08 One of those dudes is appealing to journalists, claiming that I'm refusing to answer "the critical question", which is oh so unethical and revealing of my moral turpitude and the purported incompetence of my employer! But I simply refuse to answer *any* of *his* provocations.
2022-11-21 20:36:17 @LilithByTheSea There are literally 100s of very talented people at Meta working on ML for content moderation. In fact, there is a whole division called "integrity" working on this + security &
2022-11-21 20:29:43 @WickedViper23 It's a weekend hobby
2022-11-21 19:41:22 There are people asking me the same question multiple times on Twitter. They want their followers to believe that I don't respond because I don't have good answers. I have answers. But I don't engage because it always turns out to be a giant waste of time. Trolls gonna troll.
2022-11-21 19:34:23 @LilithByTheSea @johann_p I think you grossly underestimate the benefits, and vastly overestimate the risks.
2022-11-21 19:18:01 @chris_jwala @bartholmberg @pmddomingos I'm just about as atheist as Sam Harris, and as rationalist as Steven Pinker (though I disagree with him on nativism).
2022-11-21 19:13:42 @LilithByTheSea @yoavgo While this type of misinformation is dangerous, LLMs have had no role in producing it. LLMs have been around for 4 years, but their use for such nefarious purpose is entirely hypothetical. In fact, large-scale NLP systems have played a big role in *suppressing* it.
2022-11-21 14:19:46 Any opinion on this? https://t.co/GIDvn5s5IX
2022-11-20 23:46:14 @yoavgo @ykilcher What? What 3rd party?
2022-11-20 23:45:36 @ykilcher You just wait. Or don't and download the open source release.
2022-11-20 23:01:59 @untitled01ipynb @rsalakhu @GaryMarcus Well, Gary is the one doing the attention-seeking trolling. I sometimes countertroll, but only very rarely.
2022-11-20 22:59:19 @horstao @rsalakhu Don't worry, Galactica is very much alive. And thank you for the kind words
2022-11-20 22:29:57 @LeonDerczynski @Aspie96 @artistexyz Only if you misuse it. Garbage in, garbage out.
2022-11-20 22:28:02 @krivokuca @dela3499 @lexfridman He has been banned from Twitter and FB since January!
2022-11-20 22:14:51 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab I guess you'll be able to judge for yourself at some point.
2022-11-20 20:24:50 @oliverobst @rsalakhu It is actually the case. Also a high level of openness and an adherence to the "release early, release often" mantra popularized by the open source movement. Certainly night and day compared with Apple, where you can't even ask your office neighbors what they work on.
2022-11-20 20:19:30 @rsalakhu You don't understand, Russ. This is a kind of weekend hobby for me. But you're right: I can't imagine this kind of thing happening at Apple. I mean, until recently Apple employees weren't even allowed to show their affiliation on their name tags at conferences.
2022-11-20 20:12:01 @dela3499 @lexfridman It is quite obvious that Trump-style authoritarianism has *not* been "countered by rational arguments and kept in check by public opinion". We are way past that stage. Trump keeps denying the validity of elections and other factual truths, and has attempted a coup.
2022-11-20 19:57:48 @saplaksnis @kelvindotchan @huggingface Has that actually happened? If LLM made that so easy, it should have happened by now.
2022-11-20 19:55:13 @artistexyz @LeonDerczynski Find a single instance of me ridiculing causal inference. I'll wait. Incidentally, a number of people at FAIR have worked, and still work, on causal inference, including my old friend and colleague Léon Bottou.
2022-11-20 19:33:59 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Do you think the demo would have been released if it were just a time waster?
2022-11-20 17:56:55 @vokaysh @AlphaSignalAI @omarsar0 True on Twitter. Not on other social media in my experience.
2022-11-20 17:56:20 @KarlXOblique @AlphaSignalAI @omarsar0 You seem awfully fond of promptly throwing the bath water without checking if there is a baby in it.
2022-11-20 17:53:53 @ASteckley @CriticalAI @GaryMarcus @TonyZador @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness. Galactica is an LLM trained on scientific writing and equipped with toxicity filters. It is reasonable to expect it to be even less "dangerous" than your average LLM.
2022-11-20 17:48:26 @krivokuca Public rational discourse completely broke down under Trump. Obviously, the public discourse antidote already failed to "keep them in check by public opinion". Time for more drastic measures.
2022-11-20 17:45:18 @LudwigArisleib I submit that an authoritarian leader who (1) ignores the result of a fair election, (2) attempts to stay in power by force, (3) ignores all truth and rational argument (and encourages his followers to do the same), absolutely *does* fit Popper's criteria.
2022-11-20 17:36:32 @IanFelipeSays @lexfridman I look at who wants to preserve the principles of liberal democracy. Authoritarianism, summarily ignoring the results of an election, and attempting to stay in power by force, is obviously not on that side.
2022-11-20 17:34:33 @Kubilay_1453 @lexfridman Which side wanted to replace what remains a liberal democracy (albeit a flawed one) with an authoritarian leader who ignored the result of a fair election and attempted to remain in power by force?
2022-11-20 02:52:48 @thesasho @guyi @Grady_Booch @Abebab Writing bogus papers, or having them written automatically, will make you a bogus researcher with no future.
2022-11-20 02:50:26 @JacquesThibs Thanks.
2022-11-20 02:06:39 @AlphaSignalAI Thanks for not remaining quiet.
2022-11-20 02:05:15 @Caleb_Speak @andrewthesmart @mrgreene1977 Large Language Models have been widely available for 4 years. What harm have they caused? Have they actually been used to cause any of the catastrophe scenarios that have been listed on these threads?
2022-11-20 01:57:50 @MarielzaTalks @mrgreene1977 Describe a scenario in which Galactica would be used to do so and cause "tremendous harm" more "efficiently" than without it.
2022-11-20 01:49:29 @tomtaroo @Abebab Last time I checked, I wasn't a company. How's that for sarcasm?
2022-11-20 00:35:07 @GaryMarcus @drng @Grady_Booch @Jeff_Aronson @Abebab @mrgreene1977 Garbage in, garbage out.
2022-11-20 00:33:49 @leonpalafox @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Then Galactica would be useless and quickly forgotten. But I'm quite sure it will turn out to be useful. The sad thing is that none of the critics actually tried to use it for that purpose.
2022-11-20 00:31:08 @drng @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Inference based on what evidence?
2022-11-20 00:28:16 @kelvindotchan It would be easier to attempt that with one of the standard open source LLMs (tons of them available from @huggingface). LLMs have been widely available for 4 years. Number of LLM-powered WMDs so far: zero.
2022-11-20 00:23:33 @gwynsoul @Michael_J_Black As much as I like and respect Michael, I totally disagree with him on that point.
2022-11-20 00:22:50 @LeonDerczynski No, we don't.
2022-11-20 00:22:19 @wissam_antoun Exactly.
2022-11-20 00:21:59 @Yann_Le_Du The creators of Galactica were distraught by the vitriol and negativity on Twitter. They couldn't take it any longer. They genuinely believe they produced something very valuable, and so do I. I have thick skin and can take the blows for them
2022-11-20 00:12:53 @guyi @Grady_Booch @Abebab Galactica's main purpose is to predict what you are about to type while writing a scientific paper. It just needs to be accurate enough often enough to save you time and effort when writing the paper.
2022-11-20 00:09:54 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab It doesn't need to "stick to reality" to be both useful and harmless. It just needs to predict what you might be about to write and be accurate enough often enough to help you write your paper and save you time and effort.
2022-11-20 00:02:15 @srijankedia @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian The bottleneck is not in the production but in the difficulty of disseminating the content widely, attracting the attention of the public, and getting people to believe it. Content generated automatically with little human oversight has zero chance of making it.
2022-11-19 23:58:01 @mpimemes @saltig_ai Tell us what you think happened with Cambridge Analytica.
2022-11-19 23:54:32 @jppesky @GaryMarcus @mrgreene1977 Yes, because I'm clearly clueless about how scientific information gets disseminated. And I'm equally clueless about the potential uses of LMs, having worked with them for only 13 years. By the way, LLMs have been widely available for about 4 years. What harm have they caused?
2022-11-19 23:39:45 @saltig_ai Remember 4 years ago how LLMs &
2022-11-18 22:10:08 RT @MetaAI: Meet MultiRay, Meta’s new platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an…
2022-11-18 21:56:07 @mrgreene1977 You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs
2022-11-18 21:53:05 @Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?
2022-11-18 21:38:39 @mrgreene1977 You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica. The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.
2022-11-18 19:35:22 @mrgreene1977 In what scenarios would this type of generated text actually be harmful?
2022-11-18 19:28:47 @mrgreene1977 You might also want to look at Page 27 of that paper, Section 6, entitled "Toxicity and Bias".
2022-11-18 19:24:18 @mrgreene1977 You literally have no clue what's in the Galactica dataset and are making incorrect assumptions of incompetence. The training set consists of scientific papers and reference materials. You should have had a look at the paper (Appendix A, page 42): https://t.co/ZajoGZonB3 https://t.co/oLJ7ENYIt7
2022-11-18 17:59:13 @Abebab So Galactica is automatically bad because it comes from a "powerful, wealthy" and [according to you] "irresponsible" corp? We are talking about a *free and open source* demo put together by a small team of *real* people who are distraught by the attacks on their work.
2022-11-18 16:03:52 @yoavgo @GaryMarcus @rasbt @manes @Abebab Also registering on ArXiv requires *some* vetting that rules out troll farms.
2022-11-18 15:57:35 @Pestopublic I've known @Michael_J_Black for decades. I like him and respect him for his work. But I think he is just wrong on this point.
2022-11-18 15:53:54 @AVMiceliBarone @GaryMarcus @yoavgo @rasbt @manes @Abebab Serious scientific journalism involves asking uninvolved third-party experts about the correctness and importance of a new piece of work, *even* if the work has gone through a credible peer review process.
2022-11-18 15:47:37 @Abebab No claim has been walked back. But the team who built Galactica was so distraught by the vitriol on Twitter that they decided to take it down. So, progress towards a system that "stands up to scrutiny" has paused. Is that good?
2022-11-18 15:43:54 Good question. https://t.co/fUZ2JNkfeM
2022-11-18 15:40:41 @boompig Casual misuse for amusement is fine. But one might think serious scientists would be inclined to test (&
2022-11-18 15:32:07 @rasbt @manes @GaryMarcus @Abebab I was the editor for https://t.co/W0chxpFxHd on ArXiv for many years and the current president of the ICLR foundation. I'm pretty familiar with the issues. One can already flood ArXiv with generated non-sense. Galactica in itself will not make this better or worse.
2022-11-18 15:26:40 @SergeThill @Grady_Booch @RWerpachowski The only thing I can say is that you completely misinterpreted the description. Galactica *is* an assistant. As with any tool, you are in charge, in control, and *responsible* for what is produced with its assistance.
2022-11-18 15:02:12 @Grady_Booch @RWerpachowski I used this example on purpose. People will misuse tools and do stupid and dangerous things with them. Yet those driving assistance and collision avoidance systems, overall, reduce collisions by 40% and save lives. Banning them would be dangerous and unethical.
2022-11-18 14:39:13 @Abebab Sure, that's the point of demos. But does the discovery of a flaw need to be accompanied by vitriolic accusations of dangerousness and lack of ethics? The real question is: once perfected, would such a system facilitate the production of scientific content? Would you use it?
2022-11-18 14:30:30 @Grady_Booch @RWerpachowski The point is those systems provide driving assistance but shouldn't be used to drive your car while you sleep on the backseat. Similarly "writing assistance" shouldn't be used to generate text on random topics without a human keeping their hands in the keyboard at all times.
2022-11-18 14:22:16 @loybeek @RWerpachowski @Grady_Booch @ArthurD3791 So, what you are saying is that we should ban knives because, although they are extremely useful, they also present a risk that people will misuse them?
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it writes? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promotes fallacies like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to come up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction : https://t.co/9NoM8Xhaop
2022-11-15 21:14:59 Apple simply grabbed part of the advertising market for themselves under the guise of protecting their users privacy. "Privacy is protected if *we* collect the data, not if Meta or Google does it" https://t.co/hMxuDrWjQn
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous C-tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree. Original ideas that turn out to have a long shelf life first appear with results on toy problems. Only later do they get scaled up and shown to work on real problems. That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers. Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation, and number of input and output channels.
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics. Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless. In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors). I said that discussions about whether priors were necessary or not are pointless. Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent. So, asymptotically, it doesn't matter which distance measure you use. Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations. Infinite data contains the results of all possible experiments/interventions. With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
2022-11-29 07:36:37 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus Also because it is not the place of a communication platform to arbitrate political truth, beyond the basic measures to ensure public safety and the proper conduct of the democratic process. Regardless, it's a difficult trade-off to establish, as Elon is surely learning.
2022-11-29 07:33:34 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus An interesting corner case is misinformation by government officials or electoral candidates. Even if what they say may be harmful, it may need to be distributed because people need to know what their government officials say, especially when what they say is batshit crazy.
2022-11-29 07:30:07 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus Now, there are types of misinformation, malicious or not, that directly endanger public safety or the proper function of the democratic process. Those are unambiguously bad and should be taken down. Much of the rest is up for debate.
2022-11-29 07:28:15 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus Perhaps, but (1) how do you distinguish malice from, say, sarcasm, irony, or just incompetence, (2) someone's misinformation is someone else's absolute truth, particularly when it comes to politics or religion, (3) assuming you can solve 1 and 2, how do you automate the detection?
2022-11-29 07:15:27 @debadeepta @anilananth BellCore is that part of Bell Labs that spun off from AT&
2022-11-29 04:09:36 @anilananth Also Léon Bottou, Yoshua Bengio, Patrice Simard, John Denker, &
2022-11-29 03:57:43 RT @philipcball: Nice to see this question raised. Bell Labs was such an intellectual powerhouse at least up to and through the 1990s. http…
2022-11-29 03:04:55 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus As for (1), jokes, fictional stories, poetry, etc are technically "misinformation" but are perfectly harmless.
2022-11-29 03:03:09 @BobbyAlter @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus FB suppresses or severely down-ranks dangerous misinformation, such as misinfo about health (vaccines), voting sites, or other things that threaten public health, safety, and the democratic process.
2022-11-29 02:50:24 @asnar002 Alex Jones produces shitty gibberish without LLMs. The reason his gibberish is harmful is his large number of followers, not the volume of gibberish. Also, he has now been sued into oblivion, thankfully.
2022-11-29 02:44:29 @srikumarks @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus @JeanDreze @AltNews @theliverdr As I said, *some* misinformation is harmful. The most harmful is carefully crafted and distributed through influential outlets or personalities. LLMs will not help with careful crafting and distribution. Generated text with no promotion will remain obscure b/c of reasons 2 and 3.
2022-11-28 23:12:24 @relnox Galactica was not "marketed", unless you view one of my tweets as "marketing". It's a demo of a research project.
2022-11-28 23:11:10 @asnar002 Only scientists will find it useful/helpful. Others may attempt to use it. But if you feed it garbage, it will produce useless garbage. And then what can you do with garbage other than send it to the trash?
2022-11-28 23:05:14 @wamageed Read the tweet: IT SAVES YOU TIME AND EFFORT BY PREDICTING WHAT YOU MIGHT TYPE. It will not produce "sound science". That's for the scientist to do. Your hands must remain on the keyboard at all times.
2022-11-28 22:52:07 Worth repeating, in case that wasn't completely clear. https://t.co/IlW1LaB65g
2022-11-28 22:50:35 RT @imisra_: Attending #NeurIPS2022 this year. Super excited to see the awesome research! Organizing the 3rd SSL workshop https://t.co/QVJm…
2022-11-28 22:49:59 @iatevale @AlexKontorovich American politics is definitely an under-damped system.
2022-11-28 22:49:14 RT @bleepbeepbzzz: This paper argues that a conv prior is natural. (As I write in the blog post, it's not just innate post-evolution, but p…
2022-11-28 22:37:49 @honab199 That's what I've had for years.
2022-11-28 22:33:10 @aa73561 What you call Apple's "privacy features" are "let's keep the data to ourselves so we can grab a part of the advertising market" features. Apple doesn't do any less with your data than FB would. Neither sells nor shares the data with anyone.
2022-11-28 22:28:44 @mrgreene1977 @rhinigtas @ykilcher @GaryMarcus I'd say this question is at the crux of the current debate. There is tons of misinformation today. Some is harmful. Almost all of it is harmless because (1) it's actually harmless (2) very few people see it (3) the people who see it don't believe it and/or aren't harmed by it.
2022-11-28 22:20:30 Just do like most of the world and buy Android. It's not like the idea that Apple sells overpriced products that lock you into an annoying ecosystem is new or anything. https://t.co/AaxEw9Ygrw
2022-11-28 22:16:04 Tomorrow at #NeurIPS2022 Poster Session 2, Tue Nov 29, 17:00: "Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods" by Randall Balestriero, Yann LeCun https://t.co/2OhFCNGkP8
2022-11-28 20:10:03 @bleepbeepbzzz John Denker and I actually had a paper at PhysComp 1992 on the idea that "natural" priors, such as the prevalence of local connections in the brain, happen to be appropriate and for understanding a world where locality matters *and* efficient to implement https://t.co/iQ1UKJtdTi
2022-11-27 20:56:46 RT @ashkamath20: Will be in New Orleans presenting FIBER at NeurIPS this week (second poster session on Thursday, Dec 1st!). Come say hi!…
2022-11-27 19:50:04 @BirneHelene_ @randall_balestr Randall linked to the ArXiv preprint versions because linking to the NeurIPS website versions is more complicated.
2022-11-27 19:19:59 @BirneHelene_ @randall_balestr ??? Those are papers at NeurIPS. They have been peer reviewed in a double-blind fashion. The process is very selective.
2022-11-27 19:12:33 But we were told it was going to destroy the fabric of society https://t.co/pys87rgId2
2022-11-27 19:05:20 RT @StanDehaene: Mon cours au Collège de France, qui commence le 6 janvier 2023, portera sur le code neural des représentations mentales --…
2022-11-27 18:59:35 @AlexKontorovich It's the phase shift that makes the oscillator. With faster-reacting animals, this could be damped.
2022-11-27 16:58:38 @rao2z I do But I always decline honoraria.
2022-11-27 16:38:45 @gchrupala Printing press enabled literacy, allowing many to read the Bible. This led to protestantism in Europe, a decrease of the Catholic church influence, &
2022-11-27 16:37:58 @KarlLandheer Printing press enabled literacy, allowing many to read the Bible. This led to protestantism in Europe, a decrease of the Catholic church influence, &
2022-11-27 15:54:53 RT @paulkrugman: This is a very good article about US traffic deaths, which are now far above comparable nations. But it doesn't mention hu…
2022-11-27 15:49:44 RT @SCOTTeHENSLEY: We developed a new multivalent mRNA vaccine against all known influenza virus subtypes. Our study describing the vaccine…
2022-11-27 04:34:43 @StephenPAdams @Grady_Booch @Meta He then built a vote prediction model from it, which he illegally tried to sell to Cambridge Analytica. CA evaluated the model, decided it was useless &
2022-11-27 03:07:15 @StephenPAdams @Grady_Booch @Meta You have this wrong. A. Kogan was an academic at Cambridge U. who developed a questionnaire app that used the FB API. He collected data, not just from people who installed his app, but also illegally from some of their friends (about 1M people)....
2022-11-26 19:52:01 Four intersting presentations at NeurIPS co-authored by @randall_balestr and me with various sets of coauthors. https://t.co/5BycLBVe6e
2022-11-26 19:15:46 @Grady_Booch @Meta (Also, you are the one who brought up the question Meta's ethics into this conversation).
2022-11-26 19:13:18 @BirneHelene_ @Grady_Booch @Meta Meta being large and influential, it creates a whole ecosystem of critics. The author of this particular article has some sort of vendetta against Meta, and against me for some reason.
2022-11-26 19:08:26 @yoavgo @Grady_Booch @ddanielbee @Meta Excellent point.
2022-11-26 19:07:57 @ProfNoahGian @Grady_Booch @Meta If it's just marketing, and if it was all about profits, why would FB do this *knowing in advance* that it was going to decrease usage time, profits, and stock price?
2022-11-26 18:59:06 @juanmirod You can already do that be choosing not to read or interact content or sources you judge unreliable. The ML-based feed ranking system will figure that out pretty quickly.
2022-11-26 18:59:06 @juanmirod You can already do that by choosing not to read or interact with content or sources you judge unreliable. The ML-based feed ranking system will figure that out pretty quickly.
2022-11-26 18:55:59 @juanmirod Generated content is very far from being the problem here.
2022-11-26 18:54:35 @Grady_Booch @Meta This narrative sells books and attracts clicks on news websites. But it's false. E.g. Feed was overhauled in 2018 to favor "meaningful social interactions". People spent less time on FB. Profits went down, the stock tanked. FB predicted this, but it was the Right Thing to do.
2022-11-26 18:47:22 @juanmirod That's false. You don't make money with outrage. You make money with happy users and correspondingly happy advertisers.
2022-11-26 18:46:09 @juanmirod That's called crowdsourcing moderation. The problem is that lots of people flag things they disagree with, whether it's legitimate or not, truthful or not. Yes, outrageous content attracts more clicks. But it doesn't mean that people actually believe it.
2022-11-26 18:19:15 @Grady_Booch @ddanielbee @Meta A) is an extensive annotated bibliography with links to the corresponding peer reviewed studies. Have you looked at it?
2022-11-26 18:04:35 @Grady_Booch @Meta The interpretation of what those documents contain is up to the whistleblower, the journalists, the editors, and the publisher. Take any piece of news and see how Fox News spins it to fit their narrative. Same media company as WSJ. Decades of evidence.
2022-11-26 17:04:05 @Grady_Booch @Meta No, I'm accusing Murdoch of being a much greater danger to Truth and democracy in the Anglosphere than all social networks and AI research combined. In this, I agree with Australia's former prime minister. https://t.co/xUoQbi9Dd3
2022-11-26 17:00:53 @ddanielbee @Grady_Booch @Meta If you are interested in a recent assessment from Facebook's VP of user research, here it is: https://t.co/j2RFEZzZFw
2022-11-26 16:59:00 @ddanielbee @Grady_Booch @Meta You have? I suggest you have a look at serious scholarly studies on the effect of social media on political dysfunction. You will see that the social science community is extremely divided in its assessment. This Google doc is an annotated bibliography: https://t.co/Ri5NLKxf5Z
2022-11-26 16:45:27 @HeinrichKuttler Haha! Every Thanksgiving involves some sort of turkey shoot. Sic transit gloria mundi. https://t.co/ZlbKd2LG3v
2022-11-26 06:50:03 RT @arimorcos: I'll be at #NeurIPS2022 this week! Will be presenting "Beyond neural scaling laws" (Outstanding Paper) Wed morning and at @M…
2022-11-26 00:19:44 @AbstractionPhys @Silimund @j_a_tucker Alright dude, you've gone way off the deep end.
2022-11-25 23:39:32 @LucaAmb I'm a professor
2022-11-25 23:37:28 @BobbyAlter @GaryMarcus FB will down-rank your posts into oblivion. Not sure about Twitter. Regardless, no one would read your content, unless you somehow build a large following beforehand. But that's hard to do, particularly with generated content.
2022-11-25 23:33:20 @Grady_Booch Quoting HAL9000 ? Looks like you were the LLM all along https://t.co/ABQPQA4Roe
2022-11-25 23:23:42 @leonpalafox @Grady_Booch @Meta https://t.co/GT8Q6TuVp1
2022-11-25 22:47:38 @leonpalafox @Grady_Booch @Meta But then how do you interpret what you read in the material? Meta has a famously open culture. Lots of people say lots of things about lots of topics on internal forums. They are not necessarily the most informed, and rarely the ones making decisions.
2022-11-25 20:57:42 @GaryMarcus @Grady_Booch @Meta Do you really want to play a game of rank pulling?
2022-11-25 20:52:26 @AbstractionPhys @Silimund @j_a_tucker So you have a peer-reviewed publication in Science (no less) with data to back up your claim, like the paper I link? Or are you just pulling this out of some two-bit language model?
2022-11-25 20:46:51 @Grady_Booch @Meta If I believed that the risks of Galactica were as high as you claim them to be, then I would turn to action. But I just think you are wrong. Do not interpret my disagreement with your predictions as a lack of interest in ethical questions, nor as a moral failing leading to no action.
2022-11-25 20:28:27 @nazarre @Grady_Booch But there is only one Q-Anon. There are 1000s of Q-Anon wannabes. You never heard of them because their posts never get more than a few viewers. Hence my non-naive point: almost all content is in the long tail of the power-law distribution that no one looks at. The problem is to get heard.
2022-11-25 20:15:45 @arthurbmello Haha!
2022-11-25 20:15:01 @Grady_Booch @Meta You think you know that because you read it in one of Rupert Murdoch's rags. But it's false. You are being manipulated by a media group with a track record of viewing the tech industry as an ideological and financial enemy.
2022-11-25 19:56:54 RT @polynoamial: I'm excited to meet folks at #NeurIPS2022! I'll be there all week
2022-11-25 19:56:35 RT @marc_mezard: The fourth special issue of JSTAT on the Statistical Physics aspects of Machine Learning/Artificial Intelligence has been…
2022-11-25 19:34:14 @Grady_Booch @Meta You have absolutely no idea what you are talking about.
2022-11-25 19:30:51 @Grady_Booch @Meta But you know what? Questions of ethics rarely have simple answers. They often involve trade-offs &
2022-11-25 19:24:58 @Grady_Booch @Meta Yes, ethical considerations must be a factor in all *technology* (not just software). Contrary to your claim, this is something that is on my mind all the time. It's ridiculous to think ethical questions aren't on the minds of *every* decision maker at @Meta.
2022-11-25 19:08:53 Galactica: the science, the paper, and the Twitter drama. https://t.co/80HX1N3Zoh
2022-11-25 17:07:35 @YagaoDirac Exactly!
2022-11-25 16:54:43 @therealjoeallen By the time our lunch is over, it will be the End of Times.
2022-11-25 00:27:37 @abhishek_s_1 I can tell you that social platforms (FB in particular) have systems to detect these things and take them down.
2022-11-25 00:24:44 @Silimund Here is a '19 Science paper by my colleagues @j_a_tucker et al. at the NYU Social Media and Political Participation (SMaPP) Lab: https://t.co/qpgJ5sclz7 Quote: "On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group"
2022-11-25 00:09:15 @atamahjoubfar Sort contents by decreasing number of views &
2022-11-25 00:05:07 @atamahjoubfar No, because there is a constant amount of eyeball-hours. The bottleneck is that there is much more content available than there are hours people are spending absorbing it. So people tend to be selective and only pay attention to content they enjoy or think they need.
2022-11-25 00:00:50 @json_dirs Whenever someone finds an automated way to attract search engines hits, search engine companies quickly deploy countermeasures.
2022-11-24 23:59:08 Just do the experiment: 1. create a new Twitter account 2. write a piece of disinformation and post it. 3. generate a piece of disinformation and post it. 4. See how NOBODY pays attention to either #2 or #3. 5. repeat 1-3 using a Twitter account with 1 million followers....
2022-11-24 23:53:49 @tunguz The smaller it gets, the more information I produce. The latest tweetstorm is actually me evaporating quickly.
2022-11-24 23:51:21 @vayuvegula Thank you and best wishes.
2022-11-24 23:48:25 @traderyau No risk of that.
2022-11-24 23:46:55 @mszczodrak If it's an LLM, it's a pretty good one, no?
2022-11-24 23:46:07 @wayneholmes Some push-backs were reasonable and deserved an answer. Some were too disconnected from reality to deserve an answer. Some were insults, personal attacks, or accusations of guilt-by-association. And then some came from attention-seekers with whom I specifically avoid engaging.
2022-11-24 22:00:01 @ziv_ravid It's a good idea, with some high-level conceptual connection with Boltzmann Machines, but it's not the same!
2022-11-24 21:57:39 I'm *actually* a black hole Any information coming out of me is a form of Hawking radiation. As with any black hole, any information I absorb increases my surface area proportionally. Basically, I'm overweight because I'm over-educated Happy Thanksgiving everyone https://t.co/l9k7ZAsTfq
2022-11-24 21:31:45 @dlowd It's the kind of thing that search engines build countermeasures against when they start becoming a problem. It's a never-ending game of whack-a-mole.
2022-11-24 21:29:44 @dlowd It looks like some obscure site that makes money by showing ads and uses generated text to attract hits from search engines. There are tons of those, most of which don't use LLMs. The content is merely designed to fool search engines' spam filters, not human readers.
2022-11-24 21:00:40 @mdcatchen @fhuszar But there is a big difference between missing some hateful posts (by Myanmar govt sock puppet accounts, no less) while ramping up Burmese hate speech detection as quickly as possible and "promoting hate speech" as claimed in this title.
2022-11-24 20:54:57 @mdcatchen @fhuszar Transformer-based multilingual NLP only became available in the last 2 years. The Myanmar events took place long before. Hate speech detection in Burmese had to be done manually outside Myanmar. That's one reason FAIR built the world's best Bur-Eng translation system in 2018.
2022-11-24 20:02:03 @agarret7 @fhuszar Prejudice again!
2022-11-24 19:57:17 @QuotedforTruth @dmonett I have better things to do with my time than spending all of it responding to baseless accusations of guilt-by-association. And I don't work on product policy or on integrity and content moderation at Meta. There are entire divisions working on that. I do fundamental research on AI.
2022-11-24 19:50:48 @DrCMcMaster @BablBrain *All* important research should be scrutinized and questioned, regardless of its origin. One thing about Meta-FAIR though is that our research is famously open, with papers, code, data, and yes demos. In fact the team that produced Galactica is even called Papers With Code !!!
2022-11-22 23:29:20 @tdietterich @leonpalafox @wightmanr @EMostaque @paperswithcode @MetaAI Indeed. In fact, since Galactica was trained on scientific papers, it's likely to be more benign than other LLMs. FYI: OPT-175b weights aren't available, but the smaller OPT weights are.
2022-11-22 23:21:43 A video of CICERO in action, and a link to the Science paper in this thread by the CICERO team lead @polynoamial . https://t.co/l7JAqJNJUh
2022-11-22 23:17:42 RT @polynoamial: 3 years ago my teammates and I set out toward a goal that seemed like science fiction: to build an AI that could strategic…
2022-11-22 20:55:15 Paper here https://t.co/krV5zaxh6p
2022-11-22 17:05:40 @aimatej At least the language is grounded in an underlying reality.
2022-11-22 17:04:06 @jarkko_kuoppala As the video points out, it turns out that lying is not a good strategy to win at Diplomacy.
2022-11-22 15:56:18 @chickenbreast68 @Grady_Booch @thesasho @guyi @Abebab Exactly.
2022-11-22 15:53:10 Correction "top 10%", not "top 10".
2022-11-22 15:49:30 CICERO plays the strategy game Diplomacy at human level. It is able to use language to build relationships with humans and collaborate with them to achieve a goal. https://t.co/eg3OIsXzpx
2022-11-22 15:45:21 Big AI milestone today: CICERO, an AI agent that can negotiate and cooperates with people. It is the first AI system to achieve human-level performance in the popular strategy game Diplomacy. Cicero ranked in the top 10 of participants on https://t.co/dC0sCIWAc8 https://t.co/Kb0InCVMX4
2022-11-22 13:59:12 @TheRandomMtrix @GaelVaroquaux And China uses ConvNets for massive face recognition. But would those things have been sufficient reasons to keep ConvNets and transformers under wraps? (Assuming that was even possible) Our best protection against abusive use of technology is the strength of democratic institutions
2022-11-22 13:36:32 @TheRandomMtrix @GaelVaroquaux China and Russia simply block FB &
2022-11-22 13:32:26 @davidmanheim @GaelVaroquaux The only difference between a BERT-style pre-trained transformer and an LLM is which part of the input you mask during training. For an LLM, you just mask the last word.
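The masking distinction in the tweet above can be made concrete with a toy sketch. This is not from any real training pipeline; the token list and the deterministic every-third-token mask are illustrative assumptions:

```python
# Toy illustration: BERT-style masked LM vs. causal (GPT-style) LM
# differ only in *which* part of the input is hidden during training.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# BERT-style MLM: hide some positions (here, deterministically every
# third token) and train the model to reconstruct them.
masked = ["[MASK]" if i % 3 == 2 else t for i, t in enumerate(tokens)]

# Causal LM: each prefix predicts the next token, i.e. the "mask"
# always covers everything after the current position.
lm_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

print(masked)       # ['the', 'cat', '[MASK]', 'on', 'the', '[MASK]']
print(lm_pairs[0])  # (['the'], 'cat')
```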
2022-11-22 04:42:27 @Miles_Brundage s/conversion/conversation/
2022-11-22 04:38:26 @csabaveres @saltig_ai It might very well be. But if we are talking about the US, the UK or Australia, I would join former Australia PM Kevin Rudd and blame Rupert Murdoch. LLM effects are way below the noise floor, if they exist at all. https://t.co/difREgo0Ct
2022-11-22 04:19:37 @Miles_Brundage Anyway, thanks for holding a rational conversion without turning it into a shouting match with accusations of ill intent or stupidity.
2022-11-22 04:12:31 @ColHilbertSpace @GaelVaroquaux Working at the largest social network company in the world, I'm quite familiar with the concept of "tsunami of BS" Thankfully, it's being held back by a *lot more* than the mere difficulty of writing authoritative sounding prose. At least on FB.
2022-11-22 04:06:16 @Miles_Brundage The point is, if LLMs could so easily be used to flood the world with harmful disinformation, it would have happened already. Lots of bots spew misinformation online. But so far, they have been little more than an annoyance. They are taken down on FB &
2022-11-22 03:59:09 @Miles_Brundage For Galactica specifically, look at section 6 entitled "toxicity &
2022-11-22 03:53:29 @Miles_Brundage Now there have been *enormous* benefits to the use of large-scale transformers (different from generative LLMs, but the same underlying tech) particularly in language translation and multilingual content moderation.
2022-11-22 03:50:33 @Miles_Brundage LLMs have now been widely available for 4 years. That's plenty of time to observe any deleterious side effect. What are they? I'm asking about *actual* harm, not hypothetical/potential harm.
2022-11-22 03:48:01 @Miles_Brundage My comment was about LLMs in general, not Galactica in particular. You have studied their effects and published about it. After a short waiting period, your employer decided to release LLMs for general use. So you must have concluded that the benefits greatly outweigh the risks.
2022-11-22 01:51:22 @MereSophistry @cgarciae88 How about a 60-page technical paper? https://t.co/ZajoGZoVqB
2022-11-22 01:34:45 @falsalem76 Indeed they are.
2022-11-21 21:22:11 @jessyseonoob Even that would only attract undeserved attention to him
2022-11-21 21:17:38 @3DTOPO @cgarciae88 Just curious, what's your experience in writing scientific papers?
2022-11-21 21:01:08 One of those dudes is appealing to journalists, claiming that I'm refusing to answer "the critical question", which is oh so unethical and revealing of my moral turpitude and the purported incompetence of my employer! But I simply refuse to answer *any* of *his* provocations.
2022-11-21 20:36:17 @LilithByTheSea There are literally 100s of very talented people at Meta working on ML for content moderation. In fact, there is a whole division called "integrity" working on this + security &
2022-11-21 20:29:43 @WickedViper23 It's a weekend hobby
2022-11-21 19:41:22 There are people asking me the same question multiple times on Twitter. They want their followers to believe that I don't respond because I don't have good answers. I have answers. But I don't engage because it always turns out to be a giant waste of time. Trolls gonna troll.
2022-11-21 19:34:23 @LilithByTheSea @johann_p I think you grossly underestimate the benefits, and vastly overestimate the risks.
2022-11-21 19:18:01 @chris_jwala @bartholmberg @pmddomingos I'm just about as atheist as Sam Harris, and as rationalist as Steven Pinker (though I disagree with him on nativism).
2022-11-21 19:13:42 @LilithByTheSea @yoavgo While this type of misinformation is dangerous, LLMs have had no role in producing it. LLMs have been around for 4 years, but their use for such nefarious purpose is entirely hypothetical. In fact, large-scale NLP systems have played a big role in *suppressing* it.
2022-11-21 14:19:46 Any opinion on this? https://t.co/GIDvn5s5IX
2022-11-20 23:46:14 @yoavgo @ykilcher What? What 3rd party?
2022-11-20 23:45:36 @ykilcher You just wait. Or don't and download the open source release.
2022-11-20 23:01:59 @untitled01ipynb @rsalakhu @GaryMarcus Well, Gary is the one doing the attention-seeking trolling. I sometimes countertroll, but only very rarely.
2022-11-20 22:59:19 @horstao @rsalakhu Don't worry, Galactica is very much alive. And thank you for the kind words
2022-11-20 22:29:57 @LeonDerczynski @Aspie96 @artistexyz Only if you misuse it. Garbage in, garbage out.
2022-11-20 22:28:02 @krivokuca @dela3499 @lexfridman He has been banned from Twitter and FB since January!
2022-11-20 22:14:51 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab I guess you'll be able to judge for yourself at some point.
2022-11-20 20:24:50 @oliverobst @rsalakhu It is actually the case. Also a high level of openness and an adherence to the "release early, release often" mantra popularized by the open source movement. Certainly night and day compared with Apple, where you can't even ask your office neighbors what they work on
2022-11-20 20:19:30 @rsalakhu You don't understand, Russ. This is a kind of weekend hobby for me But you're right: I can't imagine this kind of thing happening at Apple. I mean, until recently Apple employees weren't even allowed to show their affiliation on their name tags at conferences
2022-11-20 20:12:01 @dela3499 @lexfridman It is quite obvious that Trump-style authoritarianism has *not* been "countered by rational arguments and kept in check by public opinion". We are way past that stage. Trump keeps denying the validity of elections and other factual truths, and has attempted a coup.
2022-11-20 19:57:48 @saplaksnis @kelvindotchan @huggingface Has that actually happened? If LLMs made that so easy, it should have happened by now.
2022-11-20 19:55:13 @artistexyz @LeonDerczynski Find a single instance of me ridiculing causal inference. I'll wait. Incidentally, a number of people at FAIR have worked, and still work, on causal inference, including my old friend and colleague Léon Bottou.
2022-11-20 19:33:59 @PeterShor1 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Do you think the demo would have been released if it were just a time waster?
2022-11-20 17:56:55 @vokaysh @AlphaSignalAI @omarsar0 True on Twitter. Not on other social media in my experience.
2022-11-20 17:56:20 @KarlXOblique @AlphaSignalAI @omarsar0 You seem awfully fond of promptly throwing the bath water without checking if there is a baby in it.
2022-11-20 17:53:53 @ASteckley @CriticalAI @GaryMarcus @TonyZador @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness. Galactica is an LLM trained on scientific writing and equipped with toxicity filters. It is reasonable to expect it to be even less "dangerous" than your average LLM.
2022-11-20 17:48:26 @krivokuca Public rational discourse completely broke down under Trump. Obviously, the public discourse antidote already failed to "keep them in check by public opinion". Time for more drastic measures.
2022-11-20 17:45:18 @LudwigArisleib I submit that an authoritarian leader who (1) ignores the result of a fair election, (2) attempts to stay in power by force, (3) ignores all truth and rational argument (and encourages his followers to do the same), absolutely *does* fit Popper's criteria.
2022-11-20 17:36:32 @IanFelipeSays @lexfridman I look at who wants to preserve the principles of liberal democracy. Authoritarianism, summarily ignoring the results of an election, and attempting to stay in power by force, is obviously not on that side.
2022-11-20 17:34:33 @Kubilay_1453 @lexfridman Which side wanted to replace what remains a liberal democracy (albeit a flawed one) with an authoritarian leader who ignored the result of a fair election and attempted to remain in power by force?
2022-11-20 02:52:48 @thesasho @guyi @Grady_Booch @Abebab Writing bogus papers, or having them written automatically, will make you a bogus researcher with no future.
2022-11-20 02:50:26 @JacquesThibs Thanks.
2022-11-20 02:06:39 @AlphaSignalAI Thanks for not remaining quiet.
2022-11-20 02:05:15 @Caleb_Speak @andrewthesmart @mrgreene1977 Large Language Models have been widely available for 4 years. What harm have they caused? Have they actually been used to cause any of the catastrophe scenarios that have been listed on these threads?
2022-11-20 01:57:50 @MarielzaTalks @mrgreene1977 Describe a scenario in which Galactica would be used to do so and cause "tremendous harm" more "efficiently" than without it.
2022-11-20 01:49:29 @tomtaroo @Abebab Last time I checked, I wasn't a company. How's that for sarcasm?
2022-11-20 00:35:07 @GaryMarcus @drng @Grady_Booch @Jeff_Aronson @Abebab @mrgreene1977 Garbage in, garbage out.
2022-11-20 00:33:49 @leonpalafox @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Then Galactica would be useless and quickly forgotten. But I'm quite sure it will turn out to be useful. The sad thing is that none of the critics actually tried to use it for that purpose.
2022-11-20 00:31:08 @drng @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab Inference based on what evidence?
2022-11-20 00:28:16 @kelvindotchan It would be easier to attempt that with one of the standard open source LLMs (tons of them available from @huggingface). LLMs have been widely available for 4 years. Number of LLM-powered WMDs so far: zero.
2022-11-20 00:23:33 @gwynsoul @Michael_J_Black As much as I like and respect Michael, I totally disagree with him on that point.
2022-11-20 00:22:50 @LeonDerczynski No, we don't.
2022-11-20 00:22:19 @wissam_antoun Exactly.
2022-11-20 00:21:59 @Yann_Le_Du The creators of Galactica were distraught by the vitriol and negativity on Twitter. They couldn't take it any longer. They genuinely believe they produced something very valuable, and so do I. I have thick skin and can take the blows for them
2022-11-20 00:12:53 @guyi @Grady_Booch @Abebab Galactica's main purpose is to predict what you are about to type while writing a scientific paper. It just needs to be accurate enough often enough to save you time and effort when writing the paper.
2022-11-20 00:09:54 @GaryMarcus @Grady_Booch @Jeff_Aronson @Abebab It doesn't need to "stick to reality" to be both useful and harmless. It just needs to predict what you might be about to write, and be accurate enough often enough to help you write your paper and save you time and effort.
2022-11-20 00:02:15 @srijankedia @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian The bottleneck is not in the production but in the difficulty of disseminating the content widely, attracting the attention of the public, and getting people to believe it. Content generated automatically with little human oversight has zero chance of making it.
2022-11-19 23:58:01 @mpimemes @saltig_ai Tell us what you think happened with Cambridge Analytica.
2022-11-19 23:54:32 @jppesky @GaryMarcus @mrgreene1977 Yes, because I'm clearly clueless about how scientific information gets disseminated. And I'm equally clueless about the potential uses of LMs, having worked with them for only 13 years. By the way, LLMs have been widely available for about 4 years. What harm have they caused?
2022-11-19 23:39:45 @saltig_ai Remember 4 years ago how LLMs &
2022-11-18 22:10:08 RT @MetaAI: Meet MultiRay, Meta’s new platform for efficiently running large-scale, state-of-the-art AI models. By converting input to an…
2022-11-18 21:56:07 @mrgreene1977 You are pulling your tweet out of thin air and obviously haven't read the Galactica paper, particularly Section 6, page 27 entitled "Toxicity and Bias". https://t.co/bfZSwffQYs
2022-11-18 21:53:05 @Abebab Who has Galactica hurt? Will you be upset if it gains wide adoption once deployed? What if it actually helps scientists write papers more efficiently and more correctly, particularly scientists whose main language is not English, or who don't work in a major research institution?
2022-11-18 21:38:39 @mrgreene1977 You make the same incorrect assumption of incompetence about journalists and academics as you previously made about the creators of Galactica. The literal job of academics and journalists is to seek the truth and to avoid getting fooled by nature, other humans, or themselves.
2022-11-18 19:35:22 @mrgreene1977 In what scenarios would this type of generated text actually be harmful?
2022-11-18 19:28:47 @mrgreene1977 You might also want to look at Page 27 of that paper, Section 6, entitled "Toxicity and Bias".
2022-11-18 19:24:18 @mrgreene1977 You literally have no clue what's in the Galactica dataset and are making incorrect assumptions of incompetence. The training set consists of scientific papers and reference materials. You should have had a look at the paper (Appendix A, page 42): https://t.co/ZajoGZonB3 https://t.co/oLJ7ENYIt7
2022-11-18 17:59:13 @Abebab So Galactica is automatically bad because it comes from a "powerful, wealthy" and [according to you] "irresponsible" corp? We are talking about a *free and open source* demo put together by a small team of *real* people who are distraught by the attacks on their work.
2022-11-18 16:03:52 @yoavgo @GaryMarcus @rasbt @manes @Abebab Also registering on ArXiv requires *some* vetting that rules out troll farms.
2022-11-18 15:57:35 @Pestopublic I've known @Michael_J_Black for decades. I like him and respect him for his work. But I think he is just wrong on this point.
2022-11-18 15:53:54 @AVMiceliBarone @GaryMarcus @yoavgo @rasbt @manes @Abebab Serious scientific journalism involves asking uninvolved third-party experts about the correctness and importance of a new piece of work, *even* if the work has gone through a credible peer review process.
2022-11-18 15:47:37 @Abebab No claim has been walked back. But the team who built Galactica was so distraught by the vitriol on Twitter that they decided to take it down. So, progress towards a system that "stands up to scrutiny" has paused. Is that good?
2022-11-18 15:43:54 Good question. https://t.co/fUZ2JNkfeM
2022-11-18 15:40:41 @boompig Casual misuse for amusement is fine. But one might think serious scientists would be inclined to test (&
2022-11-18 15:32:07 @rasbt @manes @GaryMarcus @Abebab I was the editor for https://t.co/W0chxpFxHd on ArXiv for many years and am the current president of the ICLR foundation. I'm pretty familiar with the issues. One can already flood ArXiv with generated nonsense. Galactica in itself will not make this better or worse.
2022-11-18 15:26:40 @SergeThill @Grady_Booch @RWerpachowski The only thing I can say is that you completely misinterpreted the description. Galactica *is* an assistant. As with any tool, you are in charge, in control, and *responsible* for what is produced with its assistance.
2022-11-18 15:02:12 @Grady_Booch @RWerpachowski I used this example on purpose. People will misuse tools and do stupid and dangerous things with them. Yet those driving assistance and collision avoidance systems, overall, reduce collisions by 40% and save lives. Banning them would be dangerous and unethical.
2022-11-18 14:39:13 @Abebab Sure, that's the point of demos. But does the discovery of a flaw need to be accompanied by vitriolic accusations of dangerousness and lack of ethics? The real question is: once perfected, would such a system facilitate the production of scientific content? Would you use it?
2022-11-18 14:30:30 @Grady_Booch @RWerpachowski The point is those systems provide driving assistance but shouldn't be used to drive your car while you sleep on the backseat. Similarly "writing assistance" shouldn't be used to generate text on random topics without a human keeping their hands on the keyboard at all times.
2022-11-18 14:22:16 @loybeek @RWerpachowski @Grady_Booch @ArthurD3791 So, what you are saying is that we should ban knives because, although they are extremely useful, they also present a risk that people will misuse them?
2022-11-17 21:13:24 ImageNetX: more detailed annotations for ImageNet. https://t.co/AhulGrit05
2022-11-17 20:38:15 Pretty much exactly what happened. https://t.co/4zGRgiyS7C
2022-11-17 19:36:38 @Sergei_Imaging @Grady_Booch Paused.
2022-11-17 19:32:33 @Grady_Booch The vast majority of modern AEBS are made by MobilEye, and they do use ConvNets.
2022-11-17 19:31:35 @Grady_Booch Same with Galactica.
2022-11-17 19:31:10 @Grady_Booch Same with Galactica.
2022-11-17 18:25:41 @EMostaque @rao2z @MetaOpenSource Exactly. It's open source.
2022-11-17 17:20:41 Galactica demo is offline for now. It's no longer possible to have some fun by casually misusing it. Happy? https://t.co/K56r2LpvFD
2022-11-17 17:08:14 @ArthurD3791 @Grady_Booch You'll see.
2022-11-17 14:06:03 @Grady_Booch Oh come on Grady! Is your predictive keyboard dangerous and unethical? Is GitHub Copilot dangerous and unethical? Is the Automatic Emergency Braking System in your car dangerous and unethical because it doesn't do Level-5 fully autonomous driving?
2022-11-17 13:08:47 @mostlygalaxies @MilesCranmer Do you give attribution to your predictive keyboard for words it writes? To your spelling corrector for mistakes it fixes? To your computer for results it produces?
2022-11-17 13:06:43 Exactly. https://t.co/R8XWHbqwYy
2022-11-17 12:02:17 RT @paulkrugman: Catching up on Trump's speech — and noticing that they can't quit gas prices, even though they're not under policy control…
2022-11-17 04:08:31 RT @c_caucheteux: Our work has now been accepted to NeurIPS 2022 !! `Toward a realistic model of speech processing in the brain with sel…
2022-11-17 03:39:36 @mariososadi @GaryMarcus @MetaAI @paperswithcode One is a regular CNC machine, one is a laser cutter/engraver, and the last one is a high precision CNC for engraving circuit boards.
2022-11-17 03:35:39 @jasonslenderman Soon.
2022-11-17 03:27:07 RT @DanielSodickson: @ylecun @MatiasCalandre2 @MetaAI @paperswithcode A quote from Curt Langlotz at Stanford gives the direct analogy for M…
2022-11-16 21:06:33 @antoniogulli @MetaAI @paperswithcode Because it's new and lots of people want to try it at the same time.
2022-11-16 17:22:24 @MatiasCalandre2 @MetaAI @paperswithcode Real articles will contain new and interesting science. That will include articles whose authors used Galactica to help them write those papers.
2022-11-16 17:15:10 @mariososadi @GaryMarcus @MetaAI @paperswithcode I have 3 CNC machines in my home workshop, and I don't do mass production.
2022-11-16 15:28:30 Correcting sh*tposting about the proper way to use a new AI tool is one way to get me to retweet your tweet. https://t.co/rmYNyaVgte
2022-11-16 15:03:21 @togelius A better question is: how much time &
2022-11-16 13:23:05 @mjs2342 @GaryMarcus @MetaAI @paperswithcode It's only nonsense for people who misinterpret it.
2022-11-16 13:22:27 @GaryMarcus @MetaAI @paperswithcode When you have a tool at your disposal, you have to know what to use it for and how. E.g. a CNC machine will help you build a piece of furniture, but it won't design it for you. Galactica will help you write papers, but you still have to come up with the substance of the paper.
2022-11-16 13:11:19 @rogerkmoore It encourages laziness and promotes fallacies like the predictive typing and spelling corrector on your mobile keyboard. It will help you write scientific papers, but it won't come up with the substance of the paper.
2022-11-16 13:07:56 @ezeferrero Answering short questions is not what the system was built to do. It's designed to help you write scientific papers. But you still have to come up with the substance of the paper. The system will help you fill in the text, references, formulas, and SOTA results.
2022-11-16 12:59:18 @rayohauno @zdeborova That's called https://t.co/tOM7lHcmSz
2022-11-16 12:58:51 @zdeborova There is a simple solution to this: ignore predatory journals, avoid for-profit publishers, &
2022-11-16 02:13:22 @honab199 Google Pixel 6
2022-11-16 00:34:12 This tool is to paper writing as driving assistance is to driving. It won't write papers automatically for you, but it will greatly reduce your cognitive load while you write them. https://t.co/0WgR8DWUV6
2022-11-15 21:57:09 @janosch_ortmann @MetaAI @paperswithcode Spell out KPZ perhaps? https://t.co/3kENSaFAMj
2022-11-15 21:41:22 @omarsar0 @MetaAI @paperswithcode Yes!
2022-11-15 21:40:25 Correction: https://t.co/9NoM8Xhaop
2022-11-15 21:14:59 Apple simply grabbed part of the advertising market for themselves under the guise of protecting their users privacy. "Privacy is protected if *we* collect the data, not if Meta or Google does it" https://t.co/hMxuDrWjQn
2022-11-15 20:53:34 RT @JitendraMalikCV: Our robot dog can go up and down stairs, walk on stepping stones where even a single bad foot placement would lead to…
2022-11-15 20:43:49 A Large Language Model trained on scientific papers. Type a text and https://t.co/XKTkxs8Ae0 will generate a paper with relevant references, formulas, and everything. Amazing work by @MetaAI / @paperswithcode https://t.co/IWGNAXiFeU
2022-11-15 17:25:21 @togelius @chriswolfvision You exposed yourself to cancelation by only mentioning European tribes and empires, and by failing to mention the indigenous C-tribes whose systematic marginalization pushed them to refugee status in Ireland, Brittany, Scotland, Wales, Galicia, and Asturias.
2022-11-15 12:40:15 @tdietterich @DebasmitDas1 @roydanroy Actually, I disagree. Original ideas that turn out to have a long shelf life first appear with results on toy problems. Only later do they get scaled up and shown to work on real problems. That's because innovative ideas require lots of tweaks to work, which take time to develop.
2022-11-15 02:50:59 RT @NoemaMag: “Language doesn’t exhaust knowledge
2022-11-15 02:46:06 RT @neiltyson: Vaccine hesitancy, which was much higher among Republican voters than Democrats during COVID, led to disproportionate deaths…
2022-11-14 19:37:08 A visual history of neural net research through diagrams from papers. Philipp was artist-in-residence in my NYU lab, funded by the Berggruen Foundation, when he started this project. https://t.co/X4OXdKIIce
2022-11-14 19:35:10 @MaxGruenberg @philippschmitt @haltingproblem Diagrams became more abstract. You no longer needed to explain what a convolutional layer was. You merely had to say it was a Conv together with the kernel size, stride, dilation, and number of input and output channels.
2022-11-14 14:41:49 @dntse @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @AlexTensor @isbellHFh When digital communication started taking off, the channel capacity theorems played a similar role as the 2nd law of thermodynamics. Trying to find encoding schemes that went above the Shannon limit was as pointless as trying to design a perpetual motion engine.
2022-11-14 14:37:58 @ChombaBupe @RWerpachowski Sorry, but you seem to have misunderstood my point.
2022-11-14 14:37:06 @ChombaBupe @RWerpachowski I didn't say that *all* discussions about priors were pointless. In fact, an enormous proportion of ML/CV/NLP papers are all about architecture (i.e. priors). I said that discussions about whether priors were necessary or not are pointless. Of course they are necessary!
2022-11-14 14:34:07 @ChombaBupe @RWerpachowski For small distances, all distance measures are equivalent. So, asymptotically, it doesn't matter which distance measure you use. Of course, in practice, which distance/kernel you use matters *a lot*.
2022-11-14 14:30:02 @KordingLab @yudapearl @RasulElon @pmddomingos Imagine an input contains, not just observations, but also a description of experiments/interventions with resulting observations. Infinite data contains the results of all possible experiments/interventions. With this, a prior-free model will learn causal relationships.
2022-11-14 13:51:12 RT @gabrielpeyre: Oldies but goldies: K Fukunaga, L Hostetler, The Estimation of the Gradient of a Density Function, 1975. The mean-shift a…
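The mean-shift procedure mentioned in the retweet above can be sketched as follows. This is a minimal Gaussian-kernel version of my own, not Fukunaga & Hostetler's exact formulation: each step moves a query point toward the kernel-weighted mean of the data, which follows an estimate of the density gradient up to a mode.

```python
import numpy as np

def mean_shift_step(points, x, bandwidth):
    """One mean-shift update: move x to the Gaussian-weighted mean of the
    data points, which points in the direction of the density gradient."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w[:, None] * points).sum(axis=0) / w.sum()

def mean_shift(points, x, bandwidth=1.0, tol=1e-6, max_iter=200):
    """Iterate the update until the step size falls below tol;
    x converges to a local mode of the estimated density."""
    for _ in range(max_iter):
        x_new = mean_shift_step(points, x, bandwidth)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With points clustered around the origin, a query started away from the cluster drifts to the mode at the cluster center.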
2022-12-07 19:54:13 @MaximZiatdinov Yes. Proteins, materials, optics, optoelectronics, energy storage, carbon capture,...
2022-12-07 19:49:21 Given Elon's positive response to Emmanuel Macron demanding hate speech moderation, I surmise that Elon is hoping to reach Level 11 of this platform game. https://t.co/aznneWwYTv
2022-12-07 19:27:48 RT @togelius: Extremely unpopular opinion: earning a living through artistry in any domain (paintings, movies, dance, novels, etc) has alwa…
2022-12-07 19:08:22 Humanity has hundreds of millennia of experience in designing reward functions to train intelligent learning agents. It's called children education. Also law and justice. https://t.co/cnPZnzUGJc
2022-12-07 19:05:06 RT @KyleCranmer: @itamblyn This quote from Mark Twain is quite relevant to the human vs AI/transformer discussion https://t.co/oPr4kIaEmI
2022-12-07 13:50:13 RT @DrJimFan: Excited to go to NeurIPS conference tomorrow! It's an annual gala for AI. Many revolutionary ideas debuted here, like AlexNet…
2022-12-07 13:38:06 @48_65_6c_70 FAIR practices open science: everything is published and open sourced. Much of it gets developed and deployed in products by Meta and many others. OpenAI mostly makes flashy demos.
2022-12-07 12:16:09 @ttnguyenho Unlimited supply of naïveté at the Pessimists Archive. One of numerous examples: photography was going to destroy the fabric of society. https://t.co/fQC2LjYwS3
2022-12-07 23:51:13 RT @polynoamial: The CICERO Diplomacy AI team will do a Reddit AMA on r/machinelearning (@slashML) tomorrow (Thursday) starting at 10am PT.…
2022-12-07 23:29:19 RT @MetaAI: Don’t miss it, tomorrow the team will be answering your questions on Reddit — our #CICERObyMetaAI AMA starts at 10am PT. Ask us…
2022-12-07 23:23:55 @erikbryn Protein folding. https://t.co/zATmNZXKGE
2022-12-07 23:21:56 Combining LLMs with search engines for question answering. From a startup founded by NYU alumni. https://t.co/FqJpdXK5kA
2022-12-07 23:11:48 @suryafyi @Plinz I didn't decide to make it public. It was standard FAIR procedure: we release stuff (source code is available). And I didn't decide to take it down either. This was decided by the Galactica team in reaction to the vitriol on Twitter. It was all pretty depressing for them.
2022-12-07 23:07:04 @medievalsilence How so?
2022-12-07 22:33:51 @GaryMarcus @hayabhay @ZDNET @TiernanRayTech So, no paper then.
2022-12-07 22:21:56 @GaryMarcus @hayabhay @ZDNET @TiernanRayTech Please list all of your published papers that describe an algorithm.
2022-12-07 22:01:27 @hayabhay @GaryMarcus Gary has an algorithm now? Clearly this chatbot is hallucinating
2022-12-07 21:58:43 RT @stevejarrett: Yes…and I think there’s a clear need for expert moderation as ChatGPT can be very compelling BUT sometimes wrong or contr…
2022-12-08 22:30:08 @notSoJunkDNA @GaryMarcus @ZDNET @kenneth0stanley Your inference is correct, despite the incorrect causality assumption.
2022-12-08 22:27:21 @ubiquity75 These problems are never just "solved", because (1) they evolve all the time &
2022-12-08 22:09:34 @blamblamtheman @ESYudkowsky That only happens in countries that allow their political process to be corrupted by money. Like the United States of America.
2022-12-08 22:07:10 @roydanroy @ESYudkowsky When you find problems, you fix them. I'm not entirely sure which "debacle" you are referring to, but if it's attempts by Russia and others to corrupt the electoral process, this was quickly fixed to avoid a repeat during the French and German elections a few months later.
2022-12-08 22:02:54 @joe_shabadoo You might be on to something here
2022-12-08 21:56:43 @GaryMarcus @ZDNET @kenneth0stanley I co-authored papers in - Physical Review Letters. But that doesn't make me a physicist. - Noema. I'm not a philosopher. - Cell. No biologist. - SIAM. No mathematician. - Genome biology. No geneticist. - NBER. No economist. - Clinical Neurophysiology &
2022-12-08 21:27:48 @Namenode5 It's just Twitter. You can have nice things on Facebook and LinkedIn.
2022-12-08 18:54:12 Weirdest Twitter Drama of the Day: CS/AI credentials of MIT AI researcher &
2022-12-08 13:40:41 @ESYudkowsky We make laws and regulations for corporations (call it reward shaping), which are organized to have superhuman collective intelligence.
2022-12-08 13:23:01 RT @ValaAfshar: In 1983, two professors debated the future relevance of home computers. Both experts shared valid talking points
2022-12-08 05:14:42 RT @ntsnaleatorias: Scholarships for 200,000 researchers and 14,000 residents were not paid today by CAPES. Who supports national science an…
2022-12-08 04:52:12 RT @MetaFrance: @Meta announces a $2.5 million investment to support independent academic research on the challenges…
2022-12-08 04:49:49 RT @OpenCatalyst: The concluding event for this year's Open Catalyst Challenge at #NeurIPS2022 will take place online on Dec. 8, 1pm PT. W…
2022-12-08 04:45:18 RT @rejuvyesh: Excited to opensource PDEArena: a modern, scalable PDE surrogate learning framework. With over 20 models and many different…
2022-12-09 00:07:35 RT @DKThomp: In 2022, we - reversed organ death in pigs - made the first embryo from stem cells - made a pan-influenza vaccine - saw the b…
2022-12-08 23:57:15 One thing https://t.co/4qtFrzcULW could help with. https://t.co/Pzz9RcRN4w
2022-12-08 23:47:26 @GaryMarcus @ZDNET @kenneth0stanley If I wrote a book entitled "Rebooting Neurology" in which I explained how neurology has "run into a wall", I would not expect it to be taken seriously by the neurology community.
2022-12-08 23:18:21 @roydanroy @ESYudkowsky That one was fixed years before the CA scandal surfaced. The FB Social Graph API was shut down precisely because of privacy concerns. Turns out developers, and even academics like Aleksandr Kogan, could not be trusted to not breach their contract and misuse user data.
2022-12-09 02:48:14 @primrecur @GaryMarcus @ZDNET @kenneth0stanley Stochastic Gradient Descent
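The algorithm named in the reply above, in its most basic form, can be sketched as follows. The quadratic-loss gradient and toy data are my own hypothetical illustration, not anything from the thread: SGD updates the weights using the gradient computed on one randomly chosen example at a time.

```python
import random

def sgd(grad, w, data, lr=0.01, epochs=100):
    """Plain stochastic gradient descent: for each epoch, visit the
    examples in random order and take one gradient step per example."""
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            w = w - lr * grad(w, x, y)
    return w
```

For example, fitting a one-parameter linear model y = w*x with squared loss, the per-example gradient is 2*(w*x - y)*x, and on data generated with w = 2 the iterates converge to 2.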
2022-12-09 01:56:34 Curiouser and curiouser: Said psychologist says on LinkedIn that his Twitter account was hacked 4h ago and that he is locked out of Twitter, suspecting a failure of 2FA or a Twitter inside job.
2022-12-09 01:50:01 @vayuvegula @GaryMarcus No idea. But the suggestion that I have anything to do with it is ridiculous. And his suggestion that it was some sort of Twitter inside job is squarely in conspiracy theory territory.
2022-12-09 01:34:11 @grbradsk Haha! Everyone needs a little bit of Gary in their threads.
2022-12-09 01:27:01 Err https://t.co/9NoM8Xhaop I mean.
2022-12-09 01:26:43 @Money17251696 Oh, that's just because it's actually https://t.co/9NoM8Xhaop