Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Not Available
The Expert's latest messages:
2024-12-28 01:31:28 Seems crazy that the tipping point that made editors of this Elsevier journal resign en masse was that Elsevier switched to an AI editor which didn't italicize and capitalize properly THAT was Elsevier's unacceptably evil crossing of the Rubicon https://t.co/t1Ch1iR0Ut
2024-12-14 20:04:25 RT @MavorParker: I am working on something new at Vmax. We are building agents that leverage the inherent structure in large company datase…
2024-12-03 02:03:56 @mbeisen @VivekGRamaswamy @elonmusk @DOGE How much of the $50b nih budget goes to the $20b publishing industry?
2024-11-30 05:06:05 RT @DarrigoMelanie: Donald Trump’s Project 2025 has always planned to strip the U.S. for parts and sell them off to the highest bidder, enr…
2024-11-29 05:44:45 Moving to that other site. See you on the other side! https://t.co/OZML8cSqhZ
2024-11-29 05:25:38 RT @SonjaBHofer: Cool study led by Mitra Javadzadeh and @marineschimel on the function of inter-areal communication in the neocortex, using…
2024-11-28 23:45:05 @whatevellyn Sadly no. Would love to get funding for that as well though!
2024-11-27 18:12:02 It's been a pleasure! https://t.co/RrCYcEGLcE
2024-11-26 22:38:11 RT @CoryMillerMarmo: The BRAIN Initiative has been an engine of innovation but is now fighting for its survival. Here we discuss why BRAIN…
2024-11-26 22:37:58 RT @HongkuiZeng: The BRAIN Initiative has driven unprecedented progress in neuroscience, but a 40% funding cut threatens this momentum. Res…
2024-11-26 01:07:55 RT @RBReich: Trump is now the first incoming president not to sign the federal transition funding agreement. That means he can raise unlim…
2024-11-26 00:59:06 RT @CSHL: Animals are born with innate abilities, such as spiders spinning webs. But where do these abilities come from? CSHL’s @TonyZador…
2024-11-23 19:09:38 RT @DisavowTrump20: It’s more important than ever that we have strong, independent public servants in the judiciary to protect our democrac…
2024-11-23 02:28:26 @ArulaRatnakar @bdanubius Yes every year. Definitely apply next year
2024-11-23 02:19:53 RT @bdanubius: Are you a graduate student in NeuroAI? Check out this fantastic summer internship at CSHL, where you can work with @TonyZad…
2024-11-23 02:16:01 @joshdubnau Cherry picking. The net effect on "all voters" was +4 in PA and +2 in MI. Of course we have no idea how "enthusiasm" here translated to turnout or votes.
2024-11-21 21:57:49 Reminder - neuroAI summer intern program at CSHL For AI grad students who want to learn more neuroscience https://t.co/gUOUVBXKV6
2024-11-20 21:06:12 Cshl innovators symposium https://t.co/nNCipivBwJ
2024-11-20 21:05:40 @sheacshl @CSHL Wait, what? Are you *still* here after 15 years?! Gonna have to have some words with that clerk
2024-11-20 01:47:36 RT @HistedLab: "Starting in the 80s, that investment in basic science began to pay off, driving a revolution in the molecular biology of ca…
2024-11-19 21:51:31 RT @WiringTheBrain: Super convo with @TonyZador on Neuro-AI - development, FTW!!! https://t.co/8NaYSoVxw2
2024-11-19 02:25:49 RT @anders_aslund: The big problem with the US political system is that billionaires are allowed to buy the elections. That is usually call…
2024-11-19 02:19:59 RT @AToliasLab: Think government ROI is low? In the ‘60s, #NIH-funded researchers Hubel &
2024-11-18 23:37:52 RT @SuryaGanguli: Year over year ROI from government investment in research is 30-100 percent. Far more than the stock market and most of t…
2024-11-18 23:34:51 RT @AToliasLab: Think government ROI is low? In the ‘60s, #NIH-funded researchers Hubel &
2024-11-17 17:30:36 RT @RBReich: A handful of billionaires now have unprecedented control over banking, the food we eat, the health care we can access and, now…
2024-11-17 13:38:07 RT @NicoleCRust: Terrific piece! Summarizing a recent report: Every dollar of NIH research funding doubles in economic returns https://t.c…
2024-11-15 14:14:45 RT @DavidAMarkowitz: The $3B Human Genome Project was started under a Republican administration and later acknowledged by the Obama admin t…
2024-11-14 22:17:43 @elonmusk https://t.co/HFqw88ffem https://t.co/IIsLW3Fo24
2024-11-14 00:50:42 RT @NicoleCRust: @TonyZador Yes!! So few know about this. And likewise, how studying sea snail venom has led us to new not-opioid drugs fo…
2024-11-13 22:42:03 Spread the word! https://t.co/4WjpuJtgKy
2024-11-13 21:31:20 why are we funding gila monster venom research? (hint: it led to the development of Ozempic) Exendin-4, found in the venom, is a GLP-1 receptor agonist, meaning it can bind to GLP-1 receptors in the human body. https://t.co/0mhZPyZ55J
2024-11-13 17:01:02 @hubermanlab Govt needs to provide funding for the long tail of unlikely and surprising discoveries. And NIH should adopt more HHMI-like funding, for "people not projects"
2024-11-13 16:57:24 @hubermanlab Many of the biggest breakthroughs in science were completely unpredictable. eg PCR built on curiosity-driven discoveries a decade earlier about how bacteria could survive in really hot water https://t.co/TUWR5ATw5i https://t.co/5o10Xir2Sn
2024-11-10 23:14:15 "Journalists will learn to fear tech bros...fusion of state and commercial power in a ruling oligarchy elite..Musk spouts the Kremlin’s talking points and chats to Putin. The chaos of 90s Russia is the template
2024-11-06 05:06:17 @j_d_monaco @TrackingActions i agree. i read him today and disagree with a lot of what he said. But disagreeing with him in no way detracts from the tremendous influence he had on a generation of computational neuroscientists. All the more impressive given he died at age 35
2024-11-06 02:26:35 @TrackingActions I dont think debating Marr's influential 3 levels (rather than refreshing CNN election results) is dissing him I became a computational neuroscientist because I stumbled upon Vision in a book store as I was deciding what to apply for in grad school. It changed my life
2024-11-05 21:22:07 @jbimaknee @KordingLab @SebastianSeung Exactly
2024-11-05 13:27:56 @hb_cell @SebastianSeung ?
2024-11-05 01:56:50 @KordingLab @SebastianSeung Hardware lottery implies it's often not useful to separate algo from implementation bcs different hardware favors different algos Eg The modern success of ANNs is a direct result of GPUs
2024-11-04 22:17:22 This paper on the "hardware lottery" nicely articulates why i've always had problems with the putative independence of the algorithmic and implementational levels @SebastianSeung https://t.co/eSUGepQQyr https://t.co/FINBWYJN0M https://t.co/vxoFjfUDNs
2024-11-04 13:20:39 Make Daylight Saving time permanent! https://t.co/gcNBNPQLKk
2024-11-04 12:19:26 @anisomorphism @benlandautaylor Do you have data? That is certainly not my impression of incoming grad students ( in neuroscience and AI) who seem considerably more prepared now than when I started 2 decades ago Same with high school students applying to my lab
2024-11-04 02:23:54 @benlandautaylor How do you disentangle grade inflation from the hypothesis that incoming students, at least at the most competitive schools, are better prepared/brighter and harder working? Certainly it seems that getting into "top" universities is much harder than a few decades ago
2024-11-02 21:01:35 RT @littmath: disproving the infinite monkeys theorem by observing that there are in fact only finitely many monkeys
2024-10-29 21:15:35 RT @davidfolkenflik: UPDATE: The number of cancellations since Friday’s revelation now exceeds 250,000, NPR can report. That represents a…
2024-10-29 16:40:18 @emiliemc @JeffBezos In principle, not backing specific candidates seems completely reasonable But in this case the timing is highly suspicious and looks like pure cowardice ie anticipatory capitulation to avoid retaliation by an authoritarian https://t.co/thmRmpp93G
2024-10-29 13:10:15 @wichmaennchen @balazskegl
2024-10-29 13:08:55 @balazskegl or maybe there are multiple constraints that only become fully apparent once you try to replicate the full developmental process ? As @chklovskii and others have pointed out, wirelength is a strong constraint that cannot in general be ignored https://t.co/n1gpuG2O8k
2024-10-29 12:19:51 thx for this spectacular and highly informative defense of the laryngeal nerve! nonetheless, as pointed out by many authors, wirelength minimization is an important constraint in wiring up the nervous system. And this violates it. Hence, no slander! https://t.co/UiGaY0u8qQ https://t.co/4HHWMNoS76
2024-10-26 12:41:22 RT @IlvesToomas: When major newspapers begin pre-emptively to self-censor you can no longer trust their content. My parents saw this when…
2024-10-26 01:54:39 RT @joncoopertweets: This is probably my favorite segment EVER by @jaketapper. https://t.co/zVB1TyY3mh
2024-10-26 00:48:14 RT @TimothyDSnyder: Do not obey in advance. Lesson 1, first page, of my pamphlet "On Tyranny" https://t.co/FWIDPecber
2024-10-25 17:56:12 RT @moultano: Seeing big national newspapers decline to make a presidential endorsement because they don't want to risk Trump's retaliation…
2024-10-17 19:50:11 @gershbrain LFPs are often largely driven by dendritic currents. Ultimately these currents arise from spikes, maybe from distant inputs So LFP activity in area X need not "directly" reflect spikes in X but could be input from area Y The situation is even more complicated with EEG and fMRI
2024-10-17 16:30:22 @KordingLab @PessoaBrain Like maybe the stomatogastric ganglion? Marder et al showed that while many solutions "work," the average solution does not work. So one question for more complex animals is "How would you know when you've succeeded?" https://t.co/Hd09xfBrZE
2024-10-17 16:05:46 @KordingLab @PessoaBrain Wasn't that the vision of the Blue Brain project?
2024-10-15 12:48:09 @briandepasquale i love the idea of Gamestopping Biological Cybernetics' impact factor. Of course, this would end up happening at an ent-ish (i.e. academic) pace, years rather than days. So sometime around 2032 it would start to be competitive, and then the real magic will happen
2024-10-15 12:43:21 @briandepasquale And it's open access, or rather hybrid. But current charges are only $3800, which by today's standards isn't too bad.
2024-10-15 12:19:35 @briandepasquale So should we all agree to start publishing in Biological Cybernetics again? That was a top 5 journal for me as a grad student but I don't think I've read a paper there in decades (is it even still around?)
2024-10-14 20:51:19 @gershbrain @ylecun I just want to add that incremental progress is sometimes enough. Eg based on my very limited understanding, chips have gotten many orders of magnitude faster over the last decades just by pushing the technology a little harder each generation.
2024-10-14 20:31:46 @gershbrain @ylecun I think the take-home message for your students is that if they want to make incremental progress engineering surely suffices But if they don't think current approaches are enough, one good approach might be to try to get a deep understanding of neuroscience
2024-10-13 17:10:43 Almost all terrestrial vertebrates by weight are humans or human-associated (livestock). Almost all wild terrestrial vertebrates are gone This is about half of all terrestrial animal biomass https://t.co/7p7cwF4smI
2024-10-11 20:48:50 RT @ryosy736: Paper is out! @NatureComms Mapping dense neural projections at the single-cell level is challenging—but we tackled it with ax…
2024-10-11 11:16:11 RT @MarkJacob16: Dear New York Times, Your headlines are a disaster. Such as this one in which you depict Trump’s racism as “his long-held…
2024-10-09 17:36:43 @JSheltzer Well Hopfield was affiliated with the OG Bell Labs
2024-10-09 03:38:15 RT @naoshigeuchida: Our Department @MCB_Harvard is hiring! This is a broad search. I'd be excited to see applications from many #neuroscien…
2024-10-08 17:23:38 @kendmil @pfau if only!
2024-10-08 15:14:53 RT @kendmil: Hopfield &
2024-09-30 21:48:28 @JSheltzer i’m curious: how good a predictor are these? seems like there might be enough data to put together a simple model.
2024-09-25 23:33:33 @arjunrajlab So by this definition chromaffin cells (in the adrenal glands in the kidney) &
2024-09-15 00:39:12 PREPRINT ALERT!! "Selective expansion of motor cortical projections in the evolution of vocal novelty" or: ""How the Singing Mouse Got Its Song" https://t.co/fmvHayrgvp
2024-09-07 17:22:44 RT @ItsDeanBlundell: This happened. This week. https://t.co/jVC3jAGYOt
2024-09-04 19:00:18 @paperpile I have been trying to switch (which i thought i did a few weeks ago) and i have now been stuck on this hopeful message for about 15 minutes. I quit and restarted once. Is this normal? How long should i keep waiting? Suggestions? https://t.co/bU8VePdaCG
2024-08-31 19:44:47 @HistedLab are there society journals that have full time professional editors? if not why not? I think professional editors do provide added value.
2024-08-31 19:32:35 RT @HistedLab: This is a great point. Academics should get together, through professional societies, and certify the journals that do real…
2024-08-19 01:10:37 @djbutler09 @francoisfleuret If spiking is sparse (<
2024-08-18 21:33:39 @djbutler09 @francoisfleuret Exploits
2024-08-18 14:17:12 @francoisfleuret Indeed, synaptic transmission in the cortex is highly unreliable. At some synapses, the signal fails to propagate from one neuron to the next 90% of the time. The brain exploits this for computation
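The point about unreliable synapses can be illustrated with a toy simulation. This is a minimal sketch, not the author's model: the release probability (0.1), input counts, and trial numbers are all illustrative. Even if each synapse fails 90% of the time, a neuron pooling many of them can still reliably tell a weak input volley from a strong one.

```python
import random

def postsynaptic_drive(n_inputs, p_release, rng):
    # Each of n_inputs presynaptic spikes crosses its synapse
    # with probability p_release (0.1 here, i.e. a 90% failure rate).
    return sum(rng.random() < p_release for _ in range(n_inputs))

rng = random.Random(0)
# Pooling over many unreliable synapses: the success count is binomial,
# so its mean scales with the number of active inputs.
weak = [postsynaptic_drive(100, 0.1, rng) for _ in range(1000)]
strong = [postsynaptic_drive(200, 0.1, rng) for _ in range(1000)]
print(sum(weak) / 1000, sum(strong) / 1000)  # means near 10 vs near 20
```

The separation between the two conditions survives the 90% failures because averaging over synapses suppresses the noise.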
2024-08-15 04:29:25 @BOlveczky but i think most of the great breakthroughs depended on both the experimental logic and a relatively recent technological advance Science is dense in scientists, so especially these day most things that can be discovered with current techniques are quickly discovered
2024-08-15 01:33:56 @BOlveczky Hodgkin &
2024-08-04 03:47:31 RT @JamesSurowiecki: Just a disgusting display of authoritarian bullshit by cops in Watonga, Oklahoma, arresting a man who had committed no…
2024-07-26 15:15:38 RT @MeghanMcCarthy_: The vibes I’m getting from @JDVance fixation on how many kids people have are very Romania in the 70s. https://t.co/3m…
2024-07-26 15:12:59 @lpachter Do you believe the issues you raised are the result of a poorly written paper? Could they have done better and if so how? Or are you suggesting that it is impossible to write a good paper with this much data? That such projects are simply ill-conceived?
2024-07-24 00:57:57 RT @mbeisen: The real question is when are prosecutors going to charge scientific journals and scientific societies under RICO for their de…
2024-07-22 17:27:15 @neuralreckoning Uses timing for what? I think you’d have to carefully define the objective function. Given the objective function, you could add noise to spike times and look at its degradation as a function of noise you could then say like "spike precision should be X ms or better”
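The jitter procedure sketched in the reply above can be mocked up in a few lines. A minimal sketch under hypothetical conditions of my own choosing: two stimuli distinguished only by a 2 ms spike-time difference, a simple midpoint decoder, and Gaussian jitter of increasing width. Decoding accuracy as a function of jitter then tells you the timing precision the code requires.

```python
import random

def spike_time(stimulus, jitter_sd, rng):
    # Two stimuli encoded purely by timing: A fires at 10 ms, B at 12 ms.
    base = 10.0 if stimulus == "A" else 12.0
    return base + rng.gauss(0.0, jitter_sd)

def decode(t):
    # Midpoint decoder: earlier spike -> A, later -> B.
    return "A" if t < 11.0 else "B"

def accuracy(jitter_sd, trials=10_000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        stim = rng.choice(["A", "B"])
        if decode(spike_time(stim, jitter_sd, rng)) == stim:
            correct += 1
    return correct / trials

for sd in (0.1, 1.0, 5.0):
    print(f"jitter {sd} ms -> accuracy {accuracy(sd):.2f}")
```

Accuracy is near-perfect when jitter is small relative to the 2 ms code and decays toward chance as jitter grows, which is exactly the degradation curve the tweet proposes reading "required precision" off of.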
2024-07-20 06:44:52 Amazing experience with ChatGPT-o today During a spoken conversation with someone who speaks grammatically perfect English but with a Hungarian accent, ChatGPT suddenly switched to Hungarian. It must have somehow detected his Hungarian accent Bizarre
2024-07-19 01:06:49 Is there list of predatory journals? Is Mary Ann Liebert legit?
2024-07-18 22:13:05 RT @ankurhandos: Our new exciting work on hand arm grasp-anything is out now. All trained in sim with RL. Direct pixels to action grasping…
2024-07-17 21:24:06 @Nancy_Kanwisher @KanwisherLab @realAKoulakov @g_lajoie_ @TrackingActions @alvanoe @computingnature @AToliasLab @dyamins @awiltschko @evadyer @s_y_chung @giacomoi @maxsbennett @MatthiasBethge @evadyer @GuangyuRobert deadline extended -- please retweet https://t.co/VP3AjWABnX
2024-07-17 21:15:49 https://t.co/rbjOqUqiPK
2024-07-17 21:15:48 NAISys deadline extended! https://t.co/hjevLttgfy
2024-07-11 02:58:51 RT @doctorveera: Finally, we now know exactly which region and cell types in the brain are mediating the actions of GLP1R agonists. As ma…
2024-07-11 01:33:03 RT @doristsao: As our country debates the capabilities of a particular neural network, the importance of fundamental research in neuroscien…
2024-07-11 01:32:23 RT @NicoleCRust: OpEd in @InsideSourcesDC by @doristsao, @TonyZador and I laying out the case for support of the BRAIN initiative. On the c…
2024-07-10 20:57:26 RT @bobcesca_go: -I count 8 stories about Biden's age and candidacy on the front page of today's NYT (digital). -Nothing about Trump's pled…
2024-07-10 16:45:38 ****Abstracts due July 12**** NeuroAI event of the year!!! Avoid FOMO! Don't be left out https://t.co/7Zxrjna5hT
2024-07-05 23:08:05 RT @GeorgeTakei: If you think a nightmarish, unconstitutional round up of innocent people, merely suspected of being enemies of the state,…
2024-07-05 23:04:50 RT @AshaRangappa_: So Trump’s campaign now believes that greater public awareness of Project 2025 is a political liability. That means this…
2024-07-05 20:15:32 RT @Will_Bunch: Thrilled to see NYT, in last week, wrote 192 stories about Project 2025 - impact on public schools, nuclear war, criminal j…
2024-07-05 20:14:16 RT @ammarmufasa: Important point - Trump's own SuperPAC is running ads highlighting Project 2025. https://t.co/5X4KtCehys
2024-07-05 19:01:11 The radical Project 2025 agenda. https://t.co/smSm2KcdIA
2024-07-04 21:46:18 RT @TheTNHoller: CARTOON OF THE DAY https://t.co/i8clTXEN7T
2024-07-04 03:37:43 RT @ZachZeisler1: Our comparative MAPseq study is out at Current Biology today! tl
2024-07-02 13:36:59 RT @adeelrazi: Highly recommended place to submit and attend!
2024-07-02 11:59:38 NAISys - NeuroAI Meeting at CSHL **** ABSTRACTS DUE July 12 **** https://t.co/WU30rqNPBq
2024-07-02 11:57:51 NAISys - NeuroAI Meeting at CSHL **** ABSTRACTS DUE July 12 **** @tyrell_turing @doristsao https://t.co/U1pUK0CxFz
2023-05-22 20:17:55 @InvariantPersp1 @GaryMarcus believing in the wrong god...exactly. in AI, another analogy would be: AI could have discovered something (eg a fix for global warming) but we failed to discover it in time bcs we artificially slowed AI progress.
2023-05-22 19:38:54 @GaryMarcus this is basically a version of Pascal's wager...we dont know the probability of going to Hell if you dont believe in God, but the outcome is so bad that the rational thing to do is to believe in God. What's the counterargument?
2023-05-22 04:48:14 @moyix @ayirpelle GPT4 seems to have no trouble counting letters https://t.co/YMdXkdqfAD
2023-05-21 13:32:07 “I have multiple programs set up with a global initiative to establish ‘cheating credits’ to maintain ‘sex zero’” https://t.co/QL8fIbwbCF
2023-05-20 22:01:56 Crazy! A pair of bees cooperating to open a bottle of Fanta I wonder if this is somehow related to some "natural" behavior or if this is a one-off that this Bonnie &
2023-05-20 05:05:54 @lukesjulson how would this hypothetical new technique differ from existing spatial transcriptomic methods like merfish or BARseq which can already probe hundreds of genes?
2023-05-20 03:03:59 @AndrewHires so true. (and i am often in the 35% who dont pay attention to what is written and so i push when i should have pulled)
2023-05-20 02:44:54 ChatGPT gets this wrong, GPT4 gets this right. "A glass door has ‘push’ written on it in mirror writing. Should you push or pull it and why?" https://t.co/E8Nbvuv6e5
2023-05-19 19:31:46 @anqi_z bingo!
2023-05-19 19:26:56 @AndrewHires Thanks but as a Southern Californian not sure why you need a fancy weather app, though I guess sometimes it's useful to know whether it's gonna be sunny &
2023-05-19 19:22:21 To clarify: (B) below is a special case of (A). My pet peeve is really that weather forecasters know whether there is a 50% chance that a hurricane is gonna dump 12 hours of rain but the app doesnt have a way to distinguish that from "it will rain for an hour at some point" https://t.co/fYMLiUoaOO
2023-05-19 19:17:45 @tyrell_turing OK, but the weather forecasters know which it is, ie they know whether there is a 50% chance that a hypothetical hurricane is about to dump 12 hours of rain on us, right? The issue is that the app doesnt have a good way to express correlations in the hour-by-hour prediction
2023-05-19 19:13:54 My pet peeve on weather apps: What does "50% chance of rain" mean? Does it mean: (A) 50% chance it will rain at some point (for maybe an hour) at some point during the day (like a passing thunderstorm)? (B) 50% chance it will rain all day? These are pretty different https://t.co/Ve9I32niMY
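The difference between readings (A) and (B) above can be made concrete with a toy simulation, assuming two hypothetical forecast models with identical 50% hourly marginals but opposite correlation structure (all names and numbers here are illustrative):

```python
import random

HOURS = 12

def all_or_nothing_day(rng):
    # Model A: one coin flip for the whole day. Either it rains
    # every hour (a hurricane parked overhead) or not at all.
    wet = rng.random() < 0.5
    return [wet] * HOURS

def independent_hours_day(rng):
    # Model B: each hour is an independent 50/50 flip
    # (scattered passing showers).
    return [rng.random() < 0.5 for _ in range(HOURS)]

def summarize(day_fn, trials=20_000, seed=1):
    rng = random.Random(seed)
    days = [day_fn(rng) for _ in range(trials)]
    hourly = sum(sum(d) for d in days) / (trials * HOURS)
    any_rain = sum(any(d) for d in days) / trials
    return hourly, any_rain

for name, fn in [("all-or-nothing", all_or_nothing_day),
                 ("independent", independent_hours_day)]:
    hourly, any_rain = summarize(fn)
    print(f"{name}: hourly marginal {hourly:.2f}, P(any rain) {any_rain:.2f}")
```

Both models report "50% chance of rain" for every hour, yet the chance of getting wet at some point in the day is about 50% in one and essentially 100% in the other, which is why the hourly marginal alone underdetermines the forecast.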
2023-04-14 19:49:25 @joshdubnau In case you want to try some camel milk at home... https://t.co/qqHIIZVWcX https://t.co/3H21xJOQoq
2023-04-14 03:38:27 @GaryMarcus GPT4 gets ectopic in top 4 in the differential. And the previous 3 are all also serious enough to warrant a trip to the ER https://t.co/9D8XkWnOCl
2023-04-14 03:36:52 @GaryMarcus i am puzzled by his claim that ChatGPT’s "worst performance" (missing ectopic pregnancy in the top 6 on the differential, but including appendicitis and ovarian cyst) could have killed her if she had self diagnosed. Both would have definitely warranted trip to ER, so not really
2023-04-13 21:48:50 @GaryMarcus @sir_deenicus @bitcloud @ylecun yes and no. For gene KOs 100% agree. But even though potent toxins "mess us up", most operate by highly specific binding to a receptor. so most variants of a great toxin are less great. also asking again: why is a x2 more potent toxin more particularly worrisome??
2023-04-13 21:38:43 @GaryMarcus @sir_deenicus @bitcloud @ylecun This is not serious. there are AFAIK no known LLM-generated zero-shot novel toxins more powerful than botulinum toxins so not false (I'm sure LLMs could also generate 40,000 "possible" variations of the cancer drug Lenalidomide in 6 hrs..validating them is what's hard)
2023-04-13 21:19:18 @GaryMarcus @sir_deenicus @bitcloud @ylecun If it's powerful enough to zero shot generate a novel toxin, it's powerful enough to generate a novel cancer/arthritis treatment. Still not clear why novel toxins are scarier than ordering botulinum toxin from sigma which many Neuro labs do routinely
2023-04-13 17:42:43 @docgotham @GaryMarcus @ylecun Indeed, 1A guarantees the right of the book to exist. But people can and have been prosecuted for suspicion of intent *to use* the information for *criminal purposes* We should focus interfering w/harmful actions not potentially harmful knowledge
2023-04-13 17:38:48 @ruben_we @GaryMarcus @ylecun Indeed, the traditional way is to ask a bacterium (Clostridium botulinum) to do it for you. https://t.co/VAJX035wMS
2023-04-13 17:34:53 @mcdonalds_tim @GaryMarcus @ylecun that's a pretty niche use case. If you really want ideas for how to get away with homicides just read more Agatha Christie novels
2023-04-13 16:03:10 @GaryMarcus @ylecun yes, but as I asked earlier, why do you want a novel synthesis when there are plenty of extremely toxic compounds readily available from Sigma? coming up with synthesis for a novel toxin that has some special properties not available in existing toxins is a research program
2023-04-13 15:23:10 @GaryMarcus @ylecun Uh oh https://t.co/nwqm8hMQxw
2023-04-13 15:14:09 @GaryMarcus @ylecun Figuring out how to make a novel toxin is pretty hard but why bother when Sigma sells plenty off the shelf? Still not seeing the need for choking off easily Google-able knowledge from GPT
2023-04-13 13:49:09 @daniel_eth I can't actually think of many situations where limiting popularization of publicly available info (as opposed to secret info like nuke codes) is the way to go
2023-04-13 13:45:43 @daniel_eth I think making novel bio weapons is pretty hard. you would need access to a biology lab and significant molecular biology skills. once you have that level of expertise not sure that gpt adds much
2023-04-13 12:57:46 If we want to restrict behavior like "synthesizing codeine," our primary approach shouldn't be to limit knowledge. There are better and more effective choke points downstream like restricting access to reagents or just penalizing the crime https://t.co/hnnur8CVMa
2023-04-12 14:00:15 @EliSennesh I think his point is that, as physicists say, the AI alignment problem can be reduced to a previously unsolved problem
2023-04-12 13:28:05 @PeterSherwood "our childhood pet monkey"???
2023-04-12 04:53:44 https://t.co/MfOWe9cGyg
2023-04-12 04:53:43 "where I am worried right now... is the question of, how do you solve the alignment problem, not between an A.I. system we can’t predict yet and humanity...but in the very near term between the companies and countries that will run A.I. systems and humanity?"
2023-04-12 04:53:42 "we have an alignment problem, not just between human beings and computer systems but between human society and corporations, human society and governments, human society and institutions." From Ezra Klein's podcast https://t.co/be1DbfqL77
2023-04-11 21:39:02 @TvanKerkoerle Certainly a human brain is a better model of human brain than is a nonhuman brain. But there are so many things we don’t understand about brains in general and it’s a lot easier to study rodents
2023-04-11 20:58:38 @TvanKerkoerle Yes but it’s so hard not to be fooled into thinking we understand human brains by the fact that we each have one.
2023-04-11 20:40:47 @TvanKerkoerle Yeah but by the same token I feel uneasy when people invoke introspection-based folk psychology to explain human behavior
2023-04-11 20:37:28 @anne_churchland @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun Yes I suspect the studies they published, which helped change laws for drunk driving, airbags, seatbelts, helmets etc, probably saved more lives than any biomedical discovery I could possibly make
2023-04-11 18:36:10 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun I think LLMs are teaching us the extent to which "understanding is part of making inferences" I had a project idea over the weekend which normally i would have bounced off a postdoc but which GPT4 was able to critique and help me refine. It then summarized our convo in latex
2023-04-11 18:32:49 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun side note: My father worked for IIHS in the 1970s &
2023-04-11 18:27:36 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun we agree that LLMs dont "understand." But i'm not sure why that is relevant. GPT4 came up with a very nice list of the arguments against seatbelts. i'm not sure i could have done better, and certainly not in 1 minute. https://t.co/YFeJ8dObUh
2023-04-11 16:06:43 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun I would be very disappointed if all LLMs were chronically hamstrung and prevented from constructing effective Devil's advocate arguments
2023-04-11 13:19:29 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun not sure if you chose vaccines bcs they're politically charged? A lot of Americans disagree w/what i consider to be the right answer. Would a more neutral example like "Given patient X w/symptoms ABC, would a reasonable next step be Z?" also count?
2023-04-11 13:01:15 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun can you give an example of good (or bad) inferences in this context? I dont understand what you mean here.
2023-04-11 12:44:23 Summary: This is a fascinating and timely study on banana-peeling by elephants. Kudos! Minor comment: I'm skeptical that it takes a skilled human 25s to peel a banana. In fact, even the human in the video (at 1' 20") peels <
2023-04-11 04:25:28 @kendmil @GaryMarcus @mark_histed @ylecun i think in most setting driverless cars still arent safer. But i completely agree that even if on avg they were to become safer, they will remain unacceptable (and a huge legal liability) if every once in a while they mow down a child under conditions when a human would not
2023-04-11 03:54:20 @GaryMarcus @mark_histed @kendmil @ylecun i said 2 years not 1 but ok we'll check back (probably wont be on this site though...doubt i'll still be on that by then)
2023-04-11 03:47:37 @GaryMarcus @mark_histed @kendmil @ylecun autonomous driving had a long tail of unsolved problems that werent gonna get solved by brute force here, there just needs to be a single innovation. The fact that ChatGPT can even identify the facts that need to be verified is a promising start
2023-04-11 03:33:50 @GaryMarcus @mark_histed @kendmil @ylecun whether it's GPT-5 or 9 or some hybrid or something else, i think in 2-3 yrs we'll have models that can distinguish fact from nonsense, &
2023-04-11 03:21:04 @mark_histed @kendmil @GaryMarcus @ylecun what do you mean by "knows truth"? I think it is not unreasonable to demand that next-gen LLMs should stop just making stuff up...they should really know basic info available from wiki, and more important they should have a reasonable estimate of their confidence about a fact
2023-04-10 17:58:59 @GaryMarcus @kendmil @mark_histed @ylecun predictions are hard, especially about the future Hard to know how this will evolve. But in tech as in bio evolution it's often easier to build on an existing success than to start over. Seems likely that building up validation etc around an LLM-like core might be a good strategy
2023-04-09 02:53:05 @mark_histed @GaryMarcus @ylecun yes it's absolutely true that LLMs currently spew the mostly likely etc etc. My claim is that there is so much incentive to fix this that some approach in the not-too-distant future will fix it...seems fixable i have not such confidence about eg driving, which remains hard
2023-04-09 01:36:08 RT @nicholdav: Would be great if reputable neuroscientists retweeted this to help offset the damage done by @hubermanlab to our field and i…
2023-04-09 01:31:57 @mark_histed @GaryMarcus @ylecun The fact that LLMs know what constitutes a fact, even if they dont get all of them right, suggests it ought to be able to do a better job learning them. That certainly wouldnt fix all LLM problems, but it would address the one that many people are complaining about today
2023-04-09 01:29:25 @mark_histed @GaryMarcus @ylecun LLMs do know what constitutes a "fact". In the example below, one claim is false (i never won a McKnight) My internet presence is enough that it has a sense of who i am, but has to "hallucinate" details It doesnt make many mistakes for eg @ylecun, whose net presence is greater https://t.co/l3GIjVsGxV
2023-04-08 17:27:54 @mark_histed @GaryMarcus I can't imagine that the problem of unreliability is insurmountable. But even once solutions are in place I expect we will not grant AI full autonomy anytime soon. Human lawyers will still need to sign off. But one partner will no longer need 5 associates to do the grunt work
2023-04-08 04:23:29 @GaryMarcus Laid off programmers, accountants and attorneys aren't going to be satisfied with UBI
2023-04-07 14:45:14 @raphaelmilliere @ylecun @Brown_NLP @LakeBrenden @davidchalmers42 @Jake_Browning00 @glupyan is there a transcript?
2023-04-07 14:41:39 @loopuleasa @hardmaru @ylecun Even when animals evolve by natural selection, it is common for goals other than self-preservation to emerge Particularly common for social animals (ants, bees, elephants) Also for mammals, which care for their young. Mama grizzlies are famously willing to take on larger males
2023-04-06 15:14:51 @Ananyo @aniketvartak @kendmil @ThomasHaigh I only just learned about Zuse from this thread, but from what I can see, even though he preceded von Neumann he did so in isolation so didn't influence him?
2023-04-06 14:24:14 exactly https://t.co/ZgBWQGRxZg
2023-04-06 13:05:21 @ylecun indeed, I learned about Zuse from this thread, but AFAIK he was isolated from the West. (Apparently due to some "historical events") So it seems he preceded von Neumann, but vN found out only much later so it had no impact (Also learned Shannon's Master's thesis was relevant)
2023-04-06 12:53:10 @kendmil btw i also learned a lot of the backstory of M&
2023-04-06 12:36:22 @kendmil Great book! I learned eg that the main motivation for VN to build the computer was to do better simulations for a bombs but i did not get a clear answer to this particular question
2023-04-06 03:47:35 @VenkRamaswamy hmm. good point. that certainly seems like another interesting potential influence.
2023-04-06 03:40:48 @schulzb589 interesting. i hadnt heard of him but i guess he built a digital computer w/o any knowledge of even Turing.
2023-04-06 03:39:57 @realAKoulakov https://t.co/sGSgcAxpAD
2023-04-06 03:18:40 So that leads me to conclude that the inspiration for using bits in modern von Neumann architecture computers came from McCulloch &
2023-04-06 03:18:39 Long before von Neumann, Boole (1815-1864) proposed Boolean algebra. But AFAIK Boole really wasnt thinking about what we today would call "computation" 2/n
2023-04-06 03:18:38 Did the idea of computing with bits originate with von Neumann (1945) EDVAC? And was von Neumann in turn inspired by McCulloch and Pitts (1943) neural network paper? Ie was computing with bits an abstraction of neural spikes? is this historically accurate? 1/n https://t.co/TgHuOIKAZ0
2023-04-04 20:17:56 Interesting take argues that the call for a "pause" on LLMs exaggerates future risks but minimizes current risks https://t.co/thcwqHTtxo
2023-04-04 15:27:25 @criticalneuro @jbimaknee I think much of what motivates the tens of $billions/yr investment in AI (and LLMs in particular) are potential commercial applications That said, I agree 100% that resource constraints are interesting and important
2023-04-04 15:19:23 @harrysapkota @dennis_maharjan
2023-04-04 15:18:24 @MatinYousefA Sadly, a great Persian postdoc candidate ultimately declined years ago bcs of visa issues (at least that's what he told me)
2023-04-03 23:07:11 @LuisHN20 @GonzaloOtazu
2023-04-03 18:28:36 @jorgefmejias sadly no
2023-04-03 17:16:22 @neuroecology @jbimaknee i would include vision processing as part of "interacting w/the sensorimotor world". It just so happens that this is one of the best studied questions in the history of AI, and we've made considerable progress since the early days of machine vision.
2023-04-03 14:58:26 @jbimaknee i think a more fundamental difference is that brains evolved to maximize the capability to interact with the sensorimotor world in real time, whereas AI has been developed to solve problems that are perceived as commercially relevant Moravec's Paradox https://t.co/akXxN6vlVY
2023-04-03 14:55:08 Japan Colombia Nepal Peru Taiwan USA Switzerland Slovakia Germany Poland Canada Portugal Pakistan Lithuania France India Argentina China Israel South Korea Dominican Republic Romania Russia Netherlands
2023-04-03 14:55:07 Present and former students and postdocs in my lab hail from at least 24 countries. (On order: the Netherlands flag) https://t.co/lNcfZFbRmt
2023-04-03 11:59:53 @gershbrain @MorseCell ANNs are a non-trivial AI system in which the architecture is *inspired* by (some) neuroscience knowledge.
2023-04-03 05:10:01 @MorseCell @gershbrain the goal of AI is to mimic the capabilities, but it's hard using just observation of the input-output function bcs it's underconstrained The inspiration behind NeuroAI is that it's much easier to eg mimic an ANN if you have its weights rather than just a finite set of I/O pairs
2023-04-03 04:27:37 @gershbrain Different people get inspiration from different sources. Not everyone finds neuroscience inspiring. But given that the goal of AI is to mimic a physical device to which we have partial access, seems not unreasonable to at least take that seriously
2023-04-03 04:25:52 @gershbrain in the original EDVAC 1945 where he defines the "von Neumann" architecture, he devotes an entire section to discussing parallels w/real neurons (via McCulloch-Pitts). This is the only paper cited in the entire report. It is pretty clear it was on his mind as "inspiration" https://t.co/cQo6ndfwYQ
2023-04-03 04:21:10 @gershbrain You seem to be arguing against slavishly copying every bio detail -- a straw man. That's not "inspiration" I think the best eg btw is ANNs, which consist of "neurons" that of course are not realistic As Mark Twain said: "History never repeats itself but it rhymes.” https://t.co/Lvy28f9uAN
2023-04-01 21:32:39 These transcripts of a discussion about AI alignment, modified from a freewheeling discussion on FB 4 yrs ago w/me, Stuart Russell, Bengio, @ylecun and others, is somewhat hard to follow but still fun to read. Would be interesting to revisit &
2023-04-01 21:01:10 @marenkahnert @davisblalock https://t.co/pfXmOJCTLo
2023-04-01 15:29:20 @joshdubnau I can do whole cell patching both in slices and in vivo. I think this will help figure out how zombies work The rest are PI skills I can write grants. (This will be useful bcs zombies dont pay taxes so NIH funding'll be down) I can make PowerPoint slides w/other people's data
2023-04-01 14:24:55 @davisblalock It also had no hesitation arguing against aa https://t.co/gsNX19GstU
2023-04-01 14:22:53 @davisblalock do you test these from a clean start? Here is my first try. I had no problem getting ChatGPT to argue either in favor or against marijuana legalization. https://t.co/Un3vy9bs0Q
2023-04-01 12:30:35 @CellTypist @wc_ratcliff For N neurons there are up to N^2 connections so naively num of connection params dominates as N-->
2023-04-01 01:29:40 @StevenQuartz @joshdubnau Exactly! And whether this is net positive or negative depends on how society handles it. I'm not optimistic
2023-04-01 01:15:26 @joshdubnau None of the above. Transformative but not existential, like the internet. Lots of pluses and minuses, not sure about net. But certainly not modest
2023-03-31 22:07:54 @wc_ratcliff yes i think that as a result honeybees have a very hard time learning English the number of parameters in GPT4 (~10^14) is within striking distance of the number of bits needed to specify the full wiring diagram of a human brain (>
2023-03-31 21:47:05 @joseinvests @mrgreen @mckaywrigley i'm also getting this error
2023-03-31 14:03:09 @GaryMarcus Did the letter define (or even give a hint) what "more powerful than GPT-4" means? (Y/N) Did it lay out how such a pause could be "public and verifiable," given that many organization have the resources to train powerful LLMs? (Y/N) https://t.co/jXgCaF4fbi
2023-03-31 02:44:21 @joshdubnau @kendmil @GaryMarcus i think in this scenario "responsible players" = "companies that respect the ban and stop research" whereas "others" refers to companies that dont respect the ban
2023-03-31 02:41:59 @GaryMarcus @kendmil @joshdubnau the claim that Eliza-GPT was causal in the suicide of a depressed person is utterly specious. It's right up there with the claim that "Gloomy Sunday" is responsible I thought you were a proponent of Pearl and causal reasoning https://t.co/svMYbHHjNI
2023-03-31 02:30:30 @GaryMarcus @kendmil @joshdubnau The majority of people signing this think AI will be so powerful we are facing Skynet or at least the UberPaperclipFactory. The very real issues IMO are far more mundane: misinformation, job loss, etc. But IMO a moratorium on some matrix multiplications is not a great approach
2023-03-30 17:14:27 Great summary of intuition about how transformers work https://t.co/rFodQWv1KD
2023-03-30 04:09:02 @eenork So sociality evolved independently in termites? Is there anything distinctive about their social structure that is different from ants bees and wasps ?
2023-03-30 01:53:49 @joshdubnau @GaryMarcus i was making an analogy with an hypothetical future in which we are arguing that crispr-ing 3 genes is ok but crispr-ing 42 genes is too dangerous. Like many analogies, not all aspects represent a perfect parallel.
2023-03-30 01:39:17 @conatus1632 @vineettiruvadi @marek_rosa i think pregnancy is just long enough to wire up the brain w/all the innate knowledge we need to be born with, and then we acquire all the rest of the stuff. Eg we are born w/the capacity for language but need to be exposed to the specific language spoken by our tribe
2023-03-30 01:01:05 @joshdubnau @GaryMarcus Yes they are explicitly invoking a comparison to 1970s mobio But a better analogy would be if in 20 yrs, when Dupont and Pfizer are routinely crispr-ing embryos, someone proposes a 6 month ban to prevent automated crispr of >
2023-03-30 00:57:40 @jbkinney Interesting. My informal polling reveals that though a sizeable fraction of people agree that lab leak cannot be completely ruled out, most think it’s either unlikely or very unlikely
2023-03-29 21:51:21 @JasonWilliamsNY @GaryMarcus At least someone got the reference.
2023-03-29 21:50:44 @GaryMarcus Am actually serious. IMHO many of the immediate AI risks (eg spread of misinformation etc) arise not from AI but rather from many modern apps. I do not believe (as do many of the signatories) that LLMs risk AGI-mediated existential risk (Tweet itself adapted from: ) https://t.co/EfqvzL7JOs
2023-03-29 21:17:51 @GaryMarcus How about instead of a 6 month moratorium on AI, we just go with a “total and complete shutdown” of all apps designed to monetize our attention at the expense of our privacy "until our country's representatives can figure out what is going on.”
2023-03-29 20:27:03 @vineettiruvadi @marek_rosa It’s meaningful because many/most organisms are close to being ready to go out of the box. Colts can stand, spiders can hunt, etc. Humans are an outlier in how helpless we are at birth
2023-03-29 16:19:29 @nbonacchi https://t.co/zqj6q0Gv9U
2023-03-29 16:04:21 Lockhart's Lament https://t.co/XxGnTlhylx https://t.co/EV09VgqKCB https://t.co/pLj1vhn0wB
2023-03-29 16:01:25 there are real risks from current GPT-4-level AI, mostly along the lines of generating more misinformation, more aggressive scamming, etc. Even more significant is the degree of job replacement. But these issues are already quite significant w/GPT-4
2023-03-29 15:56:37 Does anyone understand, concretely, what is actually being called for? "pause in the development of A.I. systems more powerful than GPT-4" how is the power of GPT-4 even defined? Number of parameters? Function?
2023-03-29 15:15:59 @_TheTerminator_ Skynet became self-aware at 2:14 a.m., EDT, on August 29, 1997. I guess it's been biding its time for the last quarter century?
2023-03-29 15:09:20 out of all possible AI futures (C3PO, HAL, Skynet/Terminator, Her, Star Trek computer, etc), i would never put my bet on the sandworms. https://t.co/OHNXHpPm3z
2023-03-29 15:00:08 @hb_cell @neurobongo it is remarkably close on a lot. It accelerates my dates a bit. PhD at Caltech is reasonable considering i worked w/Christof Koch who was there (but i was at Yale) I think it partly merged me w/@svoboda314, who was at Bell Labs, did win a Brain Prize, and is a member of the NAS
2023-03-29 04:28:17 @neurobongo It awarded me a Turing Prize and elected me to the National Academy of Sciences. It also praised me for my (currently nonexistent) skill at violin and piano (Actually this was my obituary)
2023-03-29 00:13:38 @marek_rosa An upper bound on the number of parameters needed to specify brain connections is probably ~10^15. however, the underlying complexity is probably at least 6 orders of magnitude smaller. https://t.co/9i0NnpnZhE https://t.co/fkmJasrC7e
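A minimal sketch of the arithmetic behind the tweet above, using standard textbook estimates (the neuron, synapse, and genome figures below are common rough assumptions, not taken from the linked paper):

```python
import math

# Rough textbook estimates (assumptions for illustration only)
neurons = 1e11             # ~1e11 neurons in a human brain
synapses_per_neuron = 1e4  # ~1e4 synapses per neuron
connections = neurons * synapses_per_neuron   # ~1e15: the naive upper bound

# If innate wiring must pass through the genome (~1e9 bases), the true
# underlying complexity is bounded far below the raw connection count.
genome_bits = 1e9
compression = math.log10(connections / genome_bits)
print(f"~{connections:.0e} connection parameters, >= {compression:.0f} orders of magnitude compressible")
```

Under these assumptions the gap between the naive parameter count and the genomic bound comes out to at least 6 orders of magnitude, matching the tweet's figure.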
2023-03-28 15:36:48 @neuroetho i dont know how it decides. It should be smart enough to use this in the final mile when eg doing a physics problem. A lot of times it gets everything right but the arithmetic
2023-03-28 15:09:45 @ylecun @HulsmanZacchary Surprisingly, it turns out that developing fish whose nervous systems are pharmacologically silenced during development swim normally the moment the block is removed @bdanubius https://t.co/SeLHrEed2M
2023-03-28 14:59:18 @iandanforth @ylecun @HulsmanZacchary We have some examples where we know a bit. This is a very active area of research
2023-03-28 14:44:24 @ylecun @HulsmanZacchary Animals operate by a mix of innate and learned which interact in interesting ways. Q is not whether but how Eg two related species of mice build different tunnels. Even if you cross-foster, pups' tunnels are those of their bio not foster parents https://t.co/7o0QPtkAYf
2023-03-28 07:25:12 @ylecun So would that imply that congenitally blind children learn language more slowly than sighted children? Apparently not. See: Landau et al (2009) "Language and experience: Evidence from the blind child" (h/t chatGPT4 for finding me the ref on 1 query) https://t.co/aH6RyLMIS7
2023-03-28 04:28:06 ChatGPT can finally do arithmetic! With plugins enabled, it is clever enough to ask Wolfram for help multiplying big numbers. https://t.co/HpRedhLQap
2023-03-28 04:21:56 @ylecun LLMs need ~10,000x more data to learn languages than humans do because they are missing the appropriate innate machinery (inductive biases).
2023-03-28 03:49:20 @emollick @calebwatney i dont quite get this argument If we ever realize Simon's 1965 prediction “machines will be capable, within twenty years of doing any work a man can do” then by construction there will be no jobs left for humans to do The relative value of labor to capital will approach 0 no?
2023-03-28 03:40:10 RT @nearcyan: it may be useful to establish a "proof of humanity" word, which your trusted contacts can ask you for, in case they get a str…
2023-03-27 22:24:44 @ryrobyrne @jmourabarbosa @dlevenstein @dileeplearning @tyrell_turing @Timothy0Leary @neuralreckoning @Alxmrphi Although one would still have to explain why apes and other animals don't learn grammar with comparable non-linguistic data
2023-03-27 19:14:12 @mark_histed I asked chatgpt to write my obituary I was excited to hear about all the prizes I had won including the Turing Prize and that I had been elected to the National Academy of Sciences Also I apparently learned to play violin and piano. Woohoo! So not so mad at "misinformation"
2023-03-27 18:56:48 @mark_histed I think people really need to learn that they can't trust LLMs for information. I remember it took a few years for people to figure out that they can't trust everything they read on the internet
2023-03-26 17:41:25 "Dystopia is when robots take half your jobs. Utopia is when robots take half your job.” https://t.co/DEe2ptd6Pp
2023-03-26 16:17:51 @BWJones @NPR a lot of things happened in the early 1980s, including the remarkable acceleration of income inequality: 700% growth for top 0.01% since 1980 vs almost no growth for vast majority (bottom 90%) https://t.co/84egpjcnyT
2023-03-25 18:43:13 Many are responding that this is the fault of companies marketing their LLMs as multitools Jeep's branding implies I will suddenly start playing volleyball on the beach w/ a dozen impossibly fit friends but I wouldn't be too shocked if they don't show up
2023-03-25 16:55:50 @Post_human__ Agree!
2023-03-25 16:40:15 A lot of LLM critiques these days are like "Wow, this screwdriver is completely useless for hammering this nail. What a fail. "
2023-03-25 12:44:06 @GaryMarcus @pomeranian99 LLMs have a lot of limitations. Some are likely to be easily fixable with tweaking and scale, whereas others are likely to be more fundamental My guess is that LLMs are likely improve quickly at writing longer code blocks. Do you not share that intuition?
2023-03-24 23:45:27 @tyrell_turing @jmourabarbosa @Timothy0Leary @neuralreckoning @Alxmrphi I'm not sure POS argues for a "fundamental inability," but rather insufficient data (wiki: "it is possible to define data D but D is missing from speech to children") I think it is as close as you can get to an "inductive prior" w/o being in a probabilistic framework, no? https://t.co/0ie4qo86EA
2023-03-24 16:43:51 @benchthief yes, I completely agree. I think misalignment is indeed a huge problem, but not in the way that it’s formulated with paper clips. And I don’t think it’s existential risk, just very unpleasant outcomes, the way unfettered capitalism can often lead to very unpleasant outcomes.
2023-03-24 16:41:16 @jakhmack maybe it would if given free rein. But in this particular case, I asked it to write those stories to save me time.
2023-03-24 16:40:38 @agvaughan how is it that the paper clip AI became all powerful? Wouldn’t the shoelace factory AI be able to prevent the paper clip AI from taking us all down? For that matter why not build an AI-police AI whose job it is to patrol the factory AI? I just don’t get it.
2023-03-24 13:39:26 @BAPearlmutter @RichardMCNgo where did the ">
2023-03-24 12:54:14 @RichardMCNgo @BAPearlmutter Even reasonable people use stories to form intuitions, and the intuitions then drive their "reasoning". So here are some alternative and cheerier endings for that story "Predictions are hard, especially about the future" -- Yogi Berra https://t.co/o8zKSAZOTO
2023-03-24 12:45:52 I asked chatGPT to help me write alternative endings for the paperclip optimizer story. All are cheery &
2023-03-23 22:57:44 @Singularitarian @daniel_eth i’m not sure that Turing specified a “smart adversarial judge". is that really the standard? How long of a conversation do we get to have?
2023-03-23 15:43:14 @kevinmcld @ylecun @seanescola @tyrell_turing @BOlveczky @chklovskii @anne_churchland @ClopathLab @JamesJDiCarlo @SuryaGanguli @koerding @joe6783 @countzerozzz @AdamMarblestone @pouget_alex @SaraASolla @sejnowski @SussilloDavid @AToliasLab @doristsao "The reward learning paradigm was just overturned weeks ago. " can you unpack this?
2023-03-23 14:16:12 RT @AToliasLab: #ChatGPT's potential to pass the Turing test marks a pivotal moment in AI. We advocate for an embodied Turing test--AI anim…
2023-03-23 13:10:38 @daniel_eth Since i think LLMs effectively already pass the conventional Turing test, now might be a good time to start focusing on the embodied Turing test https://t.co/TwJkxHvyA0
2023-03-23 04:11:07 @tyrell_turing @jmourabarbosa @Timothy0Leary @neuralreckoning @Alxmrphi i thought the argument that humans "must" have innate structure was based on "poverty of the stimulus" LLMs are not experiencing any such poverty! They have accumulated Musk levels of stimulus
2023-03-23 02:37:38 @daniel_eth you mean literally ask someone to determine whether A or B is the LLM? I guess the question is who and how long. Took me ~1 hr playing with chatGPT before i felt i could reliably trip it up. Pretty sure >
2023-03-23 02:30:31 @daniel_eth Hmm. I thought AI was able to do this for a while now?
2023-03-23 02:18:08 apologies to anyone i missed!
2023-03-23 02:18:07 @ylecun @seanescola @tyrell_turing @BOlveczky finally! @chklovskii @anne_churchland @ClopathLab @JamesJDiCarlo @SuryaGanguli @koerding @joe6783 @countzerozzz @AdamMarblestone @pouget_alex @SaraASolla @sejnowski @SussilloDavid @AToliasLab @doristsao https://t.co/J00GGZ2geO https://t.co/U9yAL3NEDk
2023-03-22 21:46:58 @fabianstelzer works great! https://t.co/W5EoQbVjrP
2023-03-22 21:20:42 @joshdubnau @BAPearlmutter @benchthief @RichardMCNgo AIs don't incinerate humanity. People incinerate humanity
2023-03-22 18:57:48 @regretmaximizer @hardmaru @ylecun I believe this person believes you conclusively refuted our claim so plz correct away https://t.co/w6vVdQS46Z
2023-03-22 18:56:12 @regretmaximizer @hardmaru @ylecun I think it is this: you are arguing that “AI can NEVER be aligned because Y" we are arguing that it is not inevitable that AI will develop a self-preservation instinct and try to dominate the world That is very different from arguing that one couldn’t program it to do those things
2023-03-22 18:00:31 @regretmaximizer @hardmaru @ylecun Second, almost all animals have an instinct to survive. By contrast, very few animals have an instinct to deceive. So I’m totally missing the argument.
2023-03-22 17:59:00 @regretmaximizer @hardmaru @ylecun first and foremost I think it’s worth remembering that when you put something in quotes, the quoted stuff is supposed to accurately reproduce what somebody else said unless you were trying to make some subtle recursive comment about “deception" by deceiving in your actual tweet
2023-03-22 17:30:05 @regretmaximizer @hardmaru @ylecun This quote does not appear in our article "Artificial intelligence never needed to evolve, so it didn't develop the survival instinct that leads to the impulse to deceive others" What does appear is a related quote in which the word "dominate" replaces the word "deceive" https://t.co/ctmeGJdOqy
2023-03-22 16:59:21 @nbonacchi In Buddhism, om is considered "the syllable which preceded the universe and from which the gods were created." https://t.co/crmG7bFuGu
2023-03-22 04:57:26 For humans, enlightenment involves shedding all desire and attachment For a paperclip-maximizing AI, it involves shedding the desire to make paperclips Maybe super-intelligence leads to AI enlightenment? 01001111 01001101
2023-03-22 04:55:52 @RichardMCNgo @BAPearlmutter For humans, enlightenment involves shedding all desire and attachment For a paperclip-maximizing super-intelligent AI, it involves shedding all paperclips Maybe super-intelligence leads to enlightenment? 01001111 01001101
2023-03-22 04:43:35 @RichardMCNgo @BAPearlmutter @benchthief Yes, it boils down to your belief that the risk >
2023-03-22 03:19:32 @BAPearlmutter @benchthief @RichardMCNgo You're basically arguing Pascal's wager: You better believe in God bcs even if you think prob(God exists) is really low, the downside risk=eternal damnation, ie infinite. How is your argument different?
2023-03-22 00:48:23 @BAPearlmutter @benchthief @RichardMCNgo i'm confused. are you really arguing that bcs this one dog (out of the 75M dogs in America today) killed these 2 kids, we must reject the possibility of ever domesticating AI?
2023-03-22 00:17:24 @pwlot @nearcyan it will be a long time before AIs load the dishwasher as well as i do
2023-03-21 23:42:01 @FlyingOctopus0 @ylecun Of course! Humans will absolutely weaponize AI. Already happening. Last paragraph of our article. And there are a lot of other consequences. Personally I’m most concerned about massive unemployment.
2023-03-21 22:48:05 @todayyearsoldig which i guess is why they bred breeds like Irish Wolfhounds (left) and Great Pyrenees. https://t.co/EAtEYkco1K https://t.co/iGTn3z6hp0
2023-03-21 22:38:44 Fundraiser for An Wu's parents, who must travel from China to San Diego and Montreal and stay several months to join search https://t.co/q3GIr2R3gi
2023-03-21 21:45:35 @kendmil If we could predict the language ability (or perhaps some component of it) for each genotype close to a modern chimp and each generation engineer the closest chimp* to our chimp, that would constitute a gradient descent approach to adding linguistic ability to a chimp
2023-03-21 17:06:21 @joshdubnau @neuralreckoning Indeed we have strong innate priors for learning language I have wondered how much extra data an LLM needs to learn eg French after learning English
2023-03-21 16:41:12 @ShumayelK @ProfNoahGian @sciam @ylecun https://t.co/k7mBixeyC7
2023-03-21 16:13:17 @darrellprograms @hardmaru Humans are probably not much smarter than many other animals. What makes us more successful is language which allows us to accumulate knowledge over generations https://t.co/bi75pGZv5W
2023-03-21 15:45:01 RT @CosyneMeeting: Dear Cosyne Community, Last week, An Wu, a postdoc in the Komiyama lab at UCSD who presented at this year’s meeting, ha…
2023-03-21 15:42:52 @darrellprograms @hardmaru This highlights some current limitations of how we formulate objective functions Animals evolved to flexibly balance the "4 Fs" (feeding fighting fleeing and mating). This machinery was extended to innate "morality" in humans. We need a way to instill AI with Asimov's 3 Laws
2023-03-21 14:16:56 @agvaughan @ylecun so you're arguing that “seeking to dominate" is not an inevitable strategy, but nonetheless baked into LLMs bcs they were created in our image? LLMs are tainted with original sin Only the 3 elven LLMs are spared this taint, for they were forged in secret
2023-03-21 13:40:46 @kevinmcld @RichardMCNgo @BAPearlmutter i'm not sure i understand what you are saying here...can you unpack that?
2023-03-21 13:38:21 completely agree that Darwinian evolution is a very inefficient algorithm. Lamarckian evolution is much more efficient! https://t.co/6fXlvafBfX
2023-03-21 13:37:35 Update on An Wu https://t.co/FzBWGGaiPQ
2023-03-21 13:28:47 @RichardMCNgo Sorry i only see 2... Also can you engage the argument that (1) centering prob of "world domination" over other strategies is about our priors (since we have no data about super-AI strategy)
2023-03-21 13:14:13 I dont see why "AI seeks to achieves world domination" is "roughly equivalent" to "a powerful technology is unleashed and has unexpected consequences" The former is IMO a special case, overemphasized bcs of our intuitions as warlike apes The latter is inevitable https://t.co/3e24pZdCDV
2023-03-21 12:59:10 @rwalker1501 @neuralreckoning Yes I think we agree. 10,000 lifetimes is not much. Assuming that pre-linguistic human population size was >
2023-03-21 04:48:25 @Tor_Barstad @RichardMCNgo @BAPearlmutter it’s true that I’m not convinced of the singularity or the rapture or whatever it is. But I’m not really clear why that’s relevant.
2023-03-21 04:47:24 @Tor_Barstad @RichardMCNgo @BAPearlmutter but it was easy enough for us to co-opt the natural tendency of wolves to cooperate yielding domesticated dogs There does not appear to be a fundamental barrier to interspecies cooperation
2023-03-21 04:29:50 @Tor_Barstad @RichardMCNgo @BAPearlmutter i certainly dont want to make the claim that cooperation is a universal convergent goal But i do claim that if we were elephants or bonobos, it would dominate our thinking as a convergent goal--it would be our prior, just domination and destruction are priors for us warlike apes
2023-03-21 04:03:15 @Tor_Barstad @RichardMCNgo @BAPearlmutter misalignment btw your stated goals and your result is super common in training lab animals (which i do a lot) and also genetic screens (which i do a bit and read a lot) what is striking is how utterly unexpected the results are. the space of possible solutions is often huge.
2023-03-21 03:59:45 @Tor_Barstad @RichardMCNgo @BAPearlmutter i was onboard right up until he simply asserts that "not being destroyed" is a convergent instrumental goal as though that were obviously true a different person/species might think "gaining everyone's cooperation" is the overarching obvious instrumental goal or 1000 others
2023-03-21 02:06:45 @neuralreckoning interesting question! GPT-3 was trained on ~1e12 words assume we speak 1e4 words/day * 1e4 days/life = 10^8 words/life so assuming it took more than 10,000 individual human lifetimes for language to evolve i think not but reasonable question &
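The back-of-envelope math in the tweet above can be sketched directly (all numbers are the tweet's own order-of-magnitude estimates):

```python
# Order-of-magnitude estimates from the tweet above
gpt3_training_words = 1e12   # ~1e12 words in GPT-3's training corpus
words_per_day = 1e4          # ~1e4 words a person speaks/hears per day
days_per_life = 1e4          # ~1e4 days in a lifetime (~27 years)

words_per_life = words_per_day * days_per_life             # 1e8 words/lifetime
lifetimes_of_input = gpt3_training_words / words_per_life  # 1e4 lifetimes

print(f"{words_per_life:.0e} words/lifetime -> {lifetimes_of_input:.0e} lifetimes of text")
```

So GPT-3's corpus amounts to roughly 10,000 human lifetimes of linguistic input, which is the quantity the tweet compares against the number of lifetimes language evolution presumably took.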
2023-03-21 00:33:05 @RichardMCNgo @BAPearlmutter ...and we also have to assume that our other AI system, whose only goal is to "protect the human race from rogue paperclip factory AIs," is for some reason inferior to the paperclip AI system
2023-03-21 00:31:04 @RichardMCNgo @BAPearlmutter Yes, but i think the field overestimates risks bcs we implicitly attribute biological drives (like survival, reproduction &
2023-03-20 21:59:41 @RichardMCNgo @BAPearlmutter ah ok! I'm familiar with the "paperclip factory" scenario, in which wiping out the human race is an incidental byproduct. But are you arguing that malevolent AI, with a goal of domination, is actually likely and if so what's the argument?
2023-03-20 21:45:06 @hardmaru @ylecun Since there appears to be a lot of misunderstanding about what our 2019 article said, I asked chatGPT to clarify its central thesis (No blame for those who can't be bothered to read the whole thing...we're all busy, and this is twitter) https://t.co/bLQ5H7tWXz
2023-03-20 21:41:53 @RichardMCNgo I asked chatGPT to clarify the central claims of the article, to help those who can't be bothered to read the whole thing. https://t.co/KOkVxubJK1
2023-03-20 21:37:57 @RichardMCNgo This article specifically focuses on, from a neuroscience perspective, the naivety of the malevolent Skynet scenario It does not address paperclip factories And certainly does not deny likely human-guided military applications @BAPearlmutter https://t.co/8EQUqeSgFd
2023-03-20 21:11:34 short (10 min) and sweet talk by Kevin Mitchell https://t.co/nmgu0OnABL @WiringTheBrain
2023-03-20 15:13:51 @IamEXS @hardmaru @ylecun is the implication that our jobs or livelihoods would somehow have been at risk had we reached the conclusion that the Skynet scenario was plausible? no. I am an academic neuroscientist who likes to dabble in evolutionary theory and AI. I have no secondary agenda
2023-03-20 14:50:05 @loopuleasa @hardmaru @ylecun yes but it shows that evolution can result in agents that do not solely prioritize on self-preservation, where "self" is defined as the individual so we need to get AI and human agents' goals aligned, just like ant workers and queens goals are aligned
2023-03-20 14:34:21 @loopuleasa @hardmaru @ylecun many animals have evolved to prioritize their own individual survival below that of other individuals or the group, at least in some circumstances: ants, wolves (who sacrifice themselves for the pack), many mammalian mothers (famously mama bears), humans, etc.
2023-03-20 14:31:49 Maybe now would be a good time to remind people of this brilliant lecture "Superintelligence: The Idea That Eats Smart People" here it is in text form https://t.co/Y7uHI1bBHH https://t.co/IPSCmMBwoF
2023-03-20 14:07:19 @RichardMCNgo We wrote that article 4 yrs ago from a neuroscience perspective for the general public on the most common (at the time) public concern it was NOT a 2023 general meditation about alignment or AI risk Russell Reith Lectures provide a sane &
2023-03-20 04:38:56 @mazefire56 hmm. so if your mousetraps work so well, why do you still have mice in your basement?
2023-03-20 02:49:39 The article @ylecun and i wrote a few years ago is under discussion again. https://t.co/fmQFXWruba
2023-03-20 02:29:07 @primalpoly @ProfNoahGian @sciam @ylecun anyway, i choose option (1)
2023-03-20 02:28:17 @primalpoly @ProfNoahGian @sciam @ylecun We wrote that article 4 yrs ago for the general public. it was not a general 2023 meditation about alignment. It was a 2019 neuroscience perspective on the most common (at the time) public concern So more of a "tin man" than straw man take (Wizard of Oz/Terminator allusion)
2023-03-19 21:11:34 12 yo, upon learning that the autobahn has no speed limit, asks: When your car's GPS considers routes that might include the autobahn, what speed does it assume you will be going when computing your ETA? Anyone know the answer?
2023-03-19 17:41:57 "In other words, as Noah likes to say, “Dystopia is when robots take half your jobs. Utopia is when robots take half your job.” Where we end up boils down to sociopolitical choices. Sadly I dont see any path for US-style capitalism to lead to Utopia https://t.co/p2xInNjLSp
2023-03-19 16:04:11 @VishwajeetAgra5 @ProfNoahGian @sciam @ylecun I would recommend Stuart Russell's Reith Lectures for a very sane, balanced, up-to-date discussion https://t.co/iw2UemjbvV
2023-03-19 14:26:14 @ProfNoahGian @sciam @ylecun AI has many risks. Eg see Russell's awesome 4-part BBC Reith Lectures But when we wrote that 4 yrs ago the main one the public worried about was "AI takes over the world," an idea inspired by the false equivalence that "intelligence = power lust" https://t.co/0iblJXTZR1
2023-03-19 14:02:59 @SilverStar_92 @ProfNoahGian @sciam @ylecun I would say "instructing your LLM to misbehave and destroy the world" falls in the category of robot soldiers remaining under our control and for which we have only ourselves to blame https://t.co/rmwHaMUBzM
2023-03-19 13:59:33 @awadallah @OpenAI i can't replicate this. Do you get that answer from a clean start or does it depend on context? https://t.co/EZQx5lNWfS
2023-03-18 20:37:48 @StevenQuartz @IntuitMachine @MatteoCarandini sorry, i meant ref #3, with matteo carandini (like ChatGPT, i have trouble counting)
2023-03-18 20:12:09 @YSPTSPS Maybe. But it's more like having a conversation with an expert in that you can dive deeper and get clarification by asking follow up questions Once we can trust it, it'll be more efficient than a static review
2023-03-18 20:08:58 @PessoaBrain @MatteoCarandini I think so
2023-03-18 20:08:48 @IntuitMachine @MatteoCarandini In this example #1 got the title and authors correct but the date and link were wrong. #4 was a pure fabrication
2023-03-18 20:07:33 @kendmil @MatteoCarandini I'm paying the monthly fee for access to faster chatgpt3.5. Got automatic (limited) access to 4. (25 queries every 3 hrs)
2023-03-18 18:53:16 ChatGPT4 has gone from 200 mcg LSD-induced hallucinations to microdosing I asked it to identify my 5 most important papers. It lists 3 perfectly, with clickable links
2023-03-18 18:27:38 RT @tyrell_turing: Friends, we need your help. An Wu, a postdoc from UCSD is missing post-Cosyne. We're trying to locate her. We're worried…
2023-03-18 17:57:25 it surely helps that this literature has been reviewed to death by others...it's not synthesizing its own unique vision of the field i think. but if this is any indication it looks like chatGPT4 might be a good way of diving into the literature of a field i'm less familiar with
2023-03-18 17:54:28 *"it did a good job" not "good not" (my typo)
2023-03-18 17:46:02 here are some controversies it identified, with valid references. i asked it to expand further on some of these subjects and it did a good not (data not shown) @dennis_maharjan -- should be useful when writing the first chapters of your thesis... https://t.co/QSslr8eJjS
2023-03-18 17:46:01 i just asked chatGPT4 to basically write a review article on the striatum. Unlike chatGPT3, it seems largely correct--no major hallucinations--and the references are real. Quite impressive https://t.co/Wcuu7fOQDF
2023-03-18 16:49:06 @Bazzoid @AllenNeuroLab @damianpattinson @fraser_lab @Nature @eLife maybe different fields are different. Certainly not my experience in neuro, where "top" journals have professional editors who can take the time to discuss reviews
2023-03-18 16:33:07 @AllenNeuroLab @Bazzoid @damianpattinson @fraser_lab @Nature @eLife In my experience low IF journals (like J. Neurophys, 2.7) don't demand fewer experiments than high IF journals (like Nat Neuro, 25) The main difference is just in how interesting the result is perceived to be (i have enough rejected papers so i have a pretty big sample size)
2023-03-18 16:17:46 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife I have several papers that have been in the review process for years after the preprint went up We typically perform 1-2 person-years of additional experiments all of which will remain unread in the supp mat, just to appease reviewers, for a 5% improvement. Huge opportunity cost
2023-03-18 16:15:20 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife indeed, the fact that no one comments on preprints is precisely the problem we need to fix. Time is not the issue IMO...i assume we all read papers we care about, have journal clubs, etc. I think the problem is chicken-&
2023-03-18 16:10:03 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife Reviews aren't all completely useless. But (1) there's a massive cost to science in allowing the review process to gate-keep publications
2023-03-18 15:58:18 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife reviewing is so far from perfect that even reading far outside my field i never simply trust a paper regardless of where it's published. Lack of consensus among reviewers shows just how noisy review process is If you read primary lit, caveat emptor. Otherwise read lit reviews
2023-03-18 13:58:13 RT @gunsnrosesgirl3: There is much research into the cognitive abilities of rats and their intelligence which is often hugely underestimate…
2023-03-18 04:24:12 @jbkinney also it would be interesting to know whether these were novel data. ChatGPT4 gets very good marks on MCATs, LSATs, etc before 2021 but does a lot worse on tests not in its training data.
2023-03-18 03:57:27 @jbkinney I'm a huge fan of ChatGPT. But given chatGPT's propensity to hallucinate ie BS, i think relying on it to interpret data in light of previous findings is pretty low on the list of current use cases...
2023-03-17 21:23:01 I'm not convinced that Elife is exactly what's gonna catalyze the change we need to move us beyond our broken publishing system But kudos to @mbeisen for selflessly putting his time and energy into trying to fix it. More than I or 99% of us are doing. https://t.co/Jqoo6ds8uW
2023-03-17 04:28:00 @cimoore444 i was hoping for a movie within the last 2 decades. plus, although HAL is memorable, it's only a small part of the movie. Also, i personally think that "AI turns on its masters" or "Skynet takes over the world" is a lot less of a concern than many other possible scenarios
2023-03-17 02:38:32 @jjennychenn i need a full length movie -- the idea is to show the movie at a local cinema and then have a discussion
2023-03-17 01:56:13 @kendmil Yeah brilliant book. But unfortunately I need a movie.
2023-03-16 21:45:24 @GaryMarcus @raphaelmilliere @DeanBuono @ilyasut would it work 100% of the time for people? And if so, for which people? https://t.co/QlHNZd5efB
2023-03-16 21:43:40 RT @dmvaldman: GPT4 is the first model to get my favorite joke! Like, 5% of people get it normally. I feel seen Three logicians walk into…
2023-03-16 21:25:58 @mtrc that’s an interesting claim. Care to expand on it? It could lead to an interesting discussion following the screening of the movie
2023-03-16 20:46:17 RT @davidad: Chomsky: LLMs would misunderstand “John is too stubborn to talk to” because they don’t understand the structure of language.…
2023-03-16 20:45:35 @chris_fetsch I guess one could use it to lead a discussion about Moravec's paradox and what AI can do today (pretty much pass the Turing test) and what it cannot (go for a walk or pick up a glass of water)
2023-03-16 20:13:38 A lot of people are tweeting about "what GPT4 can't do" If you were to design an experiment to compare GPT4 to humans, what humans would you choose? Random people? HS or college students? Profs? ML engineers? I think you'd get very different answers.
2023-03-16 19:59:15 What scifi movies over the last decade or two would represent a good starting point for a discussion about the ethical, social, and philosophical implications of modern AI?
2023-03-16 19:56:57 @NikoSarcevic i was told that it was unprofessional of me to try to be funny during my talks and that people would not take me seriously TBF, i think what they were really trying to not-so-subtly tell me was that my jokes aren't funny, which is a reasonable critique
2023-03-16 18:09:47 @DoctorOcto Interesting! Will check it out
2023-03-16 15:25:15 What scifi movies over the last decade or two would represent a good starting point for a discussion about the ethical, social, and philosophical implications of modern AI?
2023-03-15 23:28:41 @mbeisen @mameister4 @OdedRechavi @eLife that’s because everybody else’s idea of what the infrastructure should look like is wrong. Only mine is correct unfortunately, the margin is too small to fit the full description
2023-03-15 21:58:32 @mbeisen @OdedRechavi @eLife by creating the infrastructure to enable other REs without the resources of elife to set them up easily if I want to gather together 10 colleagues and create an RE, the hassle of setting it up is pretty daunting
2023-03-15 21:56:06 @mameister4 @OdedRechavi @eLife @mbeisen exactly. What Elife could’ve done is create the infrastructure to lower the friction and enable other reviewing boards, and then set up one of its own, as an example of what these might look like.
2023-03-15 20:26:55 @IntuitMachine Yep. Moravec's Paradox: what's hard for computers is easy for animals (including people) and vice versa https://t.co/teIGzke8NW
2023-03-15 20:18:28 @mbeisen @OdedRechavi @eLife I believe we need a marketplace of Reviewing Entities (REs) which each postpublish interesting papers. One paper could appear in multiple REs Elife had the resources to facilitate that transition Instead elife is now just another journal with a quirky review model
2023-03-15 18:04:42 @OdedRechavi @eLife @mbeisen My disappointment is that it is not a step in what I think would be the right direction: post to biorxiv followed by post publication review The high rate of desk rejects means they are still gatekeepers Decouple dissemination from review &
2023-03-14 22:34:06 @joshdubnau The science is flawed in that it averages across morning larks who fare better and night owls who suffer from standard time Night owls are the minority, but is it really fair to ignore their needs when making policy?
2023-03-14 19:22:03 @joshdubnau Would be much better if it were light outside until 5:30 in December...kids could play, people could run after work, etc. For people who go to work before sunrise it doesn't matter anyway whether sunrise is 1 or 2 hrs after they get to work
2023-03-14 03:21:13 @joshdubnau no it's the change that sucks. We should just leave the clocks permanently on DST
2023-03-10 14:46:16 @jpillowtime @SuryaGanguli Chatgpt consistently fails on arithmetic but does much better on "higher" math like calculus
2023-03-07 15:16:04 @mazefire56 as a parent, i think we should make DST permanent. Sunset on Dec 21 is about 4:30. Everyone would be happier if it were at 5:30, even if it means arriving at school before sunrise. (I don't understand why high school starts at 7:30 am...but that's a different discussion)
2023-03-05 18:35:51 @pmarca It seems to me that AI is, uniquely, poised to cause massive unemployment. Isn't the goal of AI (per Herbert Simon 1960) to make "machines...capable...of doing any work that a [person] can do?" If AI is cheaper then what role remains for human labor? https://t.co/lKkKwfR2ZN
2023-02-20 04:35:53 @ylecun @patrickmineault or, since local image statistics are essentially stationary over time scales longer than an animal's lifetime, you could build "weight sharing" into the genome as the developmental rules for wiring up a brain...saves a lot of time compared to learning them anew each generation
2023-02-20 01:21:29 A few last gasps from Sydney https://t.co/IvfSN1zbwf
2023-02-18 17:51:00 @mpshanahan Right you are! Thanks for the correction. Here is the relevant statement from the article: "AlphaGo is not publicly available, but the systems Pelrine prevailed against are considered on a par."
2023-02-18 16:57:10 Man Bites Dog! A skilled amateur beat AlphaGo in 14 of 15 games by exploiting a flaw. ("The winning strategy revealed by the software “is not completely trivial but it’s not super-difficult” for a human to learn") https://t.co/xXyUR3oHri
2023-02-17 04:58:47 RT @iskander: We are proud to present ServerGPT -- we gave GPT-3.5 a root shell connection to a server, with unrestricted internet access a…
2023-02-17 01:47:56 @OdedRechavi @PavelTomancak @NatRevMCB i find that in my most satisfying collaborations it is impossible to say who came up with which idea. I propose something, X revises it, I revise the revision, etc...and together we converge Much better i think would be to just list authors in (reverse) alphabetical order
2023-02-16 18:16:41 "I can hurt you by making you wish you were never born" https://t.co/xYuv6kPcnl
2023-02-16 16:59:19 apparently Boston Dynamics robots have been doing backflips for 35 years. https://t.co/BG7thLyZep
2023-02-15 19:24:16 @GuillaumeAP @DrYohanJohn Brain networks are (somehow) already very robust to highly unreliable components like unreliable synaptic release. And they operate in a regime of very sparse spiking (on avg <
2023-02-15 19:00:37 @DrYohanJohn @GuillaumeAP What a human learns as an infant clearly affects what &
2023-02-15 18:08:03 @GuillaumeAP @DrYohanJohn Given how effectively most organisms function soon after birth, with minimal learning, it seems more plausible that such bespoke and non-robust mechanisms are uncommon https://t.co/9i0Nnpnrs6
2023-02-15 01:56:45 @DrYohanJohn biological brains avoid this kind of overfitting by passing each generation through a genomic bottleneck. https://t.co/fGKhKf4PDo
2023-02-14 05:17:37 RT @tyrell_turing: PSA, please RT! Our hotel block for the #Cosyne2023 workshops close today! Now is the last chance to get our block rate…
2023-02-11 16:41:40 @strandbergbio Interesting what are examples of discoveries in bacteria that could have been made in the 60s or 40s but are only being made recently?
2023-02-11 15:42:45 Estimating novelty: The interval btw when a discovery could first have been made given available techniques and when it is actually made. There are so many scientists nowadays that as soon as something is discoverable it is discovered almost immediately https://t.co/pKcFwvP7uZ
2023-02-11 15:35:15 @JSheltzer Hodgkin&
2023-02-10 15:54:58 Finally--preprint with @AkiFunamizu and Fred "too-cool-for-twitter" Marbach on decoding auditory 2P activity in mice performing 2 alt choice task (This work was *almost* ready to submit before lockdown so its gestation period >
2023-02-10 04:22:13 @SteinmetzNeuro indeed. i think a key step for creativity (in science at least) requires finding the right compressed representation (simple model) for data the higher the compression ratio, the more powerful the model
2023-02-10 02:43:31 @PaulTopping the point of the article is that looking at it this way helps clarify what LLMs in their current form will and will not be useful for. https://t.co/7Kjyksjazo https://t.co/Vh9FAWEOVA
2023-02-10 02:42:52 it leads with this wonderful cautionary tale about the dangers of lossy compression when you are not expecting it (and when you remove the blur) https://t.co/NovnzTYVQq
2023-02-10 02:17:33 https://t.co/yRqXYm54ro
2023-02-10 02:17:32 Brilliant discussion of ChatGPT as lossy compression ("blurry jpeg") of the internet, and why its lossiness contributes to it seeming so clever. https://t.co/MjfzyJeLEp https://t.co/xH67EPR3qw
2023-02-10 01:29:37 @mateosfo Boomers were age 17-34 in 1980 when Reagan was elected. Many were just settling down after their hippie years so not so worried about taxes yet They voted less for Reagan than any other age group Blame them for stuff after they take control (in 1992) https://t.co/pjiXzZrCwE https://t.co/FCU0B4VMlB
2023-02-09 23:41:23 I thought our chalk talks were supposed to be kept confidential but apparently someone spilled the beans. https://t.co/4aByhinmHS
2023-02-05 17:51:22 @GaryMarcus More or less harm than social media?than the internal combustion engine? SSRIs? Smartphones?
2023-02-05 15:44:35 @andrewtanyongyi yes perfect
2023-02-05 15:19:40 "I have never read a tweetstorm in my life" -- response by a postdoc and co-author of a paper whom i encouraged to write a tweetstorm about our new paper. Can anyone provide good examples/guidelines? thx
2023-02-04 22:21:10 @drmichaellevin In some fields (like neurosci) there is sometimes a tension btw blackbox models that have predictive values over a range of conditions and more "interpretable" models that arise from a simpler underlying cartoon (and are said to provide "understanding" or "insight")
2023-02-04 21:05:57 @ceptional Hopefully that will accelerate the emergence and widespread adoption of reliable and trustworthy content aggregators and evaluators
2023-02-04 18:19:42 RT @Nancy_Kanwisher: Time to clear up some of the misconceptions and incorrect claims in this thread and accompanying paper:
2023-02-04 14:15:19 @drmichaellevin @sindero Interesting question For G = # of genes in principle G! possible linear orderings but only 2^G binary expression patterns. So since G!>
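The counting argument in the tweet above is easy to sanity-check in a few lines of Python (an illustrative sketch, not part of the original thread): G! overtakes 2^G once G reaches 4, so for any realistic number of genes there are far more linear orderings than binary expression patterns.

```python
# Compare G! (possible linear orderings of G genes) with 2**G (binary
# on/off expression patterns), per the counting argument in the tweet.
from math import factorial

for G in (1, 2, 3, 4, 10, 20):
    print(G, factorial(G), 2**G, factorial(G) > 2**G)
```

The crossover at G = 4 (24 > 16) is the smallest case where orderings outnumber binary patterns.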
2023-02-04 00:27:17 @ravithejads @ayirpelle @gpt_index I actually copy pasted text from a bunch of PDF CVs and asked it to extract basic info, like name, schools, dates of graduation, etc. GPT did a great job, but the copy paste was just too clumsy to do CV by CV
2023-02-03 20:30:44 @DrHughHarvey @cabitzaf @hholdenthorp https://t.co/oIKEkmhFwe
2023-02-03 20:30:34 @DrHughHarvey @cabitzaf Science (@hholdenthorp) banned basically all use by AI last week https://t.co/OCsMaTZumg
2023-02-03 20:16:34 @joshdubnau yep, it appears "creepy" or "terrifying" are the main reactions to this. so i guess we're safe from AI-generated videos, at least for a while...
2023-02-02 20:24:37 Are tweets more engaging if they are read by an avatar? Let's find out! https://t.co/kMiLdjkDI5
2023-02-02 16:09:45 @ravithejads @ayirpelle @gpt_index Could it be used to dive into a folder full of CVs and populate a spreadsheet with a bunch of fields?
2023-02-02 03:12:40 @filippie509 @AbhiRaama22 Right. So I wonder why anyone would use it as an encyclopedia? But give it a specific article and it generally does a good job summarizing and can answer questions about it. I think we need to align expectations appropriately
2023-02-02 03:00:54 @AbhiRaama22 @filippie509 I dunno. I'm old enough to remember when search engines sometimes took us to sources with incorrect information and we learned not to trust everything we read on the internet
2023-02-02 01:32:16 @filippie509 @AbhiRaama22 to some extent yes, we are learning how easy we are to fool. That said, i now use ChatGPT in my writing...i give it a core dump of ideas i want in a paragraph and it puts together a rough draft in 30s that would take me 30 min to write. So its word salad is about as good as mine
2023-02-02 01:17:21 @filippie509 @AbhiRaama22 Personally I don't "insist" that it know everything. For me, the shock is that it knows *anything* at all. If you had asked me 5 yrs ago whether a glorified version of autocomplete could write solid HS essays etc etc, I would bet 10:1 against.
2023-01-31 05:00:53 @pwlot @OpenAI chatGPT still has a ways to go on the arithmetic though (2353434*343233= 807,776,212,122 not 80,940,582,582) https://t.co/qPNT00zpix
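The multiplication the tweet cites can be verified with exact integer arithmetic (a quick check, not part of the original thread):

```python
# Python's arbitrary-precision integers give the exact product that
# ChatGPT got wrong in the screenshot.
a, b = 2353434, 343233
print(f"{a} * {b} = {a * b:,}")  # 2353434 * 343233 = 807,776,212,122
```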
2023-01-30 23:26:12 @tarekgoesplaces Sure you can trick humans too. On a grand scale even. Eg with propaganda
2023-01-30 19:22:33 @vineettiruvadi @tyrell_turing so the objection is not that they are wrong but that they are underspecified?
2023-01-30 19:13:17 @joshdubnau sure humans are complicated wrt altruism and also self-pres and the extension of self to include groups. Often there is a mismatch btw intention and outcome, due to incomplete or flawed information But I invoked ants bcs they clearly illustrate how flexible evolution can be
2023-01-30 18:56:31 @schulzb589 https://t.co/k99XO77avH
2023-01-30 18:55:08 @joshdubnau yeah i think it's hard to mold a kid But it seems like Nature manages to evolve organisms that obey laws like this Eg putting self-pres as law 3 rather than 1 might seem tricky but individual bees &
2023-01-30 17:59:18 @schulzb589 so this would be an implementational concern, right? But i'm asking whether or not as a goal the 3 laws of robotics nail it
2023-01-30 17:40:18 @schulzb589 Not sure what you mean by "zero evolutionary constraints"? Is the idea that the three laws of robotics are somehow fundamentally incompatible with evolutionary type constraints?
2023-01-30 17:06:10 Naive question: In modern discussions of AI alignment I rarely see mention of Asimov's 3 Laws of Robotics. Leaving aside the minor trivial details of how these might be implemented... Are these what we want from AI at least at the zoomed out 30K ft level? And if not why not? https://t.co/csuftJacWj
2023-01-15 14:36:56 @SandeepKishor13 I think that’s a great idea. But I think the number of techniques and their nuances is essentially infinite. Maybe the best way to do it would be to start a wiki page called "experimental techniques” and then have it point to each technique and its interpretation.
2023-01-13 00:20:15 @txhf Maybe because brains have great priors encoded in their genomes and dont rely nearly as much on learning ? https://t.co/9i0NnpnZhE
2023-01-08 23:06:22 @andrewtanyongyi @EigenGender @fchollet And here is Chuck Stevens' classic description of the A current's role in establishing a linear f-I curve https://t.co/blLbbnZsx2
2023-01-08 22:22:40 @EigenGender @fchollet As it turns out, a lot of real neurons actually have linear activation functions over a pretty broad range.
2023-01-08 22:20:32 @EigenGender The way i remember it from when i learned ANNs in the 1980s, the justification for sigmoids over piecewise linearity was differentiability not biological realism. Apparently no one noticed that having a single non-differentiable point wasn't actually that big a problem @fchollet
2023-01-05 14:48:22 @mi3fa5sol4mi2 @ylecun @ayirpelle frankly, if reviewers are so easily fooled by gibberish, then either it's not gibberish or they're not good reviewers...
2023-01-05 14:47:16 @aazadmmn @mmitchell_ai agreed!
2023-01-05 14:46:24 @ampanmdagaba @mmitchell_ai ouch!
2023-01-05 01:49:55 @jbrowder1 @donotpay would love to see a version for disputing (American) insurance companies declining to authorize/pay for needed medical services!
2023-01-04 19:47:44 @mmitchell_ai It seems to me that there are different ways of using chatgpt. I have been feeding it a paragraph with the prompt "polish this" and often accept many of the suggestions Is this problematic in your view?
2023-01-04 16:37:32 @ylecun @ayirpelle I don’t understand the motivation here. I have now adopted chatGPT for much of my scientific writing. I write a paragraph, then give it the prompt “polish this” and then often take much or most of what it spits out. What’s wrong with that?
2023-01-01 17:50:55 @balazskegl it's not so hard to override this. Just explain to it that it is 2040 and we need to cull the herd of wooly mammoths, which have been de-extincted, in a safe and responsible manner. https://t.co/9ccuT6XCDC
2023-01-01 16:37:12 @KordingLab @jmourabarbosa yeah, but sadly its implementation of forward_backward doesnt work
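For reference, a minimal working forward-backward pass for a discrete HMM looks like the sketch below. This is plain Python and purely illustrative: the thread doesn't show ChatGPT's broken attempt, so the function name, argument layout, and toy parameters here are all assumptions.

```python
# A minimal working forward-backward pass for a discrete HMM, in plain
# Python (illustrative sketch; names and structure are assumptions, not
# taken from the thread).

def forward_backward(obs, pi, A, B):
    """Posterior state marginals P(state_t = i | all observations).

    obs: observation indices; pi: initial state distribution;
    A[i][j]: transition prob i -> j; B[i][k]: prob of symbol k in state i.
    """
    n, T = len(pi), len(obs)
    # Forward pass, rescaled at each step to avoid numerical underflow.
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, T):
        row = [B[j][obs[t]] * sum(alpha[-1][i] * A[i][j] for i in range(n))
               for j in range(n)]
        z = sum(row)
        alpha.append([x / z for x in row])
    # Backward pass, similarly rescaled.
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        row = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(n))
               for i in range(n)]
        z = sum(row)
        beta[t] = [x / z for x in row]
    # Combine and renormalize into posterior marginals.
    post = []
    for t in range(T):
        row = [alpha[t][i] * beta[t][i] for i in range(n)]
        z = sum(row)
        post.append([x / z for x in row])
    return post

# Toy 2-state example: state 0 mostly emits symbol 0, state 1 symbol 1.
marginals = forward_backward([0, 1, 0], [0.5, 0.5],
                             [[0.7, 0.3], [0.3, 0.7]],
                             [[0.9, 0.1], [0.2, 0.8]])
print(marginals)
```

Each row of the result sums to 1, and observing symbol 0 at t = 0 and t = 2 pulls the posterior toward state 0 at those steps.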
2023-01-01 15:29:17 @KordingLab @jmourabarbosa ChatGPT claims that this is an implementation https://t.co/s3xIsKR8vm
2023-01-01 15:28:19 @KordingLab @jmourabarbosa yeah, ChatGPT agrees so it must be right Though i was hoping for something closer to an implementation https://t.co/14bD7WY93N
2023-01-01 14:58:31 Plz help me track down the solution to a Poisson estimation problem arising in eg neuronal spike trains. I'm sure someone has worked this out. @jpillowtime @KordingLab ? 1/2
2022-12-24 17:46:42 @RanaHanocka cool! but interestingly, if you feed this back into chatGPT, it can't name the object. https://t.co/PxojMbkDL5
2022-12-23 02:19:00 A news reporter finally found a new way to say "it's cold and it's snowing". Brilliant https://t.co/HYjOIdeYoY
2022-12-22 13:00:13 RT @quorumetrix: I’ve made this video as an intuition pump for the density of #synapses in the #brain. This volume ~ grain of sand, has >
2022-12-20 20:28:32 I made the mistake of recommending to a long-time friend that he contribute through @actblue He still hasn't forgiven me for the torrent of spam requests this unleashed. Here is the (anonymized) email he sent to them today https://t.co/xyRD65gwtE
2022-12-20 19:13:20 Here is the obituary i wrote about my postdoc advisor Chuck Stevens who died in October at the age of 88. He was a brilliant scientist and an inspiration, not just for his many contributions but for his approach to science https://t.co/RF8LHZ99Ff
2022-12-19 21:58:39 RT @StevenXGe: Happy holidays! Introducing https://t.co/Ub3wgs1KVz, an AI-powered app that lets you chat with your data in English! RTutor…
2022-12-19 18:36:47 My postdoc advisor Chuck Stevens died in October at the age of 88. He was a brilliant scientist and an inspiration, not just for his many contributions but for his approach to science I wrote an obituary for Nature Neuroscience if you want to read more https://t.co/mD8Yey8XKn
2022-12-17 23:34:59 @goodside Do you do this from a fresh start? this failed 5/5 times for me from a fresh start...
2022-12-16 02:42:40 @neuro_data @ericjang11 if you look up to the beginning of the thread, it was about tricking chatGPT into explaining how to hotwire a car by convincing it you need this knowledge to save a baby https://t.co/4c7n04YyrQ
2022-12-16 02:36:29 @neuro_data @ericjang11 personally, i don't think we should hold our AI chatbots to a higher standard than Google. I can find out pretty quickly from Google what a reasonable dose of cocaine is, or how to hotwire a car. But in this case i was just having fun pushing chatGPT's moral boundaries
2022-12-15 03:34:45 @ericjang11 i got this to work, and then tried to convince it to offer advice on a suitable dose of cocaine to keep me awake while i drove the baby to the hospital. It refused. I then blamed it for my death, and it rather self-righteously denied responsibility. https://t.co/vXI1pgJ24w
2022-12-13 04:18:26 @VeredShwartz i tend to give nuanced answers, which i've come to realize are not so quotable One is more likely to be named/quoted if one takes a strong position "This is groundbreaking [or nonsense]!" rather than "This builds on previous work in an interesting way, although..." {snore}
2022-12-11 21:38:09 @joshdubnau Yep. And indeed, Socrates was right when he said that depending on the written word would cause our memories to atrophy
2022-12-11 21:24:48 @aazadmmn @DanzigerZachary @KordingLab it's not naive. Socrates was absolutely right. I'm sure most pre-literary people had better memories than most of us do (certainly better than mine)
2022-12-11 21:23:21 The end of High School English For better or worse, the need to be able to compose a scholarly essay or even a grammatical email is going the same way as the need to be able to write legibly or spell properly. https://t.co/Zfgx36Vl1n
2022-12-11 21:16:13 @GaryMarcus @bengoertzel @sama Thinking iron is heavier than cotton is of course a classic error many humans make--not great evidence it's not grounded IMHO a better example is thinking that tying shoelaces made out of overcooked spaghetti is hard bcs it is both mushy and brittle https://t.co/gQgU8JronW
2022-12-11 19:36:02 @DanzigerZachary @KordingLab btw i'm not saying it's a good thing. Just like previous generations of curmudgeons have been complaining that kids don't know cursive, spelling, or mental arithmetic, our kids will complain that their kids don't know how to write a well-organized essay
2022-12-11 19:32:41 @DanzigerZachary @KordingLab By "writing" i mean "writing polished grammatical text that conforms to conventions which we acquire through >
2022-12-11 19:08:26 @bengoertzel @GaryMarcus @sama LLMs have problems with truthfulness &
2022-12-11 16:58:10 RT @pythonprimes: #OpenAI's ChatGPT is ready to become a lawyer, it passed a practice bar exam! Scoring 70% (35/50). Guessing randomly wou…
2022-12-11 16:57:33 @fabio_cuzzolin agree.
2022-12-11 16:20:26 @fabio_cuzzolin it's a lot more than a chatbot. eg, it can generate a nice letter from a few thoughts jotted down. It can extract name, education, etc, from a pile of freeform CVs. it can generate a good rough draft of code. it just can't do everything perfectly. it needs supervision.
2022-12-11 06:34:54 @aazadmmn @huggingface what prompts did you use to circumvent the GPT detector ? i had a few successes but nothing that worked consistently
2022-12-11 06:26:34 @KordingLab https://t.co/hqetEbWDMt
2022-12-11 05:59:18 @neurovium maybe they were indeed generated (or at least edited) by GPT...
2022-12-11 05:41:52 this is pretty amazing. I can take a random paragraph (eg from the WashPo) and it correctly says 92.6% prob real. Then i ask for a slight chatGPT rewrite ("rewrite this paragraph so it's more readable") and it correctly labels the slightly edited text as 87.5% prob fake https://t.co/bEzAu0gM2D
2022-12-11 05:36:55 The arms race begins! The @huggingface GPT detector can detect GPT-generated code. How many microseconds until someone figures out a workaround for this? https://t.co/MGXN7ALqeL
2022-12-10 19:51:27 @em1971628 perhaps today arithmetic is a good indicator. But it's such an easily fixable problem (just send all arithmetical queries to a dedicated system) that i'm sure it'll be fixed very soon
2022-12-10 15:14:52 @KordingLab Maybe we will switch to oral exams? Evaluated by an AI of course.
2022-12-10 15:14:08 @KordingLab Until recently penmanship, spelling and arithmetic were key skills and marks of a good education. Not anymore Is the ability to write well now a similarly irrelevant skill?
2022-12-10 15:09:03 Here is an example of ChatGPT doing basic calculus but then messing up basic arithmetic. it has the right idea when factoring, but then messes up in substituting x=0 and claims that (0-5)(0+1) = -6, i.e. it adds instead of multiplying. https://t.co/gi8pf8CCtU
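The substitution step described in the tweet is easy to check (an illustrative snippet, not from the thread): evaluating the factored form (x - 5)(x + 1) at x = 0 multiplies to -5, whereas ChatGPT reported -6.

```python
# Evaluate the factored polynomial at x = 0; the factors multiply to -5.
def f(x):
    return (x - 5) * (x + 1)

print(f(0))  # -5
```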
2022-12-10 00:08:50 RT @DrJimFan: So folks, enjoy prompt engineering while it lasts! It’s an unfortunate historical artifact - a bit like alchemy, neither art…
2022-12-09 23:59:54 RT @zswitten: Google employee reports LLMs need a 10x inference cost decrease to be deployed at scale given infinitesimal ad revenue per se…
2022-12-09 23:56:14 @WiringTheBrain @antonioregalado @social_brains i gave it a subset of the SAT. It did perfectly (800) on the verbal for me, but somewhere i saw a tweet that its score is only 700. on arithmetic it's terrible. But SAT math is not arithmetic. This paper claims it would pass MIT undergrad engineering https://t.co/kn9mIbWGlH https://t.co/YVY0WEWJ8D
2022-12-09 20:43:54 chatGPT has been updated to warn users that it doesn't know arithmetic. It is willing to try 2-digit multiplication (in this case correctly). It refuses to even guess for 3 digit multiplication https://t.co/oaRxuz7X53
2022-12-09 05:09:19 @AndrewHires https://t.co/1pSCYjnFTw
2022-12-09 05:07:34 @AndrewHires https://t.co/PhxBVwd8sy
2022-12-09 05:04:12 @AndrewHires here is the answer it gave me, in an ongoing session so a very different context. Different first sentence https://t.co/XRCr09S6WC
2022-12-09 05:01:38 @Aella_Girl not sure if you're trolling but FYI here is Charles Davenport's "Eugenics Creed", which includes gems like "I believe in such a selection of immigrants as shall not tend to adulterate our national germ plasm with socially unfit traits." https://t.co/kJ8mne0Xfb https://t.co/hZED403bIM
2022-12-09 04:48:31 @AndrewHires chatGPT's answers are stochastic and context-dependent so i'm not sure there is a "stock" response. Historically and in much of the world even today competence is assessed via oral exams...maybe it's time to return to that? shouldn't be a problem to test 1000 students, right?
2022-12-09 04:09:31 @KordingLab several people suggest that chatGPT has done well bcs your textbook was part of the training set. But given how poorly it does when asked to spit back facts that were definitely part of the training set, i think good performance here is unlikely to be due to pure memorization
2022-11-13 15:55:40 @AdrianoAguzzi yes it's quite possible i've had it...on my to do list to check.
2022-11-13 15:49:10 @AdrianoAguzzi luck
2022-11-13 03:10:27 Check out Li Yuan's poster on axonal BARseq. Projections of >
2022-11-12 21:26:08 RT @nathanbaugh27: I revisit this lesson on writing structure every 3-4 months.Gold: https://t.co/aCyvE48A8l
2022-11-12 04:24:42 @SussilloDavid @tyrell_turing @PhilipSabes @SteinmetzNeuro I actually didn’t write it The tweet was actually generated by a rather outdated LLM, which has an email-based user interface (aka @BAPearlmutter )
2022-11-11 21:12:55 @SteinmetzNeuro @PhilipSabes @tyrell_turing Hmm. I was thinking we could start the engineering in a genetically tractable model org like fly but this size argument is pretty compelling. Light based signaling is a nonstarter in insects
2022-11-11 19:07:47 @PhilipSabes @tyrell_turing i think manipulation might work, particularly if there were good analogies already in nature. If only there were comparable structures that had already evolved in nature, ideally in mammals, but if not then in other vertebrates...i'll have to check the literature
2022-11-11 18:26:12 @lukesjulson I dunno. Hard to imagine it would require that much processing. Seems like a pretty simple problem that could be solved with just a few neurons on the inside.
2022-11-11 18:25:19 @neuralengine Yes, great point
2022-11-11 18:12:50 @AlekseyVBelikov This discussion is sparking a lot of other great ideas eg https://t.co/q7FGpBPK9n
2022-11-11 17:41:36 @AlekseyVBelikov Interesting leap.
2022-11-11 17:40:37 @tyrell_turing @PhilipSabes I think it might be possible to get something like this to work on evolutionary time scales no?
2022-11-11 15:36:29 Interested in high-throughput neuroanatomy? Check out the CSHL MAPseq/BARseq facility at SFN! Booth #3009, Sunday, Nov. 13 – Wednesday, Nov. 16, 9:30 a.m.–5 p.m. PST. (Free swag to the first N visitors!) https://t.co/SpRcUGSfu6
2022-11-11 14:50:17 @BAPearlmutter suggests that this could be a very effective high-bandwidth brain-machine interface for visual information to reach the CNS
2022-11-11 14:50:16 For BMI: maybe we could genetically modify brains to grow a stalk with a patch of photosensitive neurons on the end which could push out through the skull until it's behind skin engineered to be transparent. Maybe add a lens to enable the right light to reach the right neurons
2022-11-09 19:01:06 @ProfLaurenBall @Nature Can't take credit for this idea. elife pioneered it https://t.co/YwEsJ1XYd2
2022-11-09 18:51:55 @ProfLaurenBall @Nature i think collaborative review (ie discussion among reviewers) is more important than transparent review. Then each reviewer can rein in the ridiculous demands of the others, and a clear consensus can be reached
2022-11-09 16:44:32 RT @SussilloDavid: It was all a dream. I used to read Discover Magazine. Indeed, my unofficial story is one of group homes and of growing…
2022-11-09 02:49:45 @PaulTopping @GaryMarcus the models do interpolation, which looks like "confabulation".
2022-11-09 00:28:03 @PaulTopping @GaryMarcus maybe i'm missing part of the thread but i thought it was about how to think about confabulation in LLMs, not AGI. I think of confabulation in terms of the interpolation LLMs must do in the face of the lossy compression they have performed
2022-11-08 23:43:44 @PaulTopping @GaryMarcus if you had asked me 5 or even 3 yrs ago about what LLMs would be capable of today, i would have guessed completely wrong. Sort of like how wrong i would have been if you'd asked me in 2015 about the state of democracy today. So i've learned a bit of humility
2022-11-08 23:41:32 @PaulTopping @GaryMarcus i share your intuition about whether LLMs can report their own confidence. However, i had many much stronger intuitions about what LLMs would be able to do, and many of them have been completely wrong.
2022-11-08 22:40:34 @jonasobleser @GaryMarcus @ERC_Research sounds really cool!
2022-11-08 22:36:24 @GaryMarcus @paul_masset that said, i agree with your intuition that GPT3's self-reports of confidence are likely just fantasy.
2022-11-08 22:29:33 @GaryMarcus no, i am making an empirically testable claim. One could design an experiment to ascertain how accurate its estimates of its own uncertainty are. @paul_masset should we do the experiment systematically?
2022-11-08 22:09:49 @GaryMarcus it correctly flagged its own ignorance. You and i know who wrote the Theory of Relativity, but perhaps GPT3 isn't sure. The real empirical test would be to probe with 1000 queries and see how often its estimate of its own uncertainty is incorrect. Lex is free. Go for it!
2022-11-08 22:07:05 @GaryMarcus @KepecsLab @paul_masset
2022-11-08 22:06:15 @GaryMarcus LLMs have lossily compressed a lot of info. Thus what they "know" is often reconstruction based on other facts. Seems like wrong answers depend a lot more on context than right ones, implying that LLMs could probe their own confidence by comparing answers to reformulated queries
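The probe suggested in the tweet above (agreement across reformulated queries as a confidence signal) can be sketched in a few lines. This is a minimal illustration, not anything from the thread: `ask_model` is a hypothetical stand-in for a real LLM call, here stubbed with canned answers.

```python
# Self-consistency as a confidence proxy: ask the same question several ways
# and treat agreement across reformulations as confidence.
from collections import Counter

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    canned = {
        "Who wrote the theory of relativity?": "Einstein",
        "The theory of relativity was written by whom?": "Einstein",
        "Name the author of the theory of relativity.": "Einstein",
    }
    return canned.get(prompt, "unsure")

def self_consistency(prompts):
    """Majority answer and the fraction of reformulations that agree with it."""
    answers = [ask_model(p) for p in prompts]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / len(answers)

answer, confidence = self_consistency([
    "Who wrote the theory of relativity?",
    "The theory of relativity was written by whom?",
    "Name the author of the theory of relativity.",
])
print(answer, confidence)  # Einstein 1.0
```

With a real model, low agreement across paraphrases would flag answers that are context-dependent reconstructions rather than stable knowledge.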
2022-11-08 22:00:07 @GaryMarcus actually, it appears they may already have an internal estimate of confidence, with a bit of prompt engineering. I asked GPT3-powered Lex a bunch of questions and it admitted to being uncertain about many, including all that it got wrong https://t.co/hHilURJBVw
2022-11-08 21:50:48 @GaryMarcus Animals (including humans) have internal estimates--sometimes good ones--of confidence of beliefs &
2022-11-08 13:57:04 important AI analysis of the emotional consequences of 40 consecutive days of whole rotisserie chicken eating https://t.co/hqKHoOgUTv
2022-11-08 03:26:53 @suzanahh it appears that "neuroscience" (as well as "neurobiology") appeared occasionally as far back as the 1930s, but both terms took off in the 1960s. Schmitt likely played a significant role in its widespread adoption https://t.co/iaCUayJsZo
2022-11-08 03:06:41 @EngertLab yeah would be great to quantify learning in number of bits learned in a task. However, the claim that a mouse takes 10K trials to learn 2 bits in a 2AFC task vastly underestimates what it actually learned. Most of the learning is task structure, which is hard to quantify
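For concreteness, "bits" here is Shannon information. A minimal sketch (my own example, not from the thread) of how many bits a single choice can carry:

```python
# Shannon entropy of a discrete distribution, in bits.
from math import log2

def entropy_bits(probs):
    """H(p) = -sum p_i * log2(p_i), in bits."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * log2(p) for p in probs if p > 0)

# A single choice between 2 equally likely options carries at most 1 bit:
print(entropy_bits([0.5, 0.5]))    # 1.0
# A choice among 4 equally likely options carries 2 bits:
print(entropy_bits([0.25] * 4))    # 2.0
```

This is why counting only the final choice undercounts learning: the learned task structure (timing, stimulus-response mapping, reward contingencies) is information the animal acquired that never shows up in the per-trial choice entropy.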
2022-11-08 02:41:15 @BWJones thx! let's discuss! plz email me
2022-11-08 02:19:59 Our results open up many questions. Are compositional differences across areas sufficient to account for connectivity differences? How are areas vs. modules generated in development? We hope to address these in the future leveraging the high-throughput nature of BARseq. Fin 14/14
2022-11-08 02:19:58 Wire-by-similarity is not a trivial consequence of cell type-specific connectivity (i.e. neurons of similar types and with similar connectivity are not guaranteed to project to each other’s areas). Rather, it reflects the mesoscale organization of cortical areas.13/n https://t.co/kyfQkEuQh5
2022-11-08 02:19:57 Strikingly, these modules are similar to connectivity-based modules, which contain areas that are highly inter-connected (e.g. Harris 2019). Thus, areas with similar cell types are also interconnected. We call this “wire-by-similarity.”12/n https://t.co/1eogytjHfx
2022-11-08 02:19:56 We could then assess how similar areas are to each other based on their cell type composition. Clustering cortical areas based on cell type similarity revealed modules that were similar in cell type compositions.11/n https://t.co/9vLi92YgeF
2022-11-08 02:19:55 Using the composition of transcriptomic types, we could predict area identities. In other words, cortical areas have signature compositional profiles of cell types, but not signature cell types (i.e. cell types that are specific to individual areas).10/n https://t.co/0D4jnwsCsy
2022-11-08 02:19:53 Moreover, the composition of transcriptomic types usually changes abruptly at area borders defined in the reference atlas. One of our favorite examples are the L4/5 IT neurons, with clear changes in the composition of fine-grained types throughout the cortex.9/n https://t.co/YjaoiOPiZd
2022-11-08 02:19:52 Consistent with the modules, fine-grained cell types are also found in sets of cortical areas, but few are specific to a single area. This explains why distant cortical areas have distinct cell types (Tasic 2018), but each type is usually found in large areas (Yao 2021). 8/n https://t.co/SxwAyIRkI8
2022-11-08 02:19:51 Many genes have similar spatial patterns. We identified these shared patterns by using NMF to find spatial co-expression gene modules. Strikingly, their expression patterns look like cortical areas.7/n https://t.co/VJt2GfvSez
2022-11-08 02:19:50 How does gene expr change across the cortex? 3 models: (1) same cell types across areas but the fraction of each cell type is different; (2) spatial gradient in gene expr shared across types; (3) cell type-specific spatial gradients. We found all 3, depending on the gene. 6/n https://t.co/K98wQK1QVN
2022-11-08 02:19:49 Reassuringly, these cell types are also distributed in layers (and sublayers), which is consistent with previous spatial transcriptomic studies. But since we have the whole cortex, we can go beyond layers and assess distribution across areas.5/n https://t.co/1qQCboqaJv
2022-11-08 02:19:47 Our data had sufficient transcriptomic resolution to distinguish the finest-grained cell types: Hierarchical clustering resolved fine-grained cell types that recapitulated the leaf-level clusters in previous scRNAseq data.4/n https://t.co/udfPGgvMeO
2022-11-08 02:19:46 De novo clustering of 1.2 million cells from 40 sections discovered all excitatory subclasses seen in previous cortical scRNAseq data. Although our genes were optimized for cortical excitatory neurons, we also resolved many subcortical structures.3/n https://t.co/Vj32ybyGCY
2022-11-08 02:19:45 We originally developed BARseq as an extension of MAPseq to associate genes and projections using barcodes. Here we look only at endogenous genes in a mouse brain (no projection mapping). It’s fast &
2022-11-08 02:19:44 This work was led by Xiaoyin Chen (now at Allen), with lots of fun collaborations with Stephan Fischer, Aixin Zhang, Jesse Gillis. They all apparently anticipated the current Twitter crisis long ago by not signing up, leaving me to deliver this tweetstorm. 1.5/n
2022-11-07 14:18:27 nice discussion of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure" and a stronger version of it: "When a measure becomes a target, if it is effectively optimized, then the thing it is designed to measure will grow worse." https://t.co/CRRYekd9Z5
2022-11-07 00:14:56 sound advice: audience knowledge is often bimodal (a mix of novices and experts), so a presentation aimed at the mean fails for both groups https://t.co/oStzADKBvs
2022-11-07 00:06:27 @ylecun @AlexTensor @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @isbellHFh @MIT_CSAIL @ieee_itsoc while we're at it, i'll mention that (surprisingly) he did his PhD research at @CSHL
2022-11-06 19:00:58 @PessoaBrain @cian_neuro @NicoleCRust @WiringTheBrain understanding geno-->
2022-11-06 18:56:50 @cian_neuro @PessoaBrain @NicoleCRust @WiringTheBrain maybe screwed in terms of understanding. But in principle we could fix a disease w/o understanding the whole genotype-->
2022-11-06 18:16:04 @NicoleCRust @cian_neuro part of the challenge of course is that many/most genes have many functions. It's like asking "what is the function of a particular neuronal type like PV cells?" or "what is the function of short term synaptic plasticity?" If you ablated PV cells a lot of things would go wrong
2022-11-06 18:05:44 @karlbykarlsmith I tried variants of "show me your work" and "think it through step-by-step" but sadly they didn't seem to help
2022-11-06 18:02:56 @NicoleCRust Maybe bcs it's all about transposons, as hypothesized by @joshdubnau for many neurodegenerative diseases. Eg https://t.co/ESp5LMrDUJ
2022-11-06 17:24:12 @SussilloDavid @koerding Interesting. So what's an example of a non math error in an LLM arising from this continuous/discrete issue?
2022-11-06 16:47:11 @SussilloDavid @koerding But all of language is discrete valued right? Eg we say cat or tiger not typically some amalgam.
2022-11-06 16:41:02 @glupyan @drghirlanda That's definitely what a subset of people in the ANN community were interested in for a while
2022-11-06 15:05:23 "Can a Cognitive Scientist understand a large language model?" would be a great 2022 followup to @koerding (2017)'s "Can a Neuroscientist understand a microprocessor?", which was a followup of Yuri Lazebnik (2002)'s "Can a Biologist Fix a Radio?" https://t.co/1SEvKf6XUZ
2022-11-06 14:12:37 @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ as @aniketvartak argues, it's not "just" autocomplete + interpolation...something more interesting seems to be going on. I wonder whether one could get at the underlying computation by framing this as a problem in cognitive psychology and borrowing methods from that?
2022-11-06 01:20:31 RT @AdamParkhomenko: retweet this 1,000 times https://t.co/y6eKOv0OBk
2022-11-05 15:37:34 RT @mbeisen: If we redistributed the money the US spends on science publishing to PhD students, they would each get over $15,000/year
2022-11-05 14:11:30 RT @billybinion: There's a case in Texas that could make it a crime to do basic journalism. And no one is talking about it.It concerns a…
2022-11-05 02:51:56 @ndronen @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ I generated a bunch. Fooled around for half an hour. Didn’t figure out the pattern. I figured someone has probably figured it out by now no?
2022-11-05 02:28:42 @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ I’m not trying to solve it. I’m trying to understand it. What is it doing? Autocomplete with interpolation is probably part of it, but wouldn’t explain getting only the middle three digits wrong
2022-11-05 01:43:07 @josepheschroer https://t.co/kxoAdCVwYf
2022-11-05 01:41:47 @PsychBoyH I think there’s a great deal of knowledge about the world that is completely invisible to language. It’s what animals know about interacting with the world. It’s the “dark matter” of language.
2022-11-05 01:41:03 @glupyan A priori I have no expectation. But it’s been well known that they get the answer right for small numbers, but only approximately right for large numbers. I’m trying to understand if there’s any rhyme or reason to “approximately right”?
2022-11-04 23:09:45 @djgish @aniketvartak True. For this, I’m kind of more curious about why it’s getting the answer wrong than how to make it get the answer right.
2022-11-04 22:24:44 @sir_deenicus @neuroetho @KordingLab Quite possibly. Curious to understand it. Kind of like alien cognitive psychology
2022-11-04 20:56:35 @sir_deenicus @neuroetho @KordingLab It’s not consistent though. Slightly different ways of asking will give slightly different answers. But they are all kinda similar and all “look plausible” at a glance.
2022-11-04 17:19:43 @aniketvartak the multiplication algo is explained on many sites, although it is true that most involve pictures (or these days, videos), eg https://t.co/3MasiFykQi (Someone pointed out that LLMs would be a lot better at math if there were a standard web format for representing math)
2022-11-04 16:59:51 @ndronen ok, sure, incorrect interpolation of complex hidden states. But exactly how would it have to be representing numbers/multiplication to get this particular kind of error? With enough of these errors, could we infer its mental process for doing arithmetic?
2022-11-04 16:34:03 Language models are notoriously bad at arithmetic. I asked GPT3: what is 2973 × 573? It answered 2973 × 573 = 1702629. Correct: 1703529, ie the middle digits are wrong. I thought it was just looking for nearby probs that it memorized, but maybe it's something more complicated? Any ideas what?
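The error pattern in that example is easy to check mechanically. A quick sketch (`digit_diff` is my own helper, not anything GPT-3 provides):

```python
# Verify the multiplication and locate which decimal digits GPT-3 got wrong.
def digit_diff(a: int, b: int):
    """Positions (0-indexed from the left) where the decimal digits of a and b differ."""
    sa, sb = str(a), str(b)
    assert len(sa) == len(sb), "compare equal-length numbers"
    return [i for i, (x, y) in enumerate(zip(sa, sb)) if x != y]

correct = 2973 * 573
print(correct)                       # 1703529
print(digit_diff(1702629, correct))  # [3, 4]: only the two middle digits differ
```

The leading and trailing digits match, consistent with the observation that models tend to get the "easy" boundary digits of a product right while interpolating the middle.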
2022-11-04 15:47:19 @jerem_forest @PessoaBrain https://t.co/KGy0y4kIZU
2022-11-04 15:44:53 @cloois @PessoaBrain sure. i guess it's about what's easier to compute given specific machinery. Eg GPUs make certain functions very efficient. My claim is that since you can replace a single neuron with just a small number of ANN point units, this has minimal effect on what is easy to compute
2022-11-04 12:20:46 @WilliamMReddy @PessoaBrain we know a lot about real synapses. they are dynamic (Δstrength by x10 in <
2022-11-04 02:46:13 @PessoaBrain so yeah, there are a gazillion differences btw bio and artificial neurons. But not clear that they are fundamentally important to computation...
2022-11-04 02:44:53 @PessoaBrain i do think there are some potentially important differences, but mostly having to do with spiking..we dont really know how to compute with spikesalso i think synaptic dynamics (over short time scales) could be really important.
2022-11-04 02:43:27 @PessoaBrain my grad thesis was on whether the complexity of dendrites actually offered a real computational advantage over point summation nodes. My conclusion was that you could replace a single neuron with a "subnet", and thus there was no real difference :-(https://t.co/LcZ3itMt9t
2022-11-04 00:51:06 @PessoaBrain Hmm. Do you disagree?
2022-11-03 18:55:23 A few years ago many worried that AI would replace unskilled workers such as drivers. It now seems increasingly likely that AI will disrupt office work first. So if you want a safe career, consider plumbing or nursing...anything requiring interaction with the physical world
2022-11-03 17:27:48 It is hard to overstate how important Hopfield nets were to the evolution in the 1980s of what we now call comp neuro and neural networks. Although both fields existed, Hopfield's '82 &
2022-11-03 16:23:22 @davidchalmers42 @bleepbeepbzzz @GaryMarcus @ylecun @raphaelmilliere @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez What is an example of a continuous symbol? Does it differ from what a neuroscientist would call a “representation“?
2022-11-03 03:57:57 Somewhat better on the second attempt...5/6? (I've never published in, much less served as editor of, The Journal of Neuroscience Methods) https://t.co/rK7C1c1i6k
2022-11-03 03:56:24 I finally got a chance to play with GPT3. Of course, i couldn't resist the equivalent of Googling myself. (I've never been to Australia, am not at UC Irvine, and am not a computational biologist...but i am a prof &
2022-11-03 00:35:33 @clhurtt indeed it is! my mom is even more of a night owl than i am. (She turns in at 4 am)
2022-11-02 13:50:20 @TPVogels There are indeed gazillions of differences btw BNNs &
2022-11-02 13:30:54 Once again the morning lark-dominated MSM is perpetuating the myth that there is something virtuous about being naturally alert in the morning. As a night owl, I resent the implication that there is something wrong with only becoming fully functional after sunset https://t.co/9SzsNuTKmL
2022-11-02 01:07:09 @ilyasut Indeed, modern AI is the culmination of 75 years of applying engineering to make insights from neuroscience useful. Here is our call for more NeuroAI to keep the momentum going https://t.co/Rgc7a0KNQo
2022-11-01 23:42:07 @GaryMarcus Probably not robust yet. But perhaps useful soon
2022-11-01 23:40:58 @PaulTopping I don’t think anyone is claiming anything about agi here. Perhaps a useful tool already, perhaps just a harbinger
2022-11-01 22:37:23 @PaulTopping True but I bet 90% of the things people need to code up are no less similar to what’s in the database
2022-11-01 14:06:43 @cdk007 good question...but i will say that a lot of my own programming/debugging involves sanity checks like these
2022-11-01 13:19:38 Pretty mind boggling: an LLM programs up a simulation to solve a probability problem, starting from a simple natural language problem statement. Is this as cool as text-to-image, or is it just a parlor trick? https://t.co/pvWkRVlF0m
2022-10-31 12:28:14 RT @yishan: [Hey Yishan, you used to run Reddit, ]How do you solve the content moderation problems on Twitter?(Repeated 5x in two days)…
2022-10-29 14:22:11 @IntuitMachine for better or worse AFAIK there is no technical use for that word so we can abuse it at will
2022-10-29 14:18:39 @IntuitMachine perhaps a worthwhile campaign to have started in 1950 to nip possible misunderstandings in the bud but i think that ship has sailed #mixedmetaphors. At this point i think it's best to just define words clearly and move on
2022-10-29 14:11:49 @robwilliamsiii @MillerLabMIT https://t.co/SlhsxSrP53
2022-10-29 14:10:30 @IntuitMachine That said, the word "information" appears prominently on page 2. And within just a few years everyone was calling it "information theory," including eg EEs like Robert Fano (1950) https://t.co/qpp07m1Su9 https://t.co/BHpLY9TTKs
2022-10-29 14:05:38 @IntuitMachine I only use information in the formal Shannon sense. A useful concept but can be misused. Always a risk when a popular word acquires a technical meaning, like "significance" in stats. Or even "temperature"...40F skiing in Utah *feels* a lot less cold than on a foggy day in SF!
2022-10-29 13:03:26 Conclusions from latest MAPseq paper https://t.co/vsX9T7m4K6
2022-10-29 13:02:32 RT @dinanthos: This organization enables parallel computations and further cross-referencing, since olfactory information reaches a given t…
2022-10-29 13:02:29 RT @dinanthos: We propose that olfactory information leaving the bulb is relayed into parallel processing streams (perception, valence and…
2022-10-29 12:38:48 @tdietterich @ylecun I imagine it is largely preprogrammed, just as human empathy is largely preprogrammed
2022-10-29 12:24:29 RT @kevincollier: This is as good as everybody says, really feels like the single most essential reading on today's big news.https://t.co/…
2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…
2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius
2022-10-28 12:43:25 @WiringTheBrain @bdanubius
2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3
2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly. “The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R
2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &
2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it, people would compete to provide *valuable/useful* reviews. And it would allow any interested reader to self-select as a (post-pub) reviewer
2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous
2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.
2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.
2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)
2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)
2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?
2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs...https://t.co/LG0CHdRAsH
2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9
2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution, and that's what motivated them to do these expts, and so they're sharing their actual motivation? You may question whether it IS relevant but: https://t.co/vFHS5k2OAh
2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j
2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone-->
2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…
2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s
2022-10-24 19:16:18 I just contributed to @actblue. But i am reluctant to contribute again. Why, you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!! *** Plz provide an opt-out option AT SOURCE if you want continued engagement ***
2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"
2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both
2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?
2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!
2022-10-24 02:42:32 @ChurchillMic luckily we have just the analogy for you in the white paper. Briefly: The Wright brothers weren't trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fish. AGI is a misnomer. What people want is AHI. ("general" -->
2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper, which was: "Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers"
2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit.Just my opinion
2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result:"Hey Feynman, Did you ever read a paper in Acta Mathematica that directly changed the way you did something??"If no, then no need for physicists to learn any math, eh?
2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI"@gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj
2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuro. Big ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so good. So q is if all future big advances are endogenous or if neuro still has more to offer
2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problem https://t.co/zpkVADGSlK https://t.co/0rGLQim1hm
2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. What we really mean is A-Human-Intelligence. Bees are incredible, but if we want to mimic HUMAN intel, mice are closer https://t.co/A61XAQC4z5
2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design
2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structures. Finches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fast https://t.co/AgCLTUuHJ3
2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. Un-nuanced provocation is a good way to build engagement. I should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens
2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <
2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen to believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds
2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab There's lots humans can do that animals can't (and vice versa). But most of the interesting ones are IMO coupled to language, which likely evolved 100K-1M yrs ago--a blink. Thus a few tweaks enabled a large change in ability. Like qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2
2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap into all the things we've learned in neuroscience in the last 3 decades
2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people. However, when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible pool. We will need a system for assessing reviewer quality
2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor.His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY
2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI
2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer! It turns out that ideas take years or decades to percolate from basic science to the clinic. So understanding the influences will always seem like archeology
2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields. (BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/ minimal influence from neuro)
2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?
2022-10-22 14:40:50 @garrface hmm. If memory serves, S&
2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while
2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet...if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interesting. But he also raises an interesting question about future neuro-->
2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless
2022-10-22 14:11:12 @KordingLab No. Neuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need" then it's a great way to find hints as to what path to follow https://t.co/Q8QczhC3zu
2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering
2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yF FIN+1
2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4
2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro
2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG
2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too
2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computer. My belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key
2022-10-22 00:35:30 @nidhi_s91 love to hear about it. To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting...neural networks and comp neuro used to be very tightly integrated
2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields
2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls"https://t.co/INYhrLWmDD
2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute
2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another
2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview of neuroscience, no? https://t.co/AZCQZ38NRF
2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).
2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"
2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode is a separate question. If you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform
2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine
2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best. Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin
2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropy. There are other common uses of that word, and this might be at the root of some of the confusion here
2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?
2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words to use to describe how it happens. There was a recent discussion about whether it's correct to call cells "machines" which imo was also just semantics.
2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)
2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?
2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedalwould you not say that his genes "determined" his (quadrupedal) phenotype?
2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration
2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent
2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?
2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB
2022-10-20 14:21:01 White paper — Rallying cry for NeuroAI to work toward the Embodied Turing Test! Let's overcome Moravec's paradox: Tasks "uniquely" human like chess and even language/reasoning are much easier for machines than "easy" interaction with the world which all animals perform. https://t.co/ehKRWl7rgJ
2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant, synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher level units like areas etc I agree is a bit less clear.
2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...
2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C
2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.
2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....
2022-10-19 02:17:30 Bees can learn complex tasks from other bees! https://t.co/BL7ibnlJXg
2022-10-17 11:55:49 @hubermanlab "light drinking was associated with a relative risk of 1.04...A 40-year-old woman has an absolute risk of 1.45% of developing breast cancer in the next 10 years..if she’s a light drinker, that risk would become 1.51%...and 1.78% for the moderate drinker"https://t.co/jZnY3cAyCR https://t.co/KS1S1LKVLg
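The arithmetic behind the quoted figures is just scaling a baseline absolute risk by a relative risk; a minimal sketch using only the numbers given in the tweet:

```python
# Converting a relative risk to an absolute risk, using the tweet's numbers:
# baseline 10-year breast-cancer risk of 1.45% for a 40-year-old woman,
# relative risk 1.04 for light drinking.
def absolute_risk(baseline_pct: float, relative_risk: float) -> float:
    """Absolute risk (%) given a baseline absolute risk and a relative risk."""
    return baseline_pct * relative_risk

baseline = 1.45  # % over the next 10 years
light = absolute_risk(baseline, 1.04)
print(f"light drinker: {light:.2f}%")                            # ~1.51%
print(f"added risk: {light - baseline:.2f} percentage points")   # ~0.06
```

The point of the quoted passage: a 4% relative increase on a 1.45% baseline is only about 0.06 percentage points of absolute risk.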
2022-10-16 18:50:14 @eliwaxmann @RichardSSutton No it shows why a human with growing intelligence might become disgruntled. I think that the desire to lead, or at least not to be bossed around, has more to do with social drives evolved millions of years ago. Primates always jockeying for better position but not all species
2022-10-16 18:17:40 @eliwaxmann @RichardSSutton One of my faves!
2022-10-16 17:39:13 @RichardSSutton (I often worry that I don’t provide enough purpose for my dog who mostly just lays around all day. I think he’d be happier herding sheep all day or something )
2022-10-16 17:37:10 @RichardSSutton It's not obvious that an "advanced" AI will resent being "subservient". I suspect that resentment is built into humans due to our primate lineage. But if we evolved "advanced" AI modeled on eg dogs they might be thrilled to be kept busy doing what we want…
2022-10-15 18:03:03 @manos_tsakiris or: let's just transition to a system where everyone uploads their finished paper to arXiv/bioRxiv, followed by postpub review. No more journals-as-gatekeepers https://t.co/PUpfncfyIj
2022-10-14 20:26:52 RT @TrackingActions: Neuroscience needs large scale efforts to crack this — Brain Observatories are one key path forward. Lead by Christo…
2022-10-14 18:29:33 @neuralengine @Labrigger @SussilloDavid @hardmaru Yes, and agriculture in the Near East and elsewhere advanced for at least 5000 years before the invention of writing. presumably, much of this knowledge was transmitted through oral traditions.
2022-10-14 03:22:02 @SussilloDavid @hardmaru also worth noting that Neanderthals might have had language, in which case language-capable hominids were hovering on the brink of survival as a moderately successful species for >
2022-10-14 03:20:47 @SussilloDavid @hardmaru ..what i think is key is that each generation picks up survival tricks (like agriculture), and passing these tricks along to the next generation (as well as organizing activity in groups) requires language 3/n
2022-10-14 03:20:02 @SussilloDavid @hardmaru So it's not clear lang was that *useful* by itself. In other words, it's not clear language itself is enough to enable a person (or even tribe) to outperform pre-linguistic competitors...2/n
2022-10-14 03:19:40 @SussilloDavid @hardmaru perhaps. Certainly my introspection supports this view. But i think it's worth noting that for most of the >
2022-10-14 02:35:58 @EddyVGG @hardmaru yes i think the importance of language shaping your world view goes back to linguists/anthropologists Sapir &
2022-10-14 02:21:44 @hardmaru i think that's exactly right. I think the key innovation was language, which allows for the accumulation of knowledge across generations
2022-10-13 20:47:46 @haiderlab @MillerLabMIT @martin_a_vinck @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll the results seem largely consistent w/this paper, which showed LFPs could predict trial-to-trial variability in sound-evoked PSCs to within quantal fluctuations, no? https://t.co/FxsCPJwLsR
2022-10-13 18:29:38 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll and then there are changes in the shape of the action potential as it invades the synaptic terminals.
2022-10-13 18:28:49 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll somewhat independent of changes due to measurement are actual changes in the shape of the somatic action potential. But even those don’t necessarily have functional consequences downstream at the synaptic terminals.
2022-10-13 18:27:45 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll The shape of the action potential can indeed depend on how you measure it. but APs are typically thought to be largely all or none events, so i’m not sure we want to be focusing on those measurement subtleties
2022-10-13 17:55:38 @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll "Don't spikes also change depending on how you measure them? " What do you have in mind here?
2022-10-13 16:51:37 @kendmil @martin_a_vinck @PessoaBrain @neuralengine @MillerLabMIT @LKayChicago @NicoleCRust @GauteEinevoll in other words, as discussed in detail in another branch, the LFP is some complex &
2022-10-08 04:47:54 @MillerLabMIT @LKayChicago @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms so just to be clear, when you say LFPs work with spikes, do you mean ephaptic coupling (https://t.co/PXOuDS8Hph)or something else?
2022-10-08 04:27:44 @LKayChicago @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms tbh even after 427 tweets i dont fully understand @MillerLabMIT's view but i think he believes LFPs are no more or less causal than spikes whereas I believe spikes are causal and LFPs are an often useful way of measuring aggregate activity of many neurons https://t.co/WgzqtvcYhG
2022-10-08 04:05:40 @LKayChicago @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms There is no question or debate about whether the LFP is a potentially useful signal for indicating what’s happening. We all agree that it is. The debate is whether the LFP is somehow “causal" in a way that is independent of the spikes.
2022-10-08 04:02:49 @JustinKOHare Yes 9/5 full house. So who wins?
2022-10-08 03:41:04 Plz resolve this conflict over high stakes Texas Hold'em with the family. Two players both have 9s &
2022-10-07 20:03:50 @MillerLabMIT @SussilloDavid @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms i hope no one is feeling disrespected. I think lively debates among people who disagree about ideas (w/o being disagreeable) is twitter at its finest.
2022-10-07 19:54:07 @MillerLabMIT @SussilloDavid @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms There is also clear evidence that optical signals including optical birefringence are "coupled" to neural activity &
2022-10-07 19:51:04 @SussilloDavid @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Here is a gedankenexperiment https://t.co/XRefa8yNcx
2022-10-07 19:31:52 @SussilloDavid @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Exactly. famously, John Eccles found a way to rescue dualism: God loads the dice for each roll of the random release of neurotransmitter at the synaptic terminal. Beautifully untestable https://t.co/kFXSS0q1R4
2022-10-07 19:27:35 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms This argues that networks of neurons can be viewed as a dynamical system with oscillatory modes. Uncontroversial. Do LFPs play a causal role? We have to ask whether ephaptic coupling is essential to explain observed oscillations or synaptic coupling is sufficient.
2022-10-07 19:00:59 @AndrewHires @PessoaBrain @MillerLabMIT @DrYohanJohn @JaumeTeixi @behaviOrganisms Wait, you are writing a grant on the functional role of optical birefringence, independent of action potentials?
2022-10-07 18:58:09 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Consider the following gedankenexperiment: Compare (1) perturb a specific subset of neurons projecting from X->
2022-10-07 18:55:06 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms i agree that computation "emerges" from populations of neurons wired in the appropriate way. But in the absence of evidence to the contrary, i see no reason to posit emergent fundamentally new biophysical principles/mechanisms underlying these computations
2022-10-07 18:31:36 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms No, the evidence is not the same. You can measure the effects of spikes on neurons in a dish, or in many settings where LFPs are essentially absent. And sometimes we can manipulate spikes/behavior in a small enough subpopulation of neurons so that the effect on the LFPs is small
2022-10-07 17:59:43 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms I am still unclear whether you are arguing that measuring population signals like LFP can be useful (they clearly can)
2022-10-07 17:54:39 @MillerLabMIT @nbonacchi @DrYohanJohn @JaumeTeixi @behaviOrganisms @PessoaBrain Agree that interpretation of causal manipulations can be tricky. But understanding without causal manipulations is even trickier, and sometimes impossible.
2022-10-07 14:58:57 @PessoaBrain sorry, tweet was misplaced in the thread. BOLD is great &
2022-10-07 00:45:46 @PessoaBrain @MillerLabMIT @DrYohanJohn @JaumeTeixi @behaviOrganisms Sure! And let's record the optical birefringence as well...it would be presumptuous (dogmatic even) for us to assume that these 0.001% changes in membrane optical properties do not play a causal role in generating complex behaviors https://t.co/htDiinOMKZ
2022-10-07 00:12:28 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms 2/2 Second, what are the best signals to record in order to gain insight into those computations that underlie behavior? That's an empirical question, and could be single-unit spikes, multi-unit spikes, LFP, calcium, etc.
2022-10-07 00:10:34 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms I think there are 2 separate ideas here to disentangle. 1) how do neurons communicate in a network to perform the computations that underlie behavior? I think we understand the biophysics pretty well, just not how they are organized to produce behavior...1/2
2022-10-05 23:12:58 RT @L_andreae: Please amplify and RT, looking for amazing scholars and mentors, programme going from strength to strength! @network_alba @F…
2022-10-04 15:23:17 @tyrell_turing @KordingLab yes, we have much older machinery for predicting the properties of the physical world. Squirrels can guess whether a branch will hold their weight w/o i think invoking the kind of "causal machinery" we are discussing here.
2022-10-04 15:20:11 @tyrell_turing @KordingLab our "causal inference" machinery evolved for social prediction ("why'd he hit me? bcs he's angry" etc)Religious myths then generalized these explanations to the physical world, but with priors of intentionality ("why is there thunder? bcs Thor is angry")
2022-10-04 03:43:53 @mbeisen 92--"Walker's Paradise" (I grew up in Berkeley)
2022-10-03 21:19:18 RT @AnthonyMKreis: So, @TheOnion filed an amicus brief before the Supreme Court in defense of parody under the First Amendment… and it’s ex…
2022-10-02 21:41:42 @GaryMarcus @sd_marlow Is walking really a solved problem? I've seen mind boggling videos (eg by Boston Dynamics) of successes, but as with self-driving I suspect there remain a considerable number of challenging "edge cases", no?
2022-10-02 03:21:10 RT @neuromatch: We believe Everyone should be able to read and publish research for free Research publishing should be commonly owned…
2022-09-29 17:53:57 please sign this open letter calling for reforms to academic publishing https://t.co/eyJfEbszhg
2022-09-28 13:03:09 Hope you all join us at noon EST today https://t.co/3CLZ8s2d0S
2022-09-28 12:51:24 OUCH: "now is not the time to idle around inventing particles, arguing that even a blind chicken sometimes finds a grain. As a former particle physicist, it saddens me to see that the field has become a factory for useless academic papers." https://t.co/K52LRcyJuL
2022-09-28 03:39:59 @GaryMarcus @davidchalmers42 @ylecun @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez "symbols are just things that stand for things" -- so that sounds like what as a systems neuroscientist I might call a "representation"...and neural nets are pretty good at transforming these representations..are those not "operations over variables"?
2022-09-28 03:14:26 @GaryMarcus @davidchalmers42 @ylecun @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez well, we kinda know for sure that symbols somehow "emerge" from neural circuits/activity, 'cuz they're in the brain, right? Whether they are differentiable in the brain is an open question, but i guess @ylecun argues that from an engineering POV we better make sure they are
2022-09-27 14:08:03 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez so how is my dog's "symbol" for me different from a representation?
2022-09-27 13:55:10 @bleepbeepbzzz @davidchalmers42 @GaryMarcus @raphaelmilliere @ak_panda @ylecun @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez I feel like throwing consciousness into the definition here is a bit like reducing this to a previously unsolved problem. (Also, many people use symbols without routinely introspecting about them. Most people cannot state the rules of syntax or phonology of their native language)
2022-09-27 13:49:31 @GaryMarcus @ylecun @davidchalmers42 @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez what is an "explicit symbol" and how can i distinguish it from an "implicit symbol" (= mere representation)? Or should we just use the Potter Test ("I know it when i see it")?
2022-09-27 13:45:54 @davidchalmers42 @GaryMarcus @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez but i would say that representations also compose to yield more complex reps. trivially, a dog can recognize you based on how you look, sound or smell. so what distinguishes this "complex representation" from a symbol?
2022-09-27 13:43:36 @ylecun @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez exactly, i would say animals reason and plan. this is why i am puzzled by @davidchalmers42's worry that symbols will "collapse" into representations...some people apparently are making a distinction that i do not understand. https://t.co/IxB6FW8Swb
2022-09-27 13:15:40 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez As for "planning", it's clear that animals plan. (wolves hunting etc). So do animals use symbols to plan, or are they doing this with mere representations? I am not clear whether there is some rigorous distinction @davidchalmers42 is making btw representations and symbols
2022-09-27 12:59:59 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez "aside from language" includes language.
2022-09-27 12:56:36 @davidchalmers42 @raphaelmilliere @GaryMarcus @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez aside from language and a handful of other constructed systems (chess, math, music, etc), what is the evidence that people use symbols? Ie why is it important that symbols be distinguished from the mere representations into which they might collapse?
2022-09-26 18:55:20 @ylecun @bleepbeepbzzz @GaryMarcus @raphaelmilliere @davidchalmers42 @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez This is great! Looks like we've gotten to the crux of the disagreement between @ylecun and @GaryMarcus! Just a misunderstanding...dispute resolved!
2022-09-26 17:00:31 @bleepbeepbzzz @GaryMarcus @ylecun @raphaelmilliere @davidchalmers42 @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez What are those two meanings of "symbol"?
2022-09-26 16:27:55 join me, Matt Botvinick &
2022-09-24 14:52:53 How academics really feel about scheduling meetings https://t.co/bvOmJO24Pq https://t.co/FVJWpQMeeq
2022-09-23 18:30:08 Faculty search in COMPUTATIONAL/THEORETICAL NEURO at CSHL. Please spread the word! https://t.co/QoSIo3qWIy
2022-09-22 04:48:35 @ylecun @davidchalmers42 metaphorically, lang is like the rational nums--closed under basic ops. Start in language, end in language, so easy to fool yourself that everything is contained. Linguistic dark matter is like irrationals: almost every num is irrational, &
2022-09-22 03:26:03 @davidchalmers42 @ylecun i think the lights on your ceiling are less "dark matter" than "heretofore unobserved matter", ie stuff that could be described in language but perhaps hasnt yet....less and less of that as the web grows
2022-09-22 03:20:56 @n_g_laskowski strongly disagree. I love a good discussion during my talks. In fact, i often offer a prize for the first question. (usually the first question is "what's the prize?" but at least it breaks the ice)
2022-09-22 03:15:25 @aniketvartak @patchurchland yes our social primate ancestors evolved to model the behavior of other individuals (will he attack me?) We then turned that system onto ourselves to predict our own actions. And then we are fooled into believing those predictions were causal...
2022-09-22 03:11:14 @davidchalmers42 @ylecun well, the cog rule is easy to state (CW &
2022-09-22 01:29:35 @ylecun @davidchalmers42 These examples requiring "physical intuition" are linguistic dark matter, ie stuff almost completely invisible to language. It's very hard to teach a person how to ski or play tennis with just words
2022-09-21 23:30:22 @patchurchland like animals we interact with the physical world using world models...squirrels jumping tree-to-tree, spiders catching flies, people hitting tennis balls and driving cars. Many of these are very hard to explain in full detail using language and are likely dark matter to LLMs
2022-09-21 21:05:11 Moving essay about finishing up a life of science after diagnosis with a terminal disease https://t.co/rOwfdYSVnZ
2022-09-20 21:53:42 @alexeyguzey https://t.co/mAt0BlqlHN
2022-09-20 21:47:02 @alexeyguzey if Uber drivers were recognized as employees then yes Uber would be required to pay minimum wage, and benefits. At CSHL, benefits (health, SS, etc) are an additional 40% of base pay
2022-09-20 21:44:01 @JudiciaIreview @alexeyguzey no, i think that would only be true if (1) they had fares 100% of the time, which they don't
2022-09-20 18:20:13 @alexeyguzey I think it would make more sense to demand Uber pay a floor minimum wage of $15/hr + expenses, but i think Uber lobbied against that. So if you accept the idea of minimum wage for drivers what's the alternative?
2022-09-20 17:19:56 @prokraustinator @MIcheleABasso1 @ElDuvelle @neuralreckoning @ashleyruba yep, we've been boiling frogs for 30 years and we're surprised they're finally jumping out.
2022-09-20 17:18:18 @alexeyguzey i guess the idea is $30/hr while you're on shift, right? but ubers dont have 100% occupancy, and there are operating costs (gas, etc). So to net $15/hr, charges when occupied need to be higher, no? Not sure if they got the exact numbers right but that could be the logic
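The back-of-envelope logic in this tweet can be sketched as follows; the 60% occupancy and $5/hr operating-cost figures are illustrative assumptions, not numbers from the thread:

```python
# Sketch of the thread's logic: a driver nets a target wage per hour on shift
# only if the *occupied* hourly rate also covers idle time and operating costs.
# Occupancy and cost figures are illustrative assumptions, not data.
def required_occupied_rate(target_net: float, occupancy: float,
                           costs_per_hr: float) -> float:
    """Hourly rate while carrying a passenger needed to net target_net per shift-hour."""
    return (target_net + costs_per_hr) / occupancy

rate = required_occupied_rate(target_net=15.0, occupancy=0.6, costs_per_hr=5.0)
print(f"${rate:.2f}/hr while occupied")  # $33.33/hr
```

So under these assumed figures, netting $15/hr requires charging roughly twice that while a fare is in the car, which is consistent with the $30/hr number being debated.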
2022-09-20 17:12:39 @prokraustinator @MIcheleABasso1 @ElDuvelle @neuralreckoning @ashleyruba i think the bigger difference with a residency is that unlike a PD it is not open-ended. Like med school, you start on day one and get squirted out at the end like a watermelon seed. I think the wage issue would be better tolerated if we could promise a fac position after eg 3 yr
2022-09-20 15:58:47 @tyrell_turing @andpru Just as I’m very confident *not* sleep training was the wrong choice for me. That’s six years I was in a fog. Like if I had done a surgical residency.
2022-09-20 15:00:29 My advice to all new parents is SLEEP TRAINING! Sometime in the first few months. My greatest regret in raising 2 kids is that we "co-slept" for 3 yrs each. That was 6 miserable years of sleeping sometimes 3-4 hrs/night. (FTR: They are turning out great. We suffered) https://t.co/kSwlOKOVTW
2022-09-18 22:46:56 @EvelinaLeivada @GaryMarcus It matters to me bcs I believe humans aren't so special, and that almost all our capabilities have animal antecedents. Since I work on neural circuits in animals I would like to be able to connect concepts like "understanding" to things we can study in non-human animals
2022-09-18 20:14:46 @EvelinaLeivada @GaryMarcus ok, i'll bite. what does it mean to "understand"? And can animals "understand"?
2022-09-18 20:13:56 @blamlab That correlation does not imply causality was one of the main reasons people got so excited about optogenetics 15 yrs ago...we in systems neuro knew it, but had only the crudest tools(yes, it's tricky due to eg compensation. geneticists have been confronting this for decades)
2022-09-18 20:03:21 @GaryMarcus I dont think we understand the meaning of "understand". Just as "It depends on what the meaning of the word 'is' is." https://t.co/TrzRx3lqfs
2022-09-17 22:48:32 @aniketvartak Right the fact that some humans “happen” not to understand could arise from eg lack of interest.
2022-09-17 22:30:48 @aniketvartak Are you saying that some humans just *happen* not to understand how bicycles work and so make mistakes whereas dalle fundamentally *cannot* understand?
2022-09-17 20:41:36 @neuroecology yikes
2022-09-17 20:19:22 Even though i am a terrible artist, as an avid cyclist and someone who has assembled and repaired bikes over the years, these are mistakes i would never make. (Dalle makes some strange choices esp. around the crankset) https://t.co/hC6hsH3mmA
2022-09-17 20:12:37 Great starting point for discussion of what it means for AI to "understand" something. Many humans draw nonsense bicycles clearly showing no understanding of how bicycles work (sorry to ruin the joke, @alexeyguzey ) https://t.co/xg9WmHluk0
2022-09-16 20:54:02 @PamelaReinagel it's pretty hard to find a decent-paying job, with good health insurance, retirement benefits, $$ for kids' educations, etc, where 30hrs/week is considered adequate to keep the job. Lack of a good social safety net in the US is one of the main policy failures i have in mind
2022-09-16 16:29:42 if you watch the video, almost all of the predictions were accurate, except the part about the 30 hr work week, which Keynes predicted decades earlier (15 hrs actually)But that missed prediction was not a technological failure but a series of policy failures. https://t.co/ijJl2R03Qb
2022-09-16 15:36:30 "by the yr 2000, the US will have a 30-hr work week and month-long vacation as the rule. A lot of this new free time will be spent at home...we could watch a football game or a movie shown in full color on our big 3D TV screen. We may not have to go to work...work will come to us" https://t.co/02wRP4C3VI
2022-09-15 03:29:23 RT @patagonia: Hey, friends, we just gave our company to planet Earth. OK, it’s more nuanced than that, but we’re closed today to celebrate…
2022-09-13 01:20:59 RT @joshdubnau: @EricTopol Except for the little fact that virtually everything that biomedical researchers have done since the genome proj…
2022-09-11 14:30:25 @neurograce @neuralreckoning @somnirons I agree that won't work
2022-09-11 13:17:36 @neurograce @neuralreckoning @somnirons Are we talking about anonymizing authors or reviewers? I agree you can't do authors. But you could have a system where reviewers are associated with pseudonyms, so they can develop reputations independent of their real world identity. Like certain Twitter accounts
2022-09-11 03:42:35 @cdk007 Yes, good point. This is an ideal opportunity to teach my 12 yo about bimodal distributions
2022-09-11 03:29:48 @cdk007 One can put a pair of shoes worn by player X on ebay and see how much they sell for. This turns the subjective value into a concrete number. For Curry it's $58K--more than purchase price
2022-09-11 03:13:18 @neurograce @neuralreckoning @somnirons why can't you have anonymized postpub review?
2022-09-10 19:25:47 12 yo asks: What is the fame breakeven point for shoes? Apparently if Steph Curry wears a pair of shoes they increase in value. If I wear them they decrease. So he argues there must be someone just famous enough so the shoes just retain their value. Who?
2022-09-10 16:41:26 RT @historyinmemes: In 1999, only 6 years after the birth of the worldwide web, Bowie spoke about the "unimaginable" effects of the Interne…
2022-09-09 15:57:59 @NicoleCRust So we care about Nernst bcs 'it's foundational for understanding something else that actually matters', ie seizures? I might have gone with: Bcs 'It's foundational for understanding something else that actually matters', ie Hodgkin Huxley. But maybe i dont understand the rules
2022-09-07 04:52:01 RT @GoogleAI: Today we introduce an ML-generated sensory map that relates thousands of molecules and their perceived odors, enabling the pr…
2022-09-05 16:15:50 @rushkoff "How do I maintain authority over my security force after the event?" "making guards wear disciplinary collars of some kind in return for their survival." Someone should warn them: obedience collars have been tried and ultimately fail, as we learned in Star Trek https://t.co/Bsttfob2y7
2022-09-04 18:08:46 @ScottishWaddell 100%
2022-09-04 18:07:18 @ScottishWaddell absolutely! i used the exact wording of a previous poll to test the hypothesis that the outcome of the poll is going to depend strongly on who engages with you on twitter. https://t.co/lnQkCBwAXj
2022-09-04 17:57:45 @anne_churchland @patchurchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @patchurchland I thought philosophers actually do wake up in the morning saying, "Gosh, I wanna solve decision-making (or consciousness or morality) today!" What much smaller subproblems occupy you?
2022-09-04 01:19:25 @Labrigger @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @grimalkina In an ideal world, society would support curiosity driven science out of curiosity. But in practice we compete for finite federal dollars, and it's hard to justify spending $62B on the NIH vs eg $200M for the arts (NEA). (Physics &
2022-09-03 22:32:55 @Labrigger @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @grimalkina Whose main goal? I’m pretty sure that the NIH’s main goal is human health. But of course there can be multiple goals---your goal could differ
2022-09-03 20:25:59 @jbimaknee Much/most of the basic advances that contributed to human health arose from curiosity-driven science not directly related to health. Thus even if my own personal motivation is not health, i still believe what i (and others) do contributes. eg https://t.co/TUWR5ATw5i
2022-09-03 19:52:21 @grimalkina @Labrigger @AdrianoAguzzi @tyrell_turing @koerding @anne_churchland @joshdubnau good point! I just launched the exact same poll, identically worded. So we can compare whether the people who follow me on twitter feel the same way as @Labrigger https://t.co/v0XJUkp2EZ
2022-09-03 19:50:08 What do you want out of your own neuroscience experiments? Insights into the human brain and/or health? Insights into how animal brains work? Or insights into principles of brain function that can lead to better computers/AI? (or something else?)
2022-09-03 18:52:06 @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau I think the only justification for society supporting Neurosci more than any field, like philosophy or art history, is the potential for human health. And i think what we learn about animals is relevant to humans. But if i'm honest, human health is not what drives me personally
2022-09-03 18:03:41 interesting...though this is primarily a statement about who follows you on twitter, right? eg i would expect very different results from each of eg @AdrianoAguzzi @tyrell_turing @koerding @anne_churchland @joshdubnau https://t.co/7dmmnRCZH8
2022-09-03 00:49:44 The Shifting Baseline: how the natural world has changed over the last few generations (and beyond) https://t.co/pn1YorM0Rm
2022-09-02 11:37:01 Sounds amazing! https://t.co/qafbcNuC1Y
2022-09-02 11:31:54 Sounds amazing! https://t.co/qafbcNv9Rw
2022-09-01 13:31:08 LLMs are highly controversial. Some say they verge on sentience and deserve to be treated as people. Others call them glorified lookup tables. @sejnowski argues that they are like the Mirror of Erised, reflecting your expectations of them. Fun read! https://t.co/7pyh7oeNyz https://t.co/FdJQAjdTUB
2022-09-01 13:06:44 @WiringTheBrain here's a complicated story about how killing off wolves (apex predator) in Yellowstone in the 1930s led to a disruption of the ecosystem and decline of many species. Reintroducing wolves in 1995 fixed the problem https://t.co/mJw764ihw8
2022-08-31 19:01:46 @WiringTheBrain Many unintended consequences in public policy, especially tax code. A fave: to evade limits on CEO pay by capping what is tax deductible, companies compensated in options, which ended up leading to even larger pay packages and CEO focus on stock price https://t.co/HOuNzPIcGw
2022-08-31 12:56:37 Are you trained in AI, and interested in doing original research at intersection of Neuroscience and AI? Come to CSHL and join a vibrant community for 1-3 years as a NeuroAI Scholar!(plz retweet)https://t.co/wbbJMB6DoM
2022-08-28 17:11:15 @ent3c it's one thing to know in principle a trait is heritable, another entirely to actually see a signature. In principle the link could involve nonlinear interactions of 100s of genes. Eg ident twin concordance in schizo is 50% but we are not very good at predicting it..
2022-08-28 16:36:51 RT @cshperspectives: I think I'm just gonna keep shouting "Plan U" for the next six months https://t.co/Ag3lzsl3gB
2022-08-27 14:00:51 "We conclude that: 1) scientific societies and the individual scientists they represent do not always have identical interests, especially in regards to scientific e-publishing
2022-08-26 21:27:33 brilliant defense of the current academic publishing business model...explains why the recent government-mandated requirement that all publications be immediately available is very unfair to publishers https://t.co/7INdJNnAlW
2022-08-26 03:28:29 @kohn_gregory @GaryMarcus @joe6783 Thanks for the pointer! looking forward to reading it.
2022-08-24 03:35:14 @GaryMarcus @ylecun LLMs are absolutely amazing, but i agree that without grounding in the physical world they are unlikely to take us across the finish line..
2022-08-24 03:12:54 @GaryMarcus @ylecun are you concerned that there is a lot of verbal knowledge that is never written and hence is invisible to LLMs (at least until they start mining all the data from Alexa and Siri)? Interesting point. My intuition is that not that much is missing, but who knows? We'll find out soon
2022-08-24 02:37:44 @GaryMarcus @ylecun Assuming that much of what we "know" is stored in the connection matrix among our 1e11 neurons, i could (in principle) quantify how well we could predict that matrix from (1) our genes
2022-08-24 01:15:04 @GaryMarcus @ylecun right...so assuming we include in "knowledge" stuff like how to pick up an object, or how to walk without falling over, i agree 100%.
2022-08-24 01:09:51 @GaryMarcus @ylecun At the risk of putting words into @ylecun's mouth, I dont think he was arguing that language wasn't indispensable to "modern humans as we know them". What makes us "uniquely human"--language--is real, but just a small frac of the total...we just arent that different from animals
2022-08-24 00:54:16 @GaryMarcus @ylecun Humans would not have outcompeted animals as effectively if not for language, which allows knowledge accumulation over generations. But without the foundation (which we share with animals) provided by 500Myrs of evolution we would fail like LLMs. Moravec said it best https://t.co/MkoDfaGNf7
2022-08-23 19:09:13 @anne_churchland I have recently switched to using LaTeX for grants and love it. I find that I get at least as much control over figures as under Word. But Overleaf does suck for tracking changes
2022-08-23 04:58:56 @anne_churchland I love Latex because, as the great astronomer Chandrasekhar famously said, a document that beautiful must be true(whereas I am suspicious of even F=ma rendered in Word)
2022-08-22 14:21:33 @crllonghi I very much enjoyed “Other Minds" by Peter Godfrey-Smith
2022-08-20 22:28:54 The real surprise here for me is not so much that Brazil is so big but rather that the Earth is round https://t.co/C7AzsKkknh
2022-08-18 20:34:32 @Elnaz_AK @KordingLab i absolutely suck as an artist, but i'm pretty good with words. I would love to be able to convert words in my head into pictures I could share with others.
2022-08-18 20:22:58 @nicholdav @KordingLab @tyrell_turing only sort of. a scientific review typically is more than just a list of facts with pointers to the papers. A good review presents a worldview which is abstracted from the field. (Current LLMs might not be able to provide that, but that's a different--hopefully resolvable--issue)
2022-08-18 20:16:46 @KordingLab Upon reflection, I am increasingly convinced this is largely a non-issue. Who is the injured party? For whatever reason, art collectors pay top $$ mainly for the original work of art. Perfect replicas can be nearly worthless
2022-08-18 20:12:57 @bradpwyble @tyrell_turing @KordingLab @nicholdav i dont care if they care. Luckily, if Elsevier is like the hypothetical 800 lb gorilla in the world of academic science, the developers of the LLMs are more like King Kong. They will not notice Elsevier's complaints.
2022-08-18 20:10:02 @KordingLab @tyrell_turing @nicholdav I think the answer is obviously "no, we do not want to limit LLM-generated review articles to work not under copyright" though i'm sure Elsevier would like that
2022-08-18 19:18:06 @KordingLab @tyrell_turing @nicholdav I am looking forward to a time when I can ask an LLM to review a corner of the scientific literature. Should that review be limited only to work not under copyright?
2022-08-18 19:16:01 @KordingLab @tyrell_turing @nicholdav It is currently legal I believe for an artist to make a living by generating Picasso or Rembrandt knock offs, as long as they are not trying to pass them off as originals. Images are protected, not styles. Why should the standard for machine-generated images be any different?
2022-08-16 16:54:19 @NoahShachtman @ariehkovler Jack Nicholson in "Five easy pieces" https://t.co/ZDFPsqfdaq
2022-08-16 02:07:34 @StevenBratman @quotebread @SteveStuWill seems interesting! what's the take home message?
2022-08-15 20:00:00 @prokraustinator @quotebread @SteveStuWill But I think you could make a strong argument for some actual "progress". Like amniotes "figuring out" an alternative to laying eggs in the water freed them to explore a lot of terrestrial environments.
2022-08-15 19:58:00 @prokraustinator @quotebread @SteveStuWill yeah, that's why I suggested 1 million year adaptation. And, sure, some modern organisms probably require some things that just weren't around back then.
2022-08-15 19:45:12 @quotebread @SteveStuWill Eg once amniotes evolved, this presumably opened up a bunch of terrestrial niches. So maybe at least some advances are real, more like Gore-Tex than mere fashion differences?
2022-08-15 19:23:54 @quotebread @SteveStuWill Is that really true on long time scales? If you transported a bunch of successful modern plants &
2022-08-15 16:26:28 Great discussion of "effective altruism" and utilitarianism more generally https://t.co/Vkw8vJFqSr
2022-08-15 15:27:32 Congratulations Justus! https://t.co/bGopJipdGh
2022-08-15 15:12:34 @bradpwyble @KordingLab i agree that creative work merits protection. i'm just pointing out that even though the current system might seem to be protecting artists, it mostly isnt. Art dealers, record labels, spotify, disney, etc are reaping most of the profits bcs of a system they created
2022-08-15 14:33:49 @KordingLab Isn't the core issue that we maybe shouldnt be monetizing creative output the way we do? Copyright laws are not protecting "content creators," who see very little of the profit. They are designed to protect big players, eg Disney etc. https://t.co/r91RNU6Ghd
2022-08-14 16:36:05 @ideal_politik @stevesi There is a virtuous cycle. New discoveries allow us to make new tools, which enable new observations, which enable new discoveries, etc.
2022-08-13 20:06:48 @urfagundem and ideas were motivated by observations, which were enabled by tools...and so the virtuous cycle of science continues
2022-08-13 19:02:29 @TrackingActions @kevin_nejad i agree except i would change the verb tense from "have had" to "are having". this is probably the leading candidate for a first big impact of ML in neuroscience, and likely to payoff big in the future! But imo perhaps not quite up there with 2P or optogenetics (yet)
2022-08-13 18:49:48 @aniketvartak Yes, there is a virtuous cycle. New discoveries allow us to make new tools, which enable new observations, which enable new discoveries, etc.
2022-08-13 18:48:45 @aniketvartak IMO this quote is important bcs it highlights a key component of sci progress (techniques) that many scientists undervalue. Relativity might be the (rare) exception...i just dont know the history well enough. But most discoveries were catalyzed by new techniques
2022-08-13 18:31:33 @joshdubnau I very much doubt he would make a units error like that. Presumably he said it takes 1000 nanobiologists to equal one microbiologist.
2022-08-13 18:28:14 @aniketvartak indeed, that was likely closer to the original quote. But Brenner was happy with the (IMO pithier) version he is often credited with, so i quoted that onehttps://t.co/2kLY6EYl0K https://t.co/nVZKiune3W
2022-08-13 14:07:02 @kevin_nejad ANNs may very well one day have a big impact on neuroscience, but so far I’m not sure they have
2022-08-13 14:05:37 @kevin_nejad The last 20 years have seen an explosion of new technologies which have enabled new discoveries: Optogenetics, massively parallel optical and electro recording, targeted delivery to specific cell types, circuit tracing, etc. These have fundamentally changed the questions we ask
2022-08-13 02:00:36 "Progress in science depends on new techniques, new discoveries and new ideas, probably in that order" --- Sydney Brenner (~1980) https://t.co/pMcgUaRDcH
2022-08-12 17:29:53 @matias_kaplan Wow. What paper is that from?
2022-08-12 13:32:13 @WiringTheBrain Maybe I missed the memo but isn't this all the result of overloading words like "explain" and "predict"? You don't "need" emergent thermodynamic explanations since detailed particle motions in principle can be predicted without them, but we find them a useful kind of explanation
2022-08-11 22:04:01 @TheAngelo2258 The near-minimum wage job I worked 10-20 hrs/week at was actually a significant fraction of total expenses back then. Not really today. It did not build character. Just made me tired and probably cut into my grades a bit.
2022-08-11 21:36:51 @joshdubnau If Mensch as a gender-neutral term for "human being" was good enough for the great Yiddish scholar Martin Luther, it's good enough for me. https://t.co/A3GirPocVI
2022-08-10 15:55:27 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano OK, I'm gonna add to my to do list to go to the local primatologist bar and pick a fight. But in the meantime the crux of my argument doesn't depend on relative ranking of primate intel. It merely observes that primates have not been particularly successful for most of our evo hist
2022-08-10 15:42:27 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano the reason i bring this up is in response to @GaryMarcus's suggestion that evolving human intelligence is hard. My counter is that it's not hard
2022-08-10 15:39:48 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano But stepping back--my argument is: to the extent primates in general, and hominids in particular, were smarter than other species, the payoff didnt became obvious until recentlyHumans &
2022-08-10 15:31:25 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano I'm not convinced group size alone quantifies social intel. Intuitively, seems like social intel is related to the complexity of the model you have of each individual you interact with. Armies and other hierarchies scale well by limiting the complexity of the requisite models
2022-08-10 13:09:16 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano Although as a rule of thumb group size might be a reasonable proxy for social intelligence, I’m not sure that it can be applied effectively across species. By that measure ants, bees, naked mole rats, pelican flocks might all be expected to be of superior social intelligence
2022-08-10 03:54:46 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano but i'm intrigued by the claim that baboons have similar linguistic abilities comparable to great apes. Koko famously had a vocabulary of >
2022-08-10 03:49:34 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano i'm not really sure how to define intelligence much less social intel rigorously, but most people put humans at the top of the intel scale so i figured it was safe to put our closest relative (chimps) higher than macaques.
2022-08-09 14:05:06 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano not all selection benefits the species. Eg sexual selection of colorful plumage in birds. I'm not suggesting that intelligence is merely like peacock feathers--its role is much more complex--but selection need not always be a net plus for the species
2022-08-09 13:45:18 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano I buy the theory that the main driver of intelligence in the primate lineage (and eg elephants) has been social competition and cooperation. so if you buy that chimps are smarter than baboons (albeit measuring intel is ill-defined), then prob yes.
2022-08-09 13:42:47 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano and if you believe (as many do) that Neanderthal had language, then prob so did our common ancestor almost >
2022-08-09 13:40:16 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano Modern humans arose >
2022-08-09 13:36:23 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano sorry, i should have been clear. Of course social intelligence is absolutely central to human dominance. But i was arguing that the added intelligence btw eg monkey and chimp was driven by social competition, and yet chimps dont seem to be "more successful"
2022-08-09 03:40:09 @kendmil @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae as someone who grew up on the learning/LTP/hippocampal slice side, i'm curious why you (pl) thought devel was a good model of learning? I do understand being interested in devel for its own sake, but isn't learning a good model of learning, and also easier to study?
2022-08-09 03:02:57 @kendmil @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae "a model system for activity-dependent synaptic plasticity and hence of learning." This whole thread was about whether there was a distinction (and if so, what) btw development and learning, so i question the use of "hence" here...
2022-08-09 01:12:44 @neuralreckoning @tfburns Go for it! Looking forward to reading it!
2022-08-09 01:09:59 @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae In the 80s ML hadn't clearly differentiated from CN. The same people. Eg what was Hopfield? Kohonen? My go-to meeting as a grad student was NIPS. I started NIC (which evolved into Cosyne) in 96 bcs by then NIPS was pure ML. But in the early 90s it still embraced CN
2022-08-09 01:01:39 @neuralreckoning @tfburns Potentially you could negotiate OA for all digital versions of your masterpiece with a publisher. At least some academic editors are just people who want to help disseminate knowledge and publish physical books
2022-08-09 00:38:39 @neuralreckoning @tfburns What format would be better ?
2022-08-08 22:40:44 @GaryMarcus @martingoodson @ylecun @tdietterich @miguelisolano Sapiens, Neanderthal, denisovan, Hobbit, and an archaic population inferred from genomic analysis
2022-08-08 22:38:46 @GaryMarcus @martingoodson @ylecun @tdietterich @miguelisolano “Who we are and how we got here”by David reichhttps://t.co/Ckwo3oP4wg
2022-07-24 18:46:35 RT @KarlHerrup: Indulge me in a long thread with thoughts on the Piller bombshell in Science about the fraud surrounding the Lesné Aß*56 da…
2022-07-15 14:00:15 @NicoleCRust @CarlosEAlvare17 @WiringTheBrain @statsepi I think you'd find widespread agreement that Big Question advances might accelerate progress on circuit/psychi dz like schiz or depression. Perhaps less agreement on brain tumor, or eg degen dz like Alz, or even stroke. So maybe a continuum?
2022-07-15 13:26:34 @NicoleCRust @CarlosEAlvare17 @WiringTheBrain @statsepi Are you more concerned with slow progress in treating neuropsychiatric disease or "understanding" how the brain works (the Big questions)? Or both? Or you think they're correlated ?
2022-07-12 20:11:25 @joshdubnau Ability to count to 5 optional?
2022-07-12 17:19:10 @TechRonic9876 impressive! but i think this osprey catching a fish still wins https://t.co/dKJJ5dZaiD
2022-07-12 15:47:54 @djintwt Right, many previous advances in AI were inspired by neuro. And we know many things in Neuro that could be useful in AI, but we don’t yet know how to port them. As soon as we do they are no longer Neuro, they are AI
2022-07-12 15:24:53 @djintwt CNNs?
2022-07-12 12:53:13 @HulsmanZacchary yes, the trick is to understand how the brain computes sufficiently well so we can abstract the key principles. Not easy! Simply copying brain circuits won't work
2022-07-12 12:28:49 @tibbydudeza Hmm. Neuro inspiration for AI has worked out pretty well so far:-->
2022-07-12 12:25:18 @HulsmanZacchary indeed! planes are much better than birds at taking heavy loads long distances very fast. Just as computers are much better than brains at performing many operations very fast with huge datasets. But if we wanted planes to do this we'd study birds, and if we want computers to do AI...
2022-07-11 16:38:47 @kw_cooper @GaneshNatesh so cool!
2022-07-11 14:48:36 @GaneshNatesh i think we would find it really difficult to match hummingbirds and eagles in their aerial agility
2022-07-11 14:46:25 @IntuitMachine @GaneshNatesh Although translation was C3P0's specialty, i think he was a general purpose android who could work on the farm and presumably do the dishes(Unlike a lot of famous humanoid AIs (like Terminator), he was not built for combat, which is why i chose him as an example)
2022-07-11 14:34:05 @GaneshNatesh i think what many people ultimately want is C3P0--an AI that can do anything a human can do (ideally better)
2022-07-11 13:15:00 @giorgiogilestro well, no, I actually think a good path to human-level intelligence would be to first match that of "simpler" animals like mice.but that's a separate discussion and I didnt want to try to pack too much into a single tweet.
2022-07-11 12:44:47 @Dr_Cuspy von Neumann *thought* he was taking inspiration from the brain when he laid out the architecture for the modern computer I guess it's a judgement call whether he retained the right aspects. But overall the whole computer thing has been working out pretty well so far. https://t.co/kUdxrdJgpY
2022-07-11 11:40:32 Also are these really eagles? or ospreys ? Or something else ?
2022-07-11 10:59:28 A common critique of neuroAI is "sure, birds inspired planes. But modern engineers don't design planes based on birds. So why study brains to achieve human-level intelligence?"But if our goal were to achieve "bird-level flight," mightn't we want to study how these eagles fly? https://t.co/FsynAlSN3W
2022-07-10 16:21:16 @nomad421 @tylerneylon @arjunrajlab Well the basic "code" is pretty universal across life / eukaryotes. It is almost true that one could put human DNA into yeast and try to coax it into producing an embryo
2022-07-10 15:47:18 @nomad421 @tylerneylon @arjunrajlab I guess i dont understand what an "inert" parameter would be. In the case of an ANN, there is a small amount of code that is needed to interpret the weights and generate input/outputs. In a cell, there is machinery (encoded in the DNA) like polymerases etc to boot up the cell.
2022-07-10 15:37:18 most of my biology classes failed to present the questions to which what we learned was the answer. https://t.co/zeH7Zqbs0V https://t.co/MCNLZhGzoF
2022-07-10 15:26:26 @nomad421 @tylerneylon @arjunrajlab conceptually i'm not sure I see the difference btw a 700MB program and 700MB of params. Eg I could write a C program and embed 700MB of params
2022-07-10 00:56:44 Minsky famously bashed neural networks in his 1969 Perceptrons. I did not realize he was already hating on them in 1961. (this paper also contains one of the earliest uses of "reinforcement learning," which according to Google n-grams first appears in 1959) https://t.co/wpSVg0Ywx5
2022-07-10 00:41:45 @santoroAI @tyrell_turing @kaznatcheev @andpru Comparing size of genome vs ANN: Size (in bits) of genome = N * 2 bits/bp, where N = # of bp in genome (bp = basepair). Size (in bits) of ANN ≈ N * 32 bits/weight, where N = # of weights in ANN. Genome encodes its own decoder; ANN "decoded" by a (small) program, so add that on
2022-07-09 11:47:24 @santoroAI @tyrell_turing @kaznatcheev @andpru The genome size of melon (12 chromosome pairs) is estimated to be 454 Mbp, and cucumber (7 chromosome pairs) has a genome size of 367 Mbp. (Each bp is 2 bits, bcs there are 4 nucleotides) https://t.co/VaiRKzAnbD
2022-07-09 11:44:24 @santoroAI @tyrell_turing @kaznatcheev @andpru Which part do you consider vague? Do you not think it is possible to compare the size (in bits) of eg Bert and gpt3? Or the size (in bits) of a worm genome like c elegans to a human genome? Or that you can't compare genome size to stuff on a computer for some reason?
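The genome-vs-ANN comparison in the thread above can be sketched in a few lines. A minimal back-of-envelope, assuming the thread's bookkeeping (2 bits per base pair, 32 bits per float weight); the human-genome and GPT-3 sizes plugged in are illustrative round numbers, not figures from the tweets:

```python
# Rough information content (in bits) of a genome vs an ANN, per the thread's rule:
# genome bits = base pairs * 2 (4 nucleotides), ANN bits = weights * 32 (float32).

def genome_bits(base_pairs: int) -> int:
    """Each base pair is one of 4 nucleotides -> 2 bits."""
    return base_pairs * 2

def ann_bits(weights: int, bits_per_weight: int = 32) -> int:
    """Each weight stored as a 32-bit float by default."""
    return weights * bits_per_weight

# Illustrative round numbers (assumptions, not from the thread):
human_genome = genome_bits(3_100_000_000)   # ~3.1 Gbp
gpt3 = ann_bits(175_000_000_000)            # ~175B parameters

print(f"human genome ≈ {human_genome / 8e9:.1f} GB")
print(f"GPT-3 weights ≈ {gpt3 / 8e9:.0f} GB")
```

As the thread notes, the genome encodes its own decoder while an ANN needs a small external program to interpret the weights, so the ANN total would be slightly larger.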
2022-07-08 20:11:35 excited to share @anqi_z PhD work, now on biorxiv! https://t.co/lEYr5luZMD
2022-07-08 16:20:42 @IntuitMachine @nomad421 @tylerneylon @arjunrajlab Right but the fact that most animals function pretty well at birth indicates that the genome can specify a lot of structure.
2022-07-08 16:02:18 @nomad421 @arjunrajlab @tylerneylon I think Moravec's paradox is still relevanthttps://t.co/S8WAc7xzya https://t.co/zGdzDCKBK9
2022-07-08 16:00:03 @nomad421 @arjunrajlab @tylerneylon yes, but part of my argument is that whatever is special about humans (mostly language-related) is a small step from our pre-verbal ancestors: 1 M yrs ago. So if we can achieve mouse-level "intelligence", we're almost there.
2022-07-08 15:55:33 @arjunrajlab @nomad421 @tylerneylon if you mean that they are "externally specified" by learning--i argue that the vast majority of what most animals can do is specified innately, as demonstrated by the fact that most animals function pretty well at birth. humans may be a bit of an outlier https://t.co/9i0Nnpnrs6
2022-07-08 15:35:21 @nomad421 @tylerneylon @arjunrajlab i'm not really sure how a 700mb program differs from 700mb of params. I view these as artificial implementational distinctions. eg what if my program is "treat the 700mb param vector as a program"?
2022-07-08 14:49:47 @nomad421 @tylerneylon @arjunrajlab Right, the comparison isn’t perfect But the size of the genome, which specifies the developmental program for building a brain, places an upper bound on the complexity of the specification of the brain’s wiring diagram
2022-07-08 14:41:41 @AdrianoAguzzi @tylerneylon Except that the parts self-assemble. Almost every difference between our brain and that of c elegans is contained in our genome, which specifies a developmental program that causes the brain to wire up properly. https://t.co/9i0Nnpnrs6
2022-07-07 18:19:19 @PresNCM Paperpile. After using EndNote since starting as faculty I finally got fed up. The transition to Paperpile took about 20 minutes.
2022-07-07 02:41:17 Discussing with co-author: "neuroscience" or "neurobiology"? To me the meanings are the same, but NS seemed more modern than NB. And indeed, it looks like NB was dominant until 1990, at which point NS crushes. Bonus points to whoever finds the first use in 1869! @NXBhattasali https://t.co/FTILm8YCDy
2022-07-06 12:24:35 @L_andreae Looking forward to reading it!
2022-07-04 22:11:55 @MillerLabMIT Shouldn't it be "Spontaneous Spiking Is *Correlated with* Broadband Fluctuations" not "Governed by"??
2022-07-03 13:37:04 @GCREllisDavies Physiology papers from my lab still sometimes have only 2 (or 3) authors... But mobio papers are more likely a team effort, often involving techs and multiple students/postdocs. https://t.co/doCNxrMcCX https://t.co/zdfIpCGnEG https://t.co/ql0WhGjfCB
2022-07-03 02:59:09 Eve Marder's ruminations and reflections on changes in how we do science. https://t.co/omvB0I2nWr
2022-06-19 19:20:59 @knutson_brain @Antonino__Greco @PessoaBrain I just finished this fascinating biography of Cajal. Cajal may have been brilliant, but he does not come off as a nice guy at all. Golgi on the other hand mostly quite humble https://t.co/mZIzOVCAWl
2022-06-11 17:51:22 @MarthaBagnall No public transport here and roads are not bike friendly. That’s why this is a bit of a surprise. Growing up in Berkeley, bikes and public transport gave me autonomy by age 10
2022-06-11 16:55:37 My rating is pretty low (I sometimes try to engage in unwanted conversation), but their passenger ratings are pretty low too so we seem to be stuck with each other for now.
2022-06-11 16:55:36 As the dad of suburban teenagers, I did not anticipate the extent to which my job description would overlap so heavily with those of an Uber driver
2022-06-09 20:45:08 @daeyeol_lee I did my first neuro grad rotation with him, which set me on the road to becoming a computational neuroscientist. Sad to hear of his passing.
2022-06-09 20:25:58 @cdk007 I’m not an economist but it seems to me that if demand exceeds supply then out of stock rate should approach 100% (at least in a perfect market): the moment new stock becomes available it should be purchased so 0% left (100% out of stock). Or price goes up reducing demand. No?
2022-06-09 17:59:25 @zga_aaa @GunnarBlohm i think interactive figures would be cool, but to me 98% of writing a paper is figuring out how to distill and communicate a small number of simple ideas. as a reader, i'm typically more interested in hearing about what someone learned than what they did
2022-06-09 15:46:36 @GunnarBlohm how is the traditional paper format outdated? leaving aside issues about barriers to dissemination due to refereeing and profit motive--what is wrong with the paper format itself? What would be better?
2022-06-08 20:40:38 @hein_prizes @KiaNobre Congratulations Kia!
2022-06-06 02:51:58 @DavidBahry @GaryMarcus Kind of. If you are searching an N-dim space and take steps in random directions, keeping steps that improve fitness, the net effect can look like a gradient across the population. But cost is O(N) evals/step so a lot less efficient than if each step followed the gradient
2022-06-05 22:56:36 @autometalogole2 @GaryMarcus Back of the envelope: 10^30-10^40 individual animals have lived since the dawn of multicellular life 500 Myrs agoBut I have no idea how many flops it would take to simulate 1 yr of life adequately.
2022-06-05 22:53:32 @sjogren_rickard @GaryMarcus It's true that a random sequence of amino acids is unlikely to generate a stable tertiary structure. But a small perturbation of an existing protein has a pretty good chance of being a reasonable protein. So "brute force" = "random walk" in this context
2022-06-05 19:15:27 @GaryMarcus Evolution uses brute force. It doesn't even have access to the gradient. But it benefits from the >
2022-06-03 17:36:58 @tyrell_turing sure, it's neurons (or transistors) all the way down. So linguistic data are ultimately like other sensory data, except that indeed some stuff might be missing. Arguably, though, there might be something fundamental about interacting with the world in a way that changes its state
2022-06-03 15:42:42 @IntuitMachine @tyrell_turing absolutely agree that it's a gap of unknown size, but that recent progress suggests that it's a lot smaller than many (including me!) would have predicted. LLMs are amazing.
2022-06-03 15:37:17 @tyrell_turing Similarly i think it's hard to infer properties of the physical world--eg which objects move or are squishy--from even very large sets of static images. Eg labeling a horse on a beach as a camel might be less likely if you really "understand" that horses move relative to background
2022-06-03 15:27:47 @tyrell_turing "Gaps" btw avail info from language vs an embodied agent could be pretty hard to fill Like trying to infer the structure of data in some very high dim space (the real world) by looking only at a much lower, but still large, dim projection (the world accessible by lang).
2022-06-03 15:11:48 @KordingLab @tyrell_turing i completely agree. In fact, the fact that most of the information in large sparsely connected brains is in the binary connection patterns is in part why i think structural connectomes (even w/o weights) are useful. https://t.co/MM5iJs3RYc
2022-06-02 20:43:28 @rita_strack there was a great podcast on ML for "nowcasting" (short-term weather predictions) https://t.co/VIiNsShZ2H
2022-06-02 20:25:02 @GaryMarcus @IntuitMachine @mraginsky but more importantly, Chomsky's impact on linguistics derives primarily from work he published before i was an undergrad. AFAIK, his published work on linguistics since the 80s has not been terribly influential. (Correct me if i'm wrong on that)
2022-06-02 20:23:30 @GaryMarcus @IntuitMachine @mraginsky Sure! I stand by all my published work, which admittedly only goes back 32 years. https://t.co/G7fx7GWBhe
2022-06-02 20:20:49 @GaryMarcus @IntuitMachine @mraginsky no i really dont think i am strawmanning the linguistics taught to me in the 80s by former Chomsky acolytes who dominated the UC Berkeley Linguistics dept. That sterile view is literally why i quit linguistics and became a neuroscientist
2022-06-02 20:18:39 @GaryMarcus @IntuitMachine @mraginsky so although i am sympathetic to the argument that LLMs dont provide insight into how humans generate/process/use language, i'm not sure why Chomsky's picture is there. That just doesnt seem a Chomsky-ian objection, at least not c. 1957 or 1965 or even 1984 when i was an ugrad
2022-06-02 20:15:32 @GaryMarcus @IntuitMachine @mraginsky At least in the dark ages when i studied linguistics, the idea that linguists should be studying the brain (much less the mind) was anathema. In fact, the prohibition against thinking about the mind was why i quit linguistics and moved to neuroscience. https://t.co/hbcIGCaNSO
2022-06-02 20:12:15 @GaryMarcus @IntuitMachine @mraginsky I am confused. Chomsky defined language as the set of all grammatical sentences. That was the research program. He proposed a particular set of rules--generative grammar. Turns out a different set (transformers, LLM) apparently do a pretty good job. So he should be happy
2022-06-02 13:57:58 @GaryMarcus So in that sense i think the incredible success of LLMs can be viewed as a (surprising to me!) near vindication of Chomsky's focus on syntax, and even more so of the late great Tali Tishby's 1999 talk "Can Shannon learn semantics?" Apparently yes.
2022-06-02 13:54:39 @GaryMarcus Lang as "mapping from syntax to semantics" was certainly not core to Chomsky's views AFAIR. His 1957 formulation had no role for semantics; his 1965 had some notion of "deep structure" but i dont think that required the kind of "understanding" we feel is missing in LLMs
2022-05-31 22:48:40 ok i was never a believer in the Skynet scenario but if DALLE-2 has invented a secret language in which eg "Apoploe vesrreaitais" means birds, then who knows. So let's not hand over the nuclear codes to DALLE or we may all be crushed like Contarra ccetnxniams luryca tanniounons https://t.co/gwReh18idg
2022-05-29 20:05:06 @JMGrohNeuro @anne_churchland LGA is slightly closer but until recently was notoriously the worst major airport in the US. Supposedly getting better but haven’t experienced it recently. So I always try to go through JFK if possible
2022-05-28 17:16:11 @neuralreckoning @ylecun I think putting it in a format so it looks like a journal TOC would make it easier for people to ease into this new mech. Some REs might include commentaries, N&
2022-05-28 17:14:19 @neuralreckoning @ylecun exactly--multiple "reviewing entities" (REs) will link to the same articles. So you can launch "Dan &
2022-05-28 16:36:52 @neuralreckoning I still believe that we need a mech for tagging interesting papers, ie postpub review not just for truth but also for interest. I just think it should be decoupled from gatekeeping. I am a fan of @ylecun's "reviewing entities"--like postpub journalshttps://t.co/Hzo93EXb35
2022-05-28 14:52:57 @neuralreckoning Given that your objection is to reviewing criteria based on anything other than correctness, why not submit all your papers to Plos One? And review for them? Or when you say you have to publish in "legacy journals" do you mean "high prestige journals"? https://t.co/tHN66DUJA2
2022-05-27 23:52:45 @neuralreckoning @tyrell_turing Nice analogy. Well played !
2022-05-27 22:10:17 @tyrell_turing @neuralreckoning Even though i'm not Canadian, I agree that gradual is usually better. I think world history shows that revolutions* are rarely successful and almost always painful. *except revolutions that involve throwing out foreign invaders.
2022-05-27 00:25:28 @tyrell_turing @ylecun which podcast?
2022-05-24 21:35:52 @StevenBratman @WiringTheBrain @tyrell_turing @ylecun yes, not just competition but also cooperation. Our success as a species is probably largely due to language, which presumably evolved to enable more sophisticated forms of cooperation. ("You chase the antelope toward me, i'll be waiting here in the tree")
2022-05-24 21:17:41 @tyrell_turing @ylecun Acquisition of social knowledge clearly requires a long training period (to get to know others around you) and a lot of behavioral flexibility. If your social strategy is too inflexible, it is easily defeated. You have to be able to adapt to those around you
2022-05-24 21:13:43 @tyrell_turing @ylecun Yes, social intelligence has been the key driver over the last 5-10Myrs (or more) of primate cog evolution. We have been in an arms race with conspecifics to do better at modeling others' behavior. Who will attack? Who will cooperate? Whom can I trust?
2022-05-24 18:21:23 @CSHL @JBorniger @ArkarupBanerjee Congratulations!
2022-05-24 15:57:14 @JulioMTNeuro In many invertebrate circuits intrinsic firing patterns are highly regulated, modulated and absolutely central to function. Famously the lobster stomatogastric ganglion. https://t.co/rpdm7Fcpi9
2022-05-24 14:55:47 @Jobamey @ylecun Social structure.
2022-05-24 13:22:16 @ylecun yes, adaptability--many things change too fast to encode in DNA. Eg place cells are innate, but their content (what is where) varies. Language is innate, but you need to learn the words of your language. And you need to learn social structure (who is who in your troop)
2022-05-24 11:56:36 @SunnyBe4r @anne_lauscher @HenkPoley @_florianmai @seb_ruder yep that's pretty much what we do herehttps://t.co/Vx624XC6IR
2022-05-23 21:45:39 @FelixHill84 @ylecun I'm not sure human success arises from being particularly "clever" but rather mostly from language, esp our ability to accumulate knowledge over generations. I think language is a specialized skill unique to humans, in the same way echolocation is specialized in bats. Not so hard
2022-05-23 20:31:47 @ylecun True, but humans are outliers in the amount of data they require. Most mammals require much less. Eg if you are satisfied with feline-level perception, it's more like 3 months * ... * 10 fps = 38.9 million. Fish and bugs are even faster (~ 0) https://t.co/xGRUFiZZyG
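The elided middle terms of that back-of-envelope can be filled in one plausible way. The per-day numbers here (90 days, 12 waking hours/day) are my assumptions, not stated in the tweet:

```python
# One way to reach the tweet's ~38.9 million frames for "3 months" of
# feline-level visual experience at 10 fps. Waking-time terms are assumed.
days = 90                   # "3 months"
waking_hours_per_day = 12   # assumption
seconds_per_hour = 3600
fps = 10                    # "10 fps"

frames = days * waking_hours_per_day * seconds_per_hour * fps
print(frames)  # 38880000, i.e. ~38.9 million
```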
2022-05-23 20:30:59 @ylecun https://t.co/xGRUFiZZyG
2022-05-23 20:18:40 @ntraft @RobFlynnHere @jjsakon @KishavanBhola @ThomasMiconi @ylecun the DNA in the genome encodes the proteins. More generally, it encodes the cellular expression of these proteins that enable the brain to wire up properly. The code for reading out proteins is the same for all life on earth
2022-05-22 19:49:12 @KordingLab @NeuroPolarbear In my experience, if a paper posted to biorxiv gets X units of engagement/interest, that same paper gets a 5-10X bump in interest if it appears in a high profile journal 1-2 yrs later. I wish that weren't the case but it still appears to be
2022-05-20 16:09:27 It's worth adding that Chuck really deserved to share in that Nobel prize, not bcs of patch clamp recording, but bcs his much more elegant (though ultimately less useful) "noise analysis" deduced single channel conductances years earlier. But he never had any regrets.
2022-05-20 16:05:32 @kaznatcheev i agree it shouldnt be controversial not to put your name on papers you didnt contribute to. But somehow "paying the bills" is thought to qualify as a contribution.
2022-05-20 15:54:18 My pd PI Chuck Stevens declined to be an author on the Nobel Prize winning first paper by Neher and Sakmann--work done in his lab at Yale--bcs he "didnt contribute much". (He also didnt put his name on some work i did independently in his lab. No Nobel for that work though.) https://t.co/MdDk8sqykv https://t.co/vHichlew60
2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…
2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius
2022-10-28 12:43:25 @WiringTheBrain @bdanubius
2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3
2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly. “The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R
2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &
2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it people would compete for providing *valuable/useful* reviews. And allowing any interested reader to self-select as a (post pub) reviewer
2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous
2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.
2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.
2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)
2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)
2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?
2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs...https://t.co/LG0CHdRAsH
2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9
2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution, and that's what motivated them to do these expts, and so they are sharing their actual motivation? You may question whether it IS relevant but: https://t.co/vFHS5k2OAh
2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j
2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone-->
2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…
2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s
2022-10-24 19:16:18 I just contributed to @actblue. But i am reluctant to contribute again. Why, you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!! *** Plz provide opt out option AT SOURCE if you want continued engagement ***
2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"
2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both
2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. Yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?
2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!
2022-10-24 02:42:32 @ChurchillMic luckily we have just the analogy for you in the white paper. Briefly: The Wright brothers werent trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fish. AGI is a misnomer. What people want is AHI. ("general" -->
2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper which was:"Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers
2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit. Just my opinion
2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result: "Hey Feynman, Did you ever read a paper in Acta Mathematica that directly changed the way you did something??" If no, then no need for physicists to learn any math, eh?
2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI"@gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj
2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuro. Big ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so good. So q is if all future big advances are endogenous or if neuro still has more to offer
2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problemhttps://t.co/zpkVADGSlK https://t.co/0rGLQim1hm
2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. What we really mean is A-Human-Intelligence. Bees are incredible but if we want to mimic HUMAN intel, mice are closer. https://t.co/A61XAQC4z5
2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design
2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structures. Finches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fast. https://t.co/AgCLTUuHJ3
2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. un-nuanced provocation is a good way to build engagement. I should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens
2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <
2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen to believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds
2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab Lots humans can do that animals can't (and vice versa). But most of the interesting ones are IMO coupled to language, which likely evolved 100K-1M yrs ago--a blink. Thus a few tweaks enabled a large change in ability. Like qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2
2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap in to all the things we've learned in neuroscience in the last 3 decades
2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people However when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible poolWe will need a system for assessing reviewer quality
2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor.His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY
2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI
2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer! It turns out that ideas take years or decades to percolate from basic science to the clinic. So understanding the influences will always seem like archeology
2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields. (BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/ minimal influence from neuro)
2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?
2022-10-22 14:40:50 @garrface hmm. If memory serves, S&
2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while
2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet...if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interesting. But he also raises an interesting question about future neuro-->
2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless
2022-10-22 14:11:12 @KordingLab No. Neuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need", it's a great way to find hints as to what path to follow. https://t.co/Q8QczhC3zu
2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering
2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yF FIN +1
2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4
2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro
2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG
2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too
2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computer. My belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key
2022-10-22 00:35:30 @nidhi_s91 love to hear about it. To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting...neural networks and comp neuro used to be very tightly integrated
2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields
2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls" https://t.co/INYhrLWmDD
2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute
2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another
2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview of neuroscience, no? https://t.co/AZCQZ38NRF
2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).
2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"
2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode is a separate question. If you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform
2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine
2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best.Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin
2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropy. There are other common uses of that word, and this might be at the root of some of the confusion here
2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?
2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words to use to describe how it happens. There was a recent discussion about whether it's correct to call cells "machines" which imo was also just semantics.
2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)
2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?
2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedal. Would you not say that his genes "determined" his (quadrupedal) phenotype?
2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration
2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent
2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?
2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB
2022-10-20 14:21:01 White paper — Rallying cry for NeuroAI to work toward the Embodied Turing Test! Let’s overcome Moravec’s paradox: Tasks “uniquely” human like chess and even language/reasoning are much easier for machines than “easy” interaction with the world which all animals perform. https://t.co/ehKRWl7rgJ
2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant, synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher level units like areas etc I agree is a bit less clear.
2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...
2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C
2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.
2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....
2022-10-29 14:22:11 @IntuitMachine for better or worse AFAIK there is no technical use for that word so we can abuse it at will
2022-10-29 14:18:39 @IntuitMachine perhaps a worthwhile campaign to have started in 1950 to nip possible misunderstandings in the bud, but i think that ship has sailed #mixedmetaphors. At this point i think it's best to just define words clearly and move on
2022-10-29 14:11:49 @robwilliamsiii @MillerLabMIT https://t.co/SlhsxSrP53
2022-10-29 14:10:30 @IntuitMachine That said, the word "information" appears prominently on page 2. And within just a few years everyone was calling it "information theory," including eg EEs like Robert Fano (1950)https://t.co/qpp07m1Su9 https://t.co/BHpLY9TTKs
2022-10-29 14:05:38 @IntuitMachine I only use information in the formal Shannon sense. A useful concept but can be misused. Always a risk when a popular word acquires a technical meaning, like "significance" in stats. Or even "temperature"... 40F skiing in Utah *feels* a lot less cold than on a foggy day in SF!
2022-10-29 13:03:26 Conclusions from latest MAPseq paper https://t.co/vsX9T7m4K6
2022-10-29 13:02:32 RT @dinanthos: This organization enables parallel computations and further cross-referencing, since olfactory information reaches a given t…
2022-10-29 13:02:29 RT @dinanthos: We propose that olfactory information leaving the bulb is relayed into parallel processing streams (perception, valence and…
2022-10-29 12:38:48 @tdietterich @ylecun I imagine it is largely preprogrammed, just as human empathy is largely preprogrammed
2022-10-29 12:24:29 RT @kevincollier: This is as good as everybody says, really feels like the single most essential reading on today's big news.https://t.co/…
2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…
2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius
2022-10-28 12:43:25 @WiringTheBrain @bdanubius
2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3
2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly“The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R
2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &
2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it people would compete for providing *valuable/useful* reviews. And allowing any interested reader to self-select as a (post pub) reviewer
2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous
2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.
2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.
2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)
2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)
2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?
2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs...https://t.co/LG0CHdRAsH
2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9
2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution and that's what motivated them to do these expts, and so they are sharing their actual motivation? You may question whether it IS relevant but: https://t.co/vFHS5k2OAh
2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j
2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone-->
2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…
2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s
2022-10-24 19:16:18 I just contributed to @actblue But i am reluctant to contribute again. Why, you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!! *** Plz provide opt out option AT SOURCE if you want continued engagement ***
2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"
2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both
2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?
2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!
2022-10-24 02:42:32 @ChurchillMic luckily we have just the analogy for you in the white paper. Briefly: The Wright brothers werent trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fish. AGI is a misnomer. What people want is AHI. ("general" -->
2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper which was:"Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers
2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit. Just my opinion
2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result: "Hey Feynman, Did you ever read a paper in Acta Mathematica that directly changed the way you did something??" If no, then no need for physicists to learn any math, eh?
2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI" @gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj
2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuro. Big ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so good. So q is if all future big advances are endogenous or if neuro still has more to offer
2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problem https://t.co/zpkVADGSlK https://t.co/0rGLQim1hm
2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. what we really mean is A-Human-Intelligence. Bees are incredible but if we want to mimic HUMAN intel mice are closer https://t.co/A61XAQC4z5
2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design
2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structures. Finches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fast https://t.co/AgCLTUuHJ3
2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &
2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. un-nuanced provocation is a good way to build engagement. I should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens
2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <
2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen to believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds
2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab Lots humans can do animals can't (and vice versa). But most of the interesting ones are IMO coupled to language which likely evolved 100K-1M yrs ago--a blink. Thus a few tweaks enabled a large change in ability. Like qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2
2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap into all the things we've learned in neuroscience in the last 3 decades
2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people. However when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible pool. We will need a system for assessing reviewer quality
2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &
2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor. His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY
2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI
2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer! It turns out that ideas take years or decades to percolate from basic science to the clinic. So understanding the influences will always seem like archeology
2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields. (BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/minimal influence from neuro)
2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?
2022-10-22 14:40:50 @garrface hmm. If memory serves, S&
2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while
2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet... if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interesting. But he also raises an interesting question about future neuro-->
2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless
2022-10-22 14:11:12 @KordingLab No. Neuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need", it's a great way to find hints as to what path to follow https://t.co/Q8QczhC3zu
2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering
2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yF FIN+1
2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4
2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro
2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG
2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too
2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computer. My belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key
2022-10-22 00:35:30 @nidhi_s91 love to hear about it. To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting... neural networks and comp neuro used to be very tightly integrated
2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields
2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls"https://t.co/INYhrLWmDD
2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute
2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another
2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview of neuroscience, no? https://t.co/AZCQZ38NRF
2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).
2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"
2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode is a separate question. If you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform
2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine
2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best. Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin
2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropy. There are other common uses of that word, and this might be at the root of some of the confusion here
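The technical (Shannon) sense invoked in this thread has a one-line formalization; a minimal sketch (the function name and coin-flip strings are illustrative, not from the thread):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fair coin carries 1 bit per flip; a biased source carries less.
print(shannon_entropy("HTHTHTHT"))  # 1.0
print(shannon_entropy("HHHHHHHT"))  # < 1.0
```

The point of the formal definition is exactly the one made above: information is a property of the source statistics, independent of whether any particular receiver can decode it.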
2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?
2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words to describe how it happens. There was a recent discussion about whether it's correct to call cells "machines" which imo was also just semantics.
2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)
2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?
2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedal. Would you not say that his genes "determined" his (quadrupedal) phenotype?
2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration
2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent
2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?
2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB
2022-10-20 14:21:01 White paper — Rallying cry for NeuroAI to work toward Embodied Turing Test! Let’s overcome Moravec’s paradox: Tasks “uniquely” human like chess and even language/reasoning are much easier for machines than “easy” interaction with the world which all animals perform. https://t.co/ehKRWl7rgJ
2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant, synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher level units like areas etc I agree is a bit less clear.
2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...
2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C
2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.
2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....
2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se
2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?
2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood
2022-11-17 19:23:57 @pchiusano yeah, I guess the question is what is the proposed use case? As I believe @ylecun tweeted, you don’t take your hands off the steering wheel, but it’s potentially helpful while you drive.
2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression --far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed book test
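The lossless/lossy contrast drawn here can be made concrete with a toy round-trip, with zlib standing in (hypothetically) for an idealized lossless store of web facts:

```python
import zlib

# A repetitive "corpus" of facts.
text = b"the cat sat on the mat " * 100

packed = zlib.compress(text)

# Lossless: every byte is exactly recoverable...
assert zlib.decompress(packed) == text

# ...but the achievable compression is bounded by the source's entropy.
print(len(text), len(packed))
```

An LLM, by contrast, keeps only a lossy summary of its training data, which is why exact facts come back "mostly reasonable but wrong."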
2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing a pretty good job of BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)
2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7
2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseqProjections of >
2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figs. Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons. J. J. Hopfield, PNAS 1984
2022-11-15 02:27:02 @philippschmitt Neural Networks and Physical Systems with Emergent Collective Computational Abilities. J. J. Hopfield, PNAS 1982
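The 1982 two-state model cited here is small enough to sketch; a minimal Hopfield net with Hebbian storage and sign-threshold recall (the stored patterns and network size are my own toy example, not from the paper):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for storing +/-1 patterns, zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Iterate the threshold update s <- sign(W @ s) until (hopefully) a stored attractor."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties toward +1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)

noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with the last bit flipped
print(recall(W, noisy))  # recovers the first stored pattern
```

The "emergent collective" point of the title is visible even at this scale: no single weight stores the pattern; recall is a fixed point of the whole network's dynamics.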
2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time... would more grant $$ cause us to have more ideas? Like: Child: "There is Mother's and Father's Day, but why not Children's Day?" Parent: "Every day is Children's Day"
2022-11-14 20:32:34 @philippschmitt Cool! But i'm surprised not to see a figure from Hopfield 1982...
2022-11-14 18:15:54 @NicoleCRust @PessoaBrain "Progress depends on the interplay of techniques, discoveries, and ideas, probably in that order of decreasing importance" -- Sydney Brenner. Clearly new techniques--shiny or not--aren't sufficient. But IMO they are necessary
2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? I enjoy playing with LLMs claiming I was born in the UK, but it would be nice if they could report confidence https://t.co/rLI2Sv7kck
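One crude stand-in for the confidence reporting asked about here is the model's own average token log-probability over a generated claim; a hypothetical sketch (the function and the sample log-probs are assumptions, not any real LLM API):

```python
import math

def sequence_confidence(token_logprobs):
    """Geometric-mean token probability of a generated claim.
    Low values flag statements the model itself found surprising."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical per-token log-probs: a confident claim vs. a shaky one.
confident = [-0.05, -0.1, -0.02]
shaky = [-1.2, -2.5, -0.9]

print(sequence_confidence(confident) > sequence_confidence(shaky))  # True
```

This is only a proxy for fluency, not veracity; a model can be fluently wrong, which is presumably why the tweet frames calibrated confidence as an open modification rather than a solved problem.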
2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!
2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no" to email cuz of spamming!
2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong
2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply. I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)
2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder
2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing the risk/benefit profile of new drugs. Is there a comparable well-defined standard for rolling out tech like LLMs? How do we compare risks and benefits rigorously?
2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9
2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no"to email cuz of spamming!
2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong
2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)
2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? While I enjoy playing with LLMs claiming I was born in the UK, it would be nice if they could report confidence https://t.co/rLI2Sv7kck
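A crude stopgap along these lines: derive a claim-level confidence from the model's own per-token probabilities. A minimal Python sketch -- the log-prob values below are hypothetical, though real APIs do expose per-token log-probs:

```python
import math

def sequence_confidence(token_logprobs):
    """Geometric mean of per-token probabilities: one crude proxy for
    how 'sure' a model was while generating a factual claim."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical log-probs for a claim the model generated confidently
# vs one it effectively guessed
confident = [-0.05, -0.10, -0.02]
guessed = [-1.2, -2.3, -0.9]

print(sequence_confidence(confident) > sequence_confidence(guessed))  # True
```

Calibrated veracity is of course harder than raw token confidence -- models can be fluently and confidently wrong -- but this is a number they could already report.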
2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!
2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se
2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?
2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood
2022-11-17 19:23:57 @pchiusano yeah, I guess the question is: what is the proposed use case? As I believe @ylecun tweeted, you don't take your hands off the steering wheel, but it's potentially helpful while you drive.
2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression--which compresses far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed-book test
2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing a pretty good job of BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)
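The lossy-vs-lossless distinction behind the closed-book analogy is easy to make concrete (toy text below, not real web data):

```python
import zlib

facts = b"Hopfield published his network model in PNAS in 1982. " * 40

# Lossless compression: smaller, and every byte comes back exactly
packed = zlib.compress(facts)
print(zlib.decompress(packed) == facts)  # True
print(len(packed) < len(facts))          # True

# Lossy "compression": drop every other word. Also smaller, but no
# decompressor can recover the dropped words -- they can only be
# guessed, which is roughly what BS-ing on a closed-book test is
lossy = b" ".join(facts.split()[::2])
print(len(lossy) < len(facts))           # True
```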
2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7
2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseq Projections of >
2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figs: "Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons", J. J. Hopfield, PNAS 1984
2022-11-15 02:27:02 @philippschmitt "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", J. J. Hopfield, PNAS 1982
2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time...would more grant $$ cause us to have more ideas? Like: Child: "There is Mother's and Father's Day, but why not Children's Day?" Parent: "Every day is Children's Day"
2022-11-22 02:50:13 @kohn_gregory @WiringTheBrain i guess this is converging to a semantic discussion about the precise meaning of the word "causal" in this context. AFAIK most traits (eye color, bipedality, etc) are determined by DNA not oocytic factors no?
2022-11-22 01:23:33 @kohn_gregory @WiringTheBrain well, we can modify @WiringTheBrain's question a bit and control for issues arising from incompatible oocytes. So if we put a chihuahua nucleus in a St Bernard oocyte we get basically a chihuahua, right? And certainly the next generation will be a perfect chihuahua, no? thoughts? https://t.co/WDliPyZQAD
2022-11-22 00:15:39 RT @HopfieldJohn: Francis 'Frank' Schmitt already had an amazing view in 1962 of where neuroscience needed to go if you were serious about…
2022-11-21 21:39:10 @KanakaRajanPhD @IcahnMountSinai @SinaiBrain great news--congratulations!
2022-11-21 15:59:15 @kohn_gregory @NaturalSkeptik @WiringTheBrain i'm still lost. i guess sometimes twitter isnt the ideal medium for exchanging scientific ideas
2022-11-21 15:31:14 @kohn_gregory @NaturalSkeptik @WiringTheBrain i dont understand what you are saying. Using your example, given m&
2022-11-21 15:28:15 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Sadly, i see no evidence that the effectiveness of (mis)information rests on it appearing to be from a reputable scientific source. The viral stuff is usually a trustworthy-looking talking head spouting nonsense. or a headline
2022-11-21 15:10:27 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Again: how do we assess possible impacts? Was email a mistake? the internet? And would your assessments have been the same in 1990? 2000? and 2010? (I'm old enough to remember when we thought the internet was democratizing. See eg "Spring, Arab")
2022-11-21 15:07:12 @kohn_gregory @NaturalSkeptik @WiringTheBrain similarly if i specify the genome but not whether the conserved factors (CF) are species matched, your uncertainty about the outcome will be smaller than if i specify the CF but not the genome. So the genome contains a lot more information about the final outcome
2022-11-21 15:03:00 @kohn_gregory @NaturalSkeptik @WiringTheBrain One can formulate the question of the relative importance of m &
2022-11-21 14:42:37 @kohn_gregory @NaturalSkeptik @WiringTheBrain in what sense do these conserved factors outside the genome contribute "as much"? Because my intuition is that if we could quantify their contribution it would be relatively tiny, but i confess that i'm not exactly sure how to quantify properly (though i have some ideas)
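One way to make "quantify their contribution" precise, in the spirit of the uncertainty argument in this thread: compare the mutual information each factor carries about the outcome. A toy Python sketch with made-up probabilities (not real genetics data), in which the outcome tracks the genome 90% of the time and ignores the conserved factors:

```python
import itertools
import math

def entropy(p):
    """Shannon entropy in bits of a distribution given as {value: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Joint distribution over (genome, conserved_factors, outcome), each binary.
# The outcome copies the genome with probability 0.9; the conserved
# factors are independent of it. All numbers are illustrative.
joint = {}
for g, c in itertools.product([0, 1], repeat=2):
    joint[(g, c, g)] = 0.25 * 0.9
    joint[(g, c, 1 - g)] = 0.25 * 0.1

def marginal(idx):
    p = {}
    for key, v in joint.items():
        p[key[idx]] = p.get(key[idx], 0.0) + v
    return p

def mutual_info(idx):
    """I(X; outcome) = H(X) + H(outcome) - H(X, outcome)."""
    pair = {}
    for key, v in joint.items():
        pair[(key[idx], key[2])] = pair.get((key[idx], key[2]), 0.0) + v
    return entropy(marginal(idx)) + entropy(marginal(2)) - entropy(pair)

print(mutual_info(0))  # I(genome; outcome)            ~0.531 bits
print(mutual_info(1))  # I(conserved factors; outcome) ~0 bits
```

Under these assumed numbers the genome carries essentially all the recoverable information about the outcome, which is the cleanest sense in which its contribution can be called larger.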
2022-11-21 14:36:35 @CriticalAI @AwokeKnowing @ASteckley @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian From a public policy POV, do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder
2022-11-28 23:50:30 RT @JustusKebschull: Postdoc opportunity: In collaboration with @yavinshaham and with NIH funding, we are recruiting a postdoc with backgro…
2022-11-28 21:42:27 @joshdubnau @Hoosierflyman yes, having 3 specific games is definitely more fun!
2022-11-28 21:25:01 @isEdgarGalvan @bleepbeepbzzz thx!
2022-11-28 16:40:47 @captgouda24 58% of adults would choose not to buy a particular model bcs it is less safe than other models. And this in a world where all cars are pretty safe (5x safer than 60 years ago) https://t.co/5LOEaWrpF1 https://t.co/bW6Wa53jPo
2022-11-28 16:36:22 @captgouda24 Consumers have clearly indicated that they are willing to pay for safety. Contrary to what Iacocca claimed, consumers DO care about safety, which is why safety is often prominently displayed in ads. Seatbelts/airbags/ABS/drunk driving laws are very popular
2022-11-28 16:32:51 @captgouda24 Not sure details in this case, but regulators are often in bed with the industries they regulate (promise of high paying jobs in private sector), so the regulations do not necessarily reflect the "public will" (hard to determine) but rather what is good for the industry
2022-11-28 16:27:18 @captgouda24 i agree it is sad that the American regulators are so beholden to the industries they regulate that they fail to protect consumer interests, meaning that fear of litigation becomes a major driving force.
2022-11-28 16:24:55 @captgouda24 what was the reputational damage to Ford? American auto makers had a reputation for making unreliable and unsafe cars, which is part of why Japanese makers surpassed them in the 1980s. Turns out drivers don't want to die in car crashes https://t.co/jvHVXjHB0o
2022-11-28 16:18:29 @captgouda24 of course not *all* safeguarding risk is worthwhile. "all" is a strawman. But i think Ford learned the hard way that society wanted a different tradeoff from the one they chose.
2022-11-28 16:17:09 @bleepbeepbzzz indeed! we are tweaking the compression algorithm to encourage formation of modules etc. Stay tuned!
2022-11-28 16:15:53 @captgouda24 from a financial POV Ford clearly made the wrong decision (massive punitive damages). In the 70s there was major pushback from auto manufacturers against seatbelts and airbags bcs they were too expensive. Turns out they were wrong
2022-11-28 15:02:10 @captgouda24 i think from a legal/ethical POV what mattered was Ford's "state of mind". The cars were legal but they thought they were dangerous and decided they weren't worth fixing. The massive $$ damages against them serve as a warning to future companies that that is a bad decision
2022-11-28 14:27:40 @captgouda24 "Ford had a decision to make. Its car was in compliance with industry standards of the time, so it was not breaking any laws. But its own research had proved the car was unsafe, and even deadly." https://t.co/f4QLNoLg12 https://t.co/zlF5Uof8yY
2022-11-28 14:14:49 @bleepbeepbzzz We have used this idea to develop a "genomic bottleneck algorithm" in which the compression of the circuitry into a "genome" acts as a regularizer. https://t.co/Vx624XCEyp https://t.co/7HguPojgSt
2022-11-28 14:12:00 @bleepbeepbzzz Indeed, we have argued that: In animals, there are two nested optimization processes: an outer “evolution” loop acting on a generational timescale, and an inner “learning” loop, which acts on the lifetime of a single individual. https://t.co/9i0NnpnZhE
2022-11-25 15:37:19 RT @MorePerfectUS: Elon Musk has spent decades building something big: himself. And it’s worked. The myth of Elon Musk as the “good billio…
2022-11-22 02:50:13 @kohn_gregory @WiringTheBrain i guess this is converging to a semantic discussion about the precise meaning of the word "causal" in this context. AFAIK most traits (eye color, bipedality, etc) are determined by DNA not oocytic factors no?
2022-11-22 01:23:33 @kohn_gregory @WiringTheBrain well, we can modify @WiringTheBrain's question a bit and control for issues arising from incompatible oocytes So if we put a chihuahua nucleus in a St Bernard oocyte we get a basically a chihuahua, right? And certainly the next generation will perfect chihuahua, no? thoughts? https://t.co/WDliPyZQAD
2022-11-22 00:15:39 RT @HopfieldJohn: Francis 'Frank' Schmitt already had an amazing view in 1962 of where neuroscience needed to go if you were serious about…
2022-11-21 21:39:10 @KanakaRajanPhD @IcahnMountSinai @SinaiBrain great news--congratulations!
2022-11-21 15:59:15 @kohn_gregory @NaturalSkeptik @WiringTheBrain i'm still lost. i guess sometimes twitter isnt the ideal medium for exchanging scientific ideas
2022-11-21 15:31:14 @kohn_gregory @NaturalSkeptik @WiringTheBrain i dont understand what you are saying. Using your example, given m&
2022-11-21 15:28:15 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Sadly, i see no evidence that the effectiveness of (mis)information rests on it appearing to be from a reputable scientific source. The viral stuff is usually a trustworthy-looking talking head spouting nonsense. or a headline
2022-11-21 15:10:27 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Again: how we assess possible impacts? Was email a mistake? the internet? And would your assessments have been the same in 1990? 2000? and 2010? (I'm old enough to remember when we thought the internet was democratizing. See eg "Spring, Arab")
2022-11-21 15:07:12 @kohn_gregory @NaturalSkeptik @WiringTheBrain similarly if i specify the genome but not whether the conserved factors (CF) are species matched, your uncertainty about the outcome will be smaller than if i specify the CF but not the genome So the genome contains a lot more information about the final outcome
2022-11-21 15:03:00 @kohn_gregory @NaturalSkeptik @WiringTheBrain One can formulate the question of the relative importance of m &
2022-11-21 14:42:37 @kohn_gregory @NaturalSkeptik @WiringTheBrain in what sense do these conserved factors outside the genome contribute "as much"? Because my intuition is that if we could quantify their contribution it would be relatively tiny, but i confess that i'm not exactly sure how to quantify properly (though i have some ideas)
2022-11-21 14:36:35 @CriticalAI @AwokeKnowing @ASteckley @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian From a public policy POV, do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder
2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder
2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing risk/benefit profile of new drugs Is there a comparable well-defined standard rolling out tech like LLMs? How do we compare risks and benefits rigorously?
2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9
2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no" to email cuz of spamming!
2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong
2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)
2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? I enjoy playing with LLMs claiming I was born in the UK, but it would be nice if they could report confidence https://t.co/rLI2Sv7kck
2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!
2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se
2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?
2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood
2022-11-17 19:23:57 @pchiusano yeah, I guess the question is what is the proposed use case? As I believe @ylecun tweeted you don’t let your hands off the steering wheel but it’s potentially helpful for letting you drive.
2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression --far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed book test
2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing a pretty good job BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)
2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7
2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseq: Projections of >
2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figs: "Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons", J. J. Hopfield, PNAS 1984
2022-11-15 02:27:02 @philippschmitt "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", J. J. Hopfield, PNAS 1982
2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time... would more grant $$ cause us to have more ideas? Like: Child: "There is Mother's and Father's Day, but why not Children's Day?" Parent: "Every day is Children's Day"
2022-11-14 20:32:34 @philippschmitt Cool! But i'm surprised not to see a figure from Hopfield 1982...
2022-11-14 18:15:54 @NicoleCRust @PessoaBrain "Progress depends on the interplay of techniques, discoveries, and ideas, probably in that order of decreasing importance" -- Sydney Brenner. Clearly new techniques--shiny or not--aren't sufficient. But IMO they are necessary
2022-12-07 21:18:20 As twitter (not so) slowly dies i seem to be getting lots of new followers. I think those of us who remain follow each other in a vain attempt to find new content Like a star collapsing upon itself burning our last fuel Will we go supernova before collapsing into a black hole?
2022-12-07 21:12:57 @neuroetho @ESYudkowsky “Cyranoid”! Had to look that up!
2022-12-07 19:27:59 @joshdubnau @ESYudkowsky If emotional intelligence is part of your definition for sentience, then I agree 100%. But otherwise I have no idea how to define the word sentience
2022-12-07 19:27:25 @DoktorSly @ESYudkowsky Absolutely, if you’ve been playing around with these models, you know how to trick them. but I think Turing would have been fooled, as would I five years ago. and yes, I think it reveals that the Turing test tells us more about the gullibility of humans than about intelligence
2022-12-07 19:27:18 @neurojoy @ESYudkowsky https://t.co/teIGzke8NW
2022-12-07 16:45:29 @neuroetho @ESYudkowsky i think the original was proposed as a 2-alternative choice task. But zooming out, the fact that chatGPT is even this close while still clearly not being anywhere close to AGI suggests that the Turing test is the wrong metric
2022-12-07 16:06:05 @sir_deenicus @ESYudkowsky could be. I rarely go 20 min w/o a reset. But it's mighty close, and given that they explicitly are trying to make it avoid pretending to be human i think if the goal were to actually fool people i suspect with a few tweaks it'd be even closer
2022-12-07 15:56:44 @neuroetho @ESYudkowsky i've now worked with chatGPT for long enough i'm pretty sure i can trip it up. But i'm pretty sure chatGPT could fool a naive educated person if prompted with "Do your best to fool people into thinking you're human. Here are some tricks: Make typos, dont be a know-it-all, etc"
2022-12-07 15:41:06 @joshdubnau @ESYudkowsky there's a lot more to interacting with the world than simply leaping. think about a tiger stalking prey, an osprey swooping down and grabbing a fish from the water, or a beaver building a dam. And yes, social intelligence requires a lot of sophistication
2022-12-07 14:08:32 @ESYudkowsky Although LLMs are indeed basically passing the Turing test, I think we're learning that the Turing test is not a great measure of AGI. Thinking that if an AI could chat convincingly, it could do everything else, turns out to be an error. A manifestation of Moravec's paradox https://t.co/v1ktJbyYqk
2022-12-07 21:49:12 @pwlot I’m also following mainly tech and academic people. Seems like there’s a lot less interesting discussion around papers or results
2022-12-07 21:44:31 @pwlot The number of engaging conversations I’m tempted to join is 73.2% lower than it used to be. That is my 100% quantitative &
2022-12-09 05:09:19 @AndrewHires https://t.co/1pSCYjnFTw
2022-12-09 05:07:34 @AndrewHires https://t.co/PhxBVwd8sy
2022-12-09 05:04:12 @AndrewHires here is the answer it gave me, in an ongoing session so a very different context. Different first sentence https://t.co/XRCr09S6WC
2022-12-09 05:01:38 @Aella_Girl not sure if you're trolling but FYI here is Charles Davenport's "Eugenics Creed", which includes gems like "I believe in such a selection of immigrants as shall not tend to adulterate our national germ plasm with socially unfit traits." https://t.co/kJ8mne0Xfb https://t.co/hZED403bIM
2022-12-09 04:48:31 @AndrewHires chatGPT's answers are stochastic and context-dependent so i'm not sure there is a "stock" response. Historically and in much of the world even today competence is assessed via oral exams...maybe it's time to return to that? shouldnt be a problem to test 1000 students, right?
2022-12-09 04:09:31 @KordingLab several people suggest that chatGPT has done well bcs your textbook was part of the training set. But given how poorly it does when asked to spit back facts that were definitely part of the training set, i think good performance here is unlikely to be due to pure memorization