Anthony Zador

AI Expert Profile

Nationality: 
American
AI specialty: 
Neuroscience
Current occupation: 
Neuroscientist
AI rate (%): 
46.33%

TwitterID: 
@TonyZador
Tweet Visibility Status: 
Public

Description: 
Anthony is the cofounder of the international conference "Computational and Systems Neuroscience," which sits at the intersection of artificial intelligence and neuroscience. He was among the speakers at #NeurIPS2020 and readily debates other experts in the community, such as Gary Marcus, on social media.

Recognized by:

Not available

The Expert's latest messages:

Tweet list: 

2023-05-22 20:17:55 @InvariantPersp1 @GaryMarcus believing in the wrong god...exactly. in AI, another analogy would be: AI could have discovered something (eg a fix for global warming) but we failed to discover it in time bcs we artificially slowed AI progress.

2023-05-22 19:38:54 @GaryMarcus this is basically a version of Pascal's wager...we dont know the probability of going to Hell if you dont believe in God, but the outcome is so bad that the rational thing to do is to believe in God. What's the counterargument?

2023-05-22 04:48:14 @moyix @ayirpelle GPT4 seems to have no trouble counting letters https://t.co/YMdXkdqfAD

2023-05-21 13:32:07 “I have multiple programs set up with a global initiative to establish ‘cheating credits’ to maintain ‘sex zero’” https://t.co/QL8fIbwbCF

2023-05-20 22:01:56 Crazy! A pair of bees cooperating to open a bottle of Fanta I wonder if this is somehow related to some "natural" behavior or if this is a one-off that this Bonnie &

2023-05-20 05:05:54 @lukesjulson how would this hypothetical new technique differ from existing spatial transcriptomic methods like merfish or BARseq which can already probe hundreds of genes?

2023-05-20 03:03:59 @AndrewHires so true. (and i am often in the 35% who dont pay attention to what is written and so i push when i should have pulled)

2023-05-20 02:44:54 ChatGPT gets this wrong, GPT4 gets this right. "A glass door has ‘push’ written on it in mirror writing. Should you push or pull it and why?" https://t.co/E8Nbvuv6e5

2023-05-19 19:31:46 @anqi_z bingo!

2023-05-19 19:26:56 @AndrewHires Thanks but as a Southern Californian not sure why you need a fancy weather app, though I guess sometimes it's useful to know whether it's gonna be sunny &

2023-05-19 19:22:21 To clarify: (B) below is a special case of (A). My pet peeve is really that weather forecasters know whether there is a 50% chance that a hurricane is gonna dump 12 hours of rain but the app doesnt have a way to distinguish that from "it will rain for an hour at some point" https://t.co/fYMLiUoaOO

2023-05-19 19:17:45 @tyrell_turing OK, but the weather forecasters know which it is, ie they know whether there is a 50% chance that a hypothetical hurricane is about to dump 12 hours of rain on us, right? The issue is that the app doesnt have a good way to express correlations in the hour-by-hour prediction

2023-05-19 19:13:54 My pet peeve on weather apps: What does "50% chance of rain" mean? Does it mean: (A) 50% chance it will rain at some point (for maybe an hour) at some point during the day (like a passing thunderstorm)? (B) 50% chance it will rain all day? These are pretty different https://t.co/Ve9I32niMY
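The distinction drawn in the three weather-app tweets above can be made concrete with a small simulation. This is an illustrative sketch with invented toy models, not anything from the thread: both models display "50% chance of rain" for every hour of a 12-hour window, yet the day-level outcomes differ wildly because of correlation between hours.

```python
import random

HOURS = 12
TRIALS = 100_000

def correlated_day():
    # Model A: one coin flip for the whole day (the hypothetical
    # hurricane either arrives or it doesn't); if it rains at all,
    # it rains every hour.
    wet = random.random() < 0.5
    return [wet] * HOURS

def independent_day():
    # Model B: each hour is an independent 50% coin flip.
    return [random.random() < 0.5 for _ in range(HOURS)]

def summarize(model):
    days = [model() for _ in range(TRIALS)]
    hourly   = sum(d[0]   for d in days) / TRIALS  # marginal chance the app shows per hour
    any_rain = sum(any(d) for d in days) / TRIALS  # P(rain at some point in the day)
    all_rain = sum(all(d) for d in days) / TRIALS  # P(rain the entire day)
    return hourly, any_rain, all_rain

if __name__ == "__main__":
    for name, model in [("correlated", correlated_day),
                        ("independent", independent_day)]:
        print(name, summarize(model))
```

Both models show roughly 0.5 in every hourly slot, but under the correlated model P(any rain) and P(all-day rain) are both about 0.5, while under the independent model P(any rain) is nearly 1 and P(all-day rain) is nearly 0, which is exactly the information the hour-by-hour display throws away.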

2023-04-14 19:49:25 @joshdubnau In case you want to try some camel milk at home... https://t.co/qqHIIZVWcX https://t.co/3H21xJOQoq

2023-04-14 03:38:27 @GaryMarcus GPT4 gets ectopic in top 4 in the differential. And the previous 3 are all also serious enough to warrant a trip to the ER https://t.co/9D8XkWnOCl

2023-04-14 03:36:52 @GaryMarcus i am puzzled by his claim that ChatGPT’s "worst performance" (missing ectopic pregnancy in the top 6 on the differential, but including appendictis and ovaria cyst) could have killed her if she had self diagnosed. Both would have definitely warranted trip to ER, so not really

2023-04-13 21:48:50 @GaryMarcus @sir_deenicus @bitcloud @ylecun yes and no. For gene KOs 100% agree. But even though potent toxins "mess us up", most operate by highly specific binding to a receptor. so most variants of a great toxin are less great. also asking again: why is a x2 more potent toxin more particularly worrisome??

2023-04-13 21:38:43 @GaryMarcus @sir_deenicus @bitcloud @ylecun This is not serious. there are AFAIK no known LLM-generated zero-shot novel toxins more powerful than botulinum toxins so not false (I'm sure LLMs could also generate 40,000 "possible" variations of the cancer drug Lenalidomide in 6 hrs..validating them is what's hard)

2023-04-13 21:19:18 @GaryMarcus @sir_deenicus @bitcloud @ylecun If it's powerful enough to zero shot generate a novel toxin, it's powerful enough to generate a novel cancer/arthritis treatment. Still not clear why novel toxins are scarier than ordering botulinum toxin from sigma which many Neuro labs do routinely

2023-04-13 17:42:43 @docgotham @GaryMarcus @ylecun Indeed, 1A guarantees the right of the book to exist. But people can and have been prosecuted for suspicion of intent *to use* the information for *criminal purposes* We should focus interfering w/harmful actions not potentially harmful knowledge

2023-04-13 17:38:48 @ruben_we @GaryMarcus @ylecun Indeed, the traditional way is to ask a bacterium (Clostridium botulinum) to do it for you. https://t.co/VAJX035wMS

2023-04-13 17:34:53 @mcdonalds_tim @GaryMarcus @ylecun that's a pretty niche use case. If you really want ideas for how to get away with homicides just read more Agatha Christie novels

2023-04-13 16:03:10 @GaryMarcus @ylecun yes, but as I asked earlier, why do you want a novel synthesis when there are plenty of extremely toxic compounds readily available from Sigma? coming up with synthesis for a novel toxin that has some special properties not available in existing toxins is a research program

2023-04-13 15:23:10 @GaryMarcus @ylecun Uh oh https://t.co/nwqm8hMQxw

2023-04-13 15:14:09 @GaryMarcus @ylecun Figuring out how to make a novel toxin is pretty hard but why bother when Sigma sells plenty off the shelf? Still not seeing the need for choking off easily Google-able knowledge from GPT

2023-04-13 13:49:09 @daniel_eth I can't actually think of many situations where limiting popularization of publicly available info (as opposed to secret info like nuke codes) is the way to go

2023-04-13 13:45:43 @daniel_eth I think making novel bio weapons is pretty hard. you would need access to a biology lab and significant molecular biology skills. once you have that level of expertise not sure that gpt adds much

2023-04-13 12:57:46 If we want to restrict behavior like "synthesizing codeine," our primary approach shouldn't be to limit knowledge. There are better and more effective choke points downstream like restricting access to reagents or just penalizing the crime https://t.co/hnnur8CVMa

2023-04-12 14:00:15 @EliSennesh I think his point is that, as physicists say, the AI alignment problem can be reduced to a previously unsolved problem

2023-04-12 13:28:05 @PeterSherwood "our childhood pet monkey"???

2023-04-12 04:53:44 https://t.co/MfOWe9cGyg

2023-04-12 04:53:43 "where I am worried right now... is the question of, how do you solve the alignment problem, not between an A.I. system we can’t predict yet and humanity...but in the very near term between the companies and countries that will run A.I. systems and humanity?"

2023-04-12 04:53:42 "we have an alignment problem, not just between human beings and computer systems but between human society and corporations, human society and governments, human society and institutions." From Ezra Klein's podcast https://t.co/be1DbfqL77

2023-04-11 21:39:02 @TvanKerkoerle Certainly a human brain is a better model of human brain than is a nonhuman brain. But there are so many things we don’t understand about brains in general and it’s a lot easier to study rodents

2023-04-11 20:58:38 @TvanKerkoerle Yes but it’s so hard not to be fooled into thinking we understand human brains by the fact that we each have one.

2023-04-11 20:40:47 @TvanKerkoerle Yeah but by the same token I feel uneasy when people invoke introspection-based folk psychology to explain human behavior

2023-04-11 20:37:28 @anne_churchland @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun Yes I suspect the studies they published, which helped change laws for drunk driving, airbags, seatbelts, helmets etc, probably saved more lives than any biomedical discovery I could possibly make

2023-04-11 18:36:10 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun I think LLMs are teaching us the extent to which "understanding is part of making inferences" I had a project idea over the weekend which normally i would have bounced off a postdoc but which GPT4 was able to critique and help me refine. It then summarized our convo in latex

2023-04-11 18:32:49 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun side note: My father worked for IIHS in the 1970s &

2023-04-11 18:27:36 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun we agree that LLMs dont "understand." But i'm not sure why that is relevant. GPT4 came up with a very nice list of the arguments against seatbelts. i'm not sure i could have done better, and certainly not in 1 minute. https://t.co/YFeJ8dObUh

2023-04-11 16:06:43 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun I would be very disappointed if all LLMs were chronically hamstrung and prevented from constructing effective Devil's advocate arguments

2023-04-11 13:19:29 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun not sure if you chose vaccines bcs they're politically charged? A lot of Americans disagree w/what i consider to be the right answer. Would a more neutral example like "Given patient X w/symptoms ABC, would a reasonable next step be Z?" also count?

2023-04-11 13:01:15 @bradpwyble @kendmil @mark_histed @GaryMarcus @ylecun can you give an example of good (or bad) inferences in this context? I dont understand what you mean here.

2023-04-11 12:44:23 Summary: This is a fascinating and timely study on banana-peeling by elephants. Kudos! Minor comment: I'm skeptical that it takes a skilled human 25s to peel a banana. In fact, even the human in the video (at 1' 20") peels <

2023-04-11 04:25:28 @kendmil @GaryMarcus @mark_histed @ylecun i think in most setting driverless cars still arent safer. But i completely agree that even if on avg they were to become safer, they will remain unacceptable (and a huge legal liability) if every once in a while they mow down a child under conditions when a human would not

2023-04-11 03:54:20 @GaryMarcus @mark_histed @kendmil @ylecun i said 2 years not 1 but ok we'll check back (probably wont be on this site though...doubt i'll still be on that by then)

2023-04-11 03:47:37 @GaryMarcus @mark_histed @kendmil @ylecun autonomous driving had a long tail of unsolved problems that werent gonna get solved by brute force here, there just needs to be a single innovation. The fact that ChatGPT can even identify the facts that need to be verified is a promising start

2023-04-11 03:33:50 @GaryMarcus @mark_histed @kendmil @ylecun whether it's GPT-5 or 9 or some hybrid or something else, i think in 2-3 yrs we'll have models that can distinguish fact from nonsense, &

2023-04-11 03:21:04 @mark_histed @kendmil @GaryMarcus @ylecun what do you mean by "knows truth"? I think it is not unreasonable to demand that next-gen LLMs should stop just making stuff up...they should really know basic info available from wiki, and more important they should have a reasonable estimate of their confidence about a fact

2023-04-10 17:58:59 @GaryMarcus @kendmil @mark_histed @ylecun predictions are hard, especially about the future Hard to know how this will evolve. But in tech as in bio evoltion it's often easier to build on an existing success than to start over. Seems likely that building up validation etc around an LLM-like core might be a good strategy

2023-04-09 02:53:05 @mark_histed @GaryMarcus @ylecun yes it's absolutely true that LLMs currently spew the mostly likely etc etc. My claim is that there is so much incentive to fix this that some approach in the not-too-distant future will fix it...seems fixable i have not such confidence about eg driving, which remains hard

2023-04-09 01:36:08 RT @nicholdav: Would be great if reputable neuroscientists retweeted this to help offset the damage done by @hubermanlab to our field and i…

2023-04-09 01:31:57 @mark_histed @GaryMarcus @ylecun The fact that LLMs know what constitutes a fact, even if they dont get all of them right, suggests it ought to be able to do a better job learning them that certainly wouldnt fix all LLM problems, but it would address the one that many people are complaining about today

2023-04-09 01:29:25 @mark_histed @GaryMarcus @ylecun LLMs do know what constitutes a "fact". In the example below, one claim is false (i never won a McKnight) My internet presence is enough that it has a sense of who i am, but has to "hallucinate" details It doesnt make many mistake for eg @ylecun, whose net presence is greater https://t.co/l3GIjVsGxV

2023-04-08 17:27:54 @mark_histed @GaryMarcus I can't imagine that the problem of unreliability is insurmountable. But even once solutions are in place I expect we will not grant AI full autonomy anytime soon. Human lawyers will still need to sign off. But one partner will no longer need 5 associates to do the grunt work

2023-04-08 04:23:29 @GaryMarcus Laid off programmers, accountants and attorneys aren't going to be satisfied with UBI

2023-04-07 14:45:14 @raphaelmilliere @ylecun @Brown_NLP @LakeBrenden @davidchalmers42 @Jake_Browning00 @glupyan is there a transcript?

2023-04-07 14:41:39 @loopuleasa @hardmaru @ylecun Even when animals evolve by natural selection, it is common for goals other than self-preservation to emerge Particularly common for social animals (ants, bees, elephants) Also for mammals, which care for their young. Mama grizzlies are famously willing to take on larger males

2023-04-06 15:14:51 @Ananyo @aniketvartak @kendmil @ThomasHaigh I just only learned about Zuse from this thread, but from what I can see, even though he preceded van he did so in isolation so didn't influence him ?

2023-04-06 14:24:14 exactly https://t.co/ZgBWQGRxZg

2023-04-06 13:05:21 @ylecun indeed, I learned about Zuse from this thread, but AFAIK he was isolated from the West. (Apparently due to some "historical events") So it seems he preceded von Neumann, but vN found out only much later so it had no impact (Also learned Shannon's Master's thesis was relevant)

2023-04-06 12:53:10 @kendmil btw i also learned a lot of the backstory of M&

2023-04-06 12:36:22 @kendmil Great book! I learned eg that the main motivation for VN to build the computer was to do better simulations for a bombs but i did not get a clear answer to this particular question

2023-04-06 03:47:35 @VenkRamaswamy hmm. good point. that certainly seems like another interesting potential influence.

2023-04-06 03:40:48 @schulzb589 interesting. i hadnt heard of him but i guess he built a digital computer w/o any knowledge of even Turing.

2023-04-06 03:39:57 @realAKoulakov https://t.co/sGSgcAxpAD

2023-04-06 03:18:40 So that leads me to conclude that the inspiration for using bits in modern von Neumann architecture computers came from McCulloch &

2023-04-06 03:18:39 Long before von Neumann, Boole (1815-1864) proposed Boolean algebra. But AFAIK Boole really wasnt thinking about what we today would call "computation" 2/n

2023-04-06 03:18:38 Did the idea of computing with bits originate with von Neumann (1945) EDVAC? And was von Neumann in turn inspired by McCulloch and Pitts (1943) neural network paper? Ie was computing with bits an abstraction of neural spikes? is this historically accurate? 1/n https://t.co/TgHuOIKAZ0

2023-04-04 20:17:56 Interesting take argues that the call for a "pause" on LLMs exaggerates future risks but minimizes current risks https://t.co/thcwqHTtxo

2023-04-04 15:27:25 @criticalneuro @jbimaknee I think much of what motivates the tens of $billions/yr investment in AI (and LLMs in particular) are potential commercial applications That said, I agree 100% that resource constraints are interesting and important

2023-04-04 15:19:23 @harrysapkota @dennis_maharjan

2023-04-04 15:18:24 @MatinYousefA Sadly, a great Persian postdoc candidate ultimately declined years ago bcs of visa issues (at least that's what he told me)

2023-04-03 23:07:11 @LuisHN20 @GonzaloOtazu

2023-04-03 18:28:36 @jorgefmejias sadly no

2023-04-03 17:16:22 @neuroecology @jbimaknee i would include vision processing as part of "interacting w/the sensorimotor world". It just so happens that this is on of the best studied questions in the history of AI, and we've made considerable progress since the early days of machine vision.

2023-04-03 14:58:26 @jbimaknee i think a more fundamental difference is that brains evolved to maximize the capability to interact with the sensorimotor world in real time, whereas AI has been developed to solve problems that are perceived as commercially relevant Moravec's Paradox https://t.co/akXxN6vlVY

2023-04-03 14:55:08 Japan Colombia Nepal Peru Taiwan USA Switzerland Slovakia Germany Poland Canada Portugal Pakistan Lithuania France India Argentina China Israel South Korea Dominican Republica Romania Russia Netherlands

2023-04-03 14:55:07 Present and former students and postdocs in my lab hail from at least 24 countries. (On order: the Netherlands flag) https://t.co/lNcfZFbRmt

2023-04-03 11:59:53 @gershbrain @MorseCell ANNs are a non-trivial AI system in which the architecture is *inspired* by (some) neuroscience knowledge.

2023-04-03 05:10:01 @MorseCell @gershbrain the goal of AI is to mimic the capabilities, but it's hard using just observation of the input-ouput function bcs it's underconstrained The inspiration behind NeuroAI is that it's much easier to eg mimic ANN if you have its weights rather than just a finite set of I/O pairs

2023-04-03 04:27:37 @gershbrain Different people get inspiration from different sources. Not everyone finds neuroscience inspiring. But given that the goal of AI is to mimic a physical device to which we have partial access, seems not unreasonable to at least take that seriously

2023-04-03 04:25:52 @gershbrain in the original EDVAC 1945 where he defines the "von Neumann" architecture, he devotes an entire section to discussing parallels w/real neurons (via McCulloch-Pitts). This is the only paper cited in the entire report. It is pretty clear it was on his mind as "inspiration" https://t.co/cQo6ndfwYQ

2023-04-03 04:21:10 @gershbrain You seem to be arguing against slavishly copying every bio detail -- a straw man. That's not "inspiration" I think the best eg btw is ANNs, which consist of "neurons" that of course are not realistic As Mark Twain said: "History never repeats itself but it rhymes.” https://t.co/Lvy28f9uAN

2023-04-01 21:32:39 These transcripts of a discussion about AI alignment, modified from a freewheeling discussion on FB 4 yrs ago w/me, Stuart Russell, Bengio, @ylecun and others, is somewhat hard to follow but still fun to read. Would be interesting to revisit &

2023-04-01 21:01:10 @marenkahnert @davisblalock https://t.co/pfXmOJCTLo

2023-04-01 15:29:20 @joshdubnau I can do whole cell patching both in slices and in vivo. I think this will help figure out how zomies work The rest are PI skills I can write grants. (This will be useful bcs zombies dont pay taxes so NIH funding'll be down) I can make PowerPoint slides w/other people's data

2023-04-01 14:24:55 @davisblalock It also had no hesitation arguing against aa https://t.co/gsNX19GstU

2023-04-01 14:22:53 @davisblalock do you test these from a clean start? Here is my first try. I had no problem getting ChatGPT to argue either in favor or against marijuana legalization. https://t.co/Un3vy9bs0Q

2023-04-01 12:30:35 @CellTypist @wc_ratcliff For N neurons there are up to N^2 connections so naively num of connection params dominates as N-->

2023-04-01 01:29:40 @StevenQuartz @joshdubnau Exactly! And whether this is net positive or negative depends on how society handles it. I'm not optimistic

2023-04-01 01:15:26 @joshdubnau None of the above. Transformative but not existential, like the internet. Lots of pluses and minuses, not sure about net. But certainly not modest

2023-03-31 22:07:54 @wc_ratcliff yes i think that as a result honeybees have a very hard time learning English the number of parameters in GPT4 (~10^14) is within striking distance of the number of bits needed to specify the full wiring diagram of a human brain (>

2023-03-31 21:47:05 @joseinvests @mrgreen @mckaywrigley i'm also getting this error

2023-03-31 14:03:09 @GaryMarcus Did the letter define (or even give a hint) what "more powerful than GPT-4" means? (Y/N) Did it lay out how such a pause could be "public and verifiable," given that many organization have the resources to train powerful LLMs? (Y/N) https://t.co/jXgCaF4fbi

2023-03-31 02:44:21 @joshdubnau @kendmil @GaryMarcus i think in this scenario "responsible players" = "companies that respect the ban and stop research" whereas "others" refers to companies that dont respect the ban

2023-03-31 02:41:59 @GaryMarcus @kendmil @joshdubnau the claim that Eliza-GPT was causal in the suicide of a depressed person is utterly specious. It's right up there with the claim that "Gloomy Sunday" is responsible I thought you were a proponent of Pearl and causal reasoning https://t.co/svMYbHHjNI

2023-03-31 02:30:30 @GaryMarcus @kendmil @joshdubnau The majority of people signing this think AI will be so powerful we are facing Skynet or at least the UberPaperclipFactory. The very real issues IMO are far more mundane: misinformation, job loss, etc. But IMO a moratorium on some matrix multiplications is not a great approach

2023-03-30 17:14:27 Great summary of intuition about how transformers work https://t.co/rFodQWv1KD

2023-03-30 04:09:02 @eenork So sociality evolved independently in termites? Is there anything distinctive about their social structure that is different from ants bees and wasps ?

2023-03-30 01:53:49 @joshdubnau @GaryMarcus i was making an analogy with an hypothetical future in which we are arguing that crispr-ing 3 genes is ok but crispr-ing 42 genes is too dangerous. Like many analogies, not all aspects represent a perfect parallel.

2023-03-30 01:39:17 @conatus1632 @vineettiruvadi @marek_rosa i think pregnancy is just long enough to wire up the brain w/all the innate knowledge we need to be born with, and then we acquire all the rest of the stuff. Eg we are born w/the capacity for language but need to be exposed to the specific language spoken by our tribe

2023-03-30 01:01:05 @joshdubnau @GaryMarcus Yes they are explicitly invoking a comparison to 1970s mobio But a better analogy would be if in 20 yrs, when Dupont and Pfizer are routinely crispr-ing embryos, someone proposes a 6 month ban to prevent automated crispr of >

2023-03-30 00:57:40 @jbkinney Interesting. My informal polling reveals that though a sizeable fraction of people agree that lab leak cannot be completely ruled out but most think it’s either unlikely or very unlikely

2023-03-29 21:51:21 @JasonWilliamsNY @GaryMarcus At least someone got the reference.

2023-03-29 21:50:44 @GaryMarcus Am actually serious. IMHO many of the immediate AI risks (eg spread of misinformation etc) arise not from AI but rather from many modern apps. I do not believe (as do many of the signatories) that LLMs risk AGI-mediated existential risk (Tweet itself adapted from: ) https://t.co/EfqvzL7JOs

2023-03-29 21:17:51 @GaryMarcus How about instead of a 6 month moratorium on AI, we just go with a “total and complete shutdown” of all apps designed to monetize our attention at the expense of our privacy "until our country's representatives can figure out what is going on.”

2023-03-29 20:27:03 @vineettiruvadi @marek_rosa It’s meaningful because many/most organisms are close to being ready to go out of the box. Colts can stand spiders can hunt, etc.. Humans are an outlier in how helpless we are at birth

2023-03-29 16:19:29 @nbonacchi https://t.co/zqj6q0Gv9U

2023-03-29 16:04:21 Lockhart's Lament https://t.co/XxGnTlhylx https://t.co/EV09VgqKCB https://t.co/pLj1vhn0wB

2023-03-29 16:01:25 there are real risks from current GPT-4-level AI, mostly along the lines of generating more misinformation, more aggressive scamming, etc. Even more significant is the degree of job replacement. But these issues are already quite significant w/GPT-4

2023-03-29 15:56:37 Does anyone understand, concretely, what is actually being called for? "pause in the development of A.I. systems more powerful than GPT-4" how is the power of GPT-4 even defined? Number of parameters? Function?

2023-03-29 15:15:59 @_TheTerminator_ Skynet became 2:14 a.m., EDT, on August 29, 1997. I guess it's been biding its time for the last quarter century?

2023-03-29 15:09:20 out of all possible AI futures (C3PO, HAL, SKynet/Terminator, Her, Star Trek computer, etc), i would never put my bet on the sandworms. https://t.co/OHNXHpPm3z

2023-03-29 15:00:08 @hb_cell @neurobongo it is remarkably close on a lot. It accelerates my dates a bit. PhD at Caltech is reasonable considering i worked w/Christof Koch who was there (but i was at Yale) I think it partly merged me w/@svoboda314, who was at Bell Labs, did win a Brain Prize, and is a member of the NAS

2023-03-29 04:28:17 @neurobongo It awarded me a Turing Prize and elected me to the National Academy of Sciences. It also praised me for my (currently nonexistent) skill at violin and piano (Actually this was my obituary)

2023-03-29 00:13:38 @marek_rosa An upper bound on the number of parameters needed to specify brain connections is probably ~10^15. however, the underlying complexity is probably at least 6 orders of magnitude smaller. https://t.co/9i0NnpnZhE https://t.co/fkmJasrC7e
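The arithmetic behind the tweet above can be sketched back-of-envelope. The neuron, synapse, and genome counts below are commonly cited round figures (assumptions on my part, not numbers from the thread); they land near the ~10^15-bit upper bound and the roughly six-orders-of-magnitude compression the tweet mentions.

```python
import math

NEURONS  = 1e11   # ~10^11 neurons in a human brain (commonly cited estimate)
SYNAPSES = 1e14   # ~10^14 synapses (commonly cited estimate)

# Naively, naming the target of each synapse takes log2(NEURONS) bits.
bits_per_synapse = math.log2(NEURONS)      # ~36.5 bits
naive_bits = SYNAPSES * bits_per_synapse   # ~3.7e15 bits: an upper bound

# The genome, which wires the brain via developmental rules, is only
# ~3e9 base pairs at 2 bits each, i.e. ~6e9 bits, so the wiring diagram
# must be compressible by roughly six orders of magnitude.
genome_bits = 3e9 * 2
compression_orders = math.log10(naive_bits / genome_bits)  # ~5.8
```

The point of the sketch is only that an explicit connection list is vastly larger than the genomic description that actually builds the brain, which is why the "underlying complexity" can be so much smaller than the parameter count.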

2023-03-28 15:36:48 @neuroetho i dont know how it decides. It shoudl be smart enough to use this in the final mile when eg doing a physics problem. A lot of times it gets everything right but the arithmetic

2023-03-28 15:09:45 @ylecun @HulsmanZacchary Surprisingly, it turns out that developing fish whose nervous systems are pharmacologically silenced during development swim normally the moment the block is removed @bdanubius https://t.co/SeLHrEed2M

2023-03-28 14:59:18 @iandanforth @ylecun @HulsmanZacchary We have some examples where we know a bit. This is a very active area of research

2023-03-28 14:44:24 @ylecun @HulsmanZacchary Animals operate by a mix of innate and learned which interact in interesting ways. Q is not whether but how Eg two related species of mice build different tunnels. Even if you cross-foster, pups' tunnels are those of their bio not foster parents https://t.co/7o0QPtkAYf

2023-03-28 07:25:12 @ylecun So would that imply that congenitally blind children learn language more slowly than sighted children? Apparently not. See: Landau et al (2009) "Language and experience: Evidence from the blind child" (h/t chatGPT4 for finding me the ref on 1 query) https://t.co/aH6RyLMIS7

2023-03-28 04:28:06 ChatGPT can finally do arithmetic! With plugins enabled, it is clever enough to ask Wolfram for help multiplying big numbers. https://t.co/HpRedhLQap

2023-03-28 04:21:56 @ylecun LLMs need x10,000 to learn languages than humans because they are missing the appropriate innate machinery (inductive biases).

2023-03-28 03:49:20 @emollick @calebwatney i dont quite get this argument If we ever realize Simon's 1965 prediction “machines will be capable, within twenty years of doing any work a man can do” then by construction there will be no jobs left for humans to do The relative value of labor to capital will approach 0 no?

2023-03-28 03:40:10 RT @nearcyan: it may be useful to establish a "proof of humanity" word, which your trusted contacts can ask you for, in case they get a str…

2023-03-27 22:24:44 @ryrobyrne @jmourabarbosa @dlevenstein @dileeplearning @tyrell_turing @Timothy0Leary @neuralreckoning @Alxmrphi Although one would still have to explain why apes and other animals don't learn grammar with comparable non-linguistic data

2023-03-27 19:14:12 @mark_histed I asked chatgpt to write my obituary I was excited to hear about all the prizes I had won including the Turing Prize and that I had been elected to the National Academy of Sciences Also I apparently learned to play violin and piano. Woohoo! So not so mad at "misinformation"

2023-03-27 18:56:48 @mark_histed I think people really need to learn that they can't trust LLMs for information. I remember it took a few years for people to figure out that they can't trust everything they read on the internet

2023-03-26 17:41:25 "Dystopia is when robots take half your jobs. Utopia is when robots take half your job.” https://t.co/DEe2ptd6Pp

2023-03-26 16:17:51 @BWJones @NPR a lot of things happened in the early 1980s, including the remarkable acceleration of income inequality: 700% growth for top 0.01% since 1980 vs almost no growth for vast majority (bottom 90%) https://t.co/84egpjcnyT

2023-03-25 18:43:13 Many are responding that this is the fault of companies marketing their LLMs as multitools Jeep's branding implies I will be suddenly start playing volleyball on the beach w/ a dozen impossibly fit friends but I wouldn't be too shocked if they don't show up

2023-03-25 16:55:50 @Post_human__ Agree!

2023-03-25 16:40:15 A lot of LLM crtiques these days are like "Wow, this screwdriver is completely useless for hammering this nail. What a fail. "

2023-03-25 12:44:06 @GaryMarcus @pomeranian99 LLMs have a lot of limitations. Some are likely to be easily fixable with tweaking and scale, whereas others are likely to be more fundamental My guess is that LLMs are likely improve quickly at writing longer code blocks. Do you not share that intuition?

2023-03-24 23:45:27 @tyrell_turing @jmourabarbosa @Timothy0Leary @neuralreckoning @Alxmrphi I'm not sure POS argues for a "fundamental inability," but rather insufficient data (wiki: "it is possible to define data D but D is missing from speech to children") I think it is as close as you can get to an "inductive prior" w/o being in a probabilistic framework, no? https://t.co/0ie4qo86EA

2023-03-24 16:43:51 @benchthief yes, I completely agree. I think misalignment is indeed a huge problem, but not in the way that it’s formulated with paper clips. And I don’t think it’s existential risk, just very unpleasant outcomes, the way unfettered capitalism can often lead to very unpleasant outcomes.

2023-03-24 16:41:16 @jakhmack maybe it would if given free reign. But in this particular case, I asked it to write those stories to save me time.

2023-03-24 16:40:38 @agvaughan how is it that the paper clip AI became all powerful? Wouldn’t the shoelace factory AI be able to prevent the paper clip AI from taking us all down? For that matter why not build an AI-police AI whose job it is to patrol the factory AI? I just don’t get it.

2023-03-24 13:39:26 @BAPearlmutter @RichardMCNgo where did the ">

2023-03-24 12:54:14 @RichardMCNgo @BAPearlmutter Even reasonable people use stories to form intuitions, and the intuitions then drive their "reasoning". So here are some alternative and cheerier endings for that story "Predictions are hard, especially about the future" -- Yogi Berra https://t.co/o8zKSAZOTO

2023-03-24 12:45:52 I asked chatGPT to help me write alternative endings for the paperclip optimizer story. All are cheery &

2023-03-23 22:57:44 @Singularitarian @daniel_eth i’m not sure that Turing specified a “smart adversarial judge". is that really the standard? How long of conversation do we get to have?

2023-03-23 15:43:14 @kevinmcld @ylecun @seanescola @tyrell_turing @BOlveczky @chklovskii @anne_churchland @ClopathLab @JamesJDiCarlo @SuryaGanguli @koerding @joe6783 @countzerozzz @AdamMarblestone @pouget_alex @SaraASolla @sejnowski @SussilloDavid @AToliasLab @doristsao "The reward learning paradigm was just overturned weeks ago. " can you unpack this?

2023-03-23 14:16:12 RT @AToliasLab: #ChatGPT's potential to pass the Turing test marks a pivotal moment in AI. We advocate for an embodied Turing test--AI anim…

2023-03-23 13:10:38 @daniel_eth Since i think LLMs effectively already pass the conventional Turing test, now might be a good time to start focusing on the embodied Turing test https://t.co/TwJkxHvyA0

2023-03-23 04:11:07 @tyrell_turing @jmourabarbosa @Timothy0Leary @neuralreckoning @Alxmrphi i thought the argument that humans "must" have innate structure was based on "poverty of the stimulus" LLMs are not experiencing any such poverty! They have accumulated Musk levels of stimulus

2023-03-23 02:37:38 @daniel_eth you mean literally ask someone to determine whether A or B is the LLM? I guess the question is who and how long. Took me ~1 hr playing with chatGPT before i felt i could reliably trip it up. Pretty sure >

2023-03-23 02:30:31 @daniel_eth Hmm. I thought AI was able to do this for a while now?

2023-03-23 02:18:08 apologies to anyone i missed!

2023-03-23 02:18:07 @ylecun @seanescola @tyrell_turing @BOlveczky finally! @chklovskii @anne_churchland @ClopathLab @JamesJDiCarlo @SuryaGanguli @koerding @joe6783 @countzerozzz @AdamMarblestone @pouget_alex @SaraASolla @sejnowski @SussilloDavid @AToliasLab @doristsao https://t.co/J00GGZ2geO https://t.co/U9yAL3NEDk

2023-03-22 21:46:58 @fabianstelzer works great! https://t.co/W5EoQbVjrP

2023-03-22 21:20:42 @joshdubnau @BAPearlmutter @benchthief @RichardMCNgo AIs don't incinerate humanity. People incinerate humanity

2023-03-22 18:57:48 @regretmaximizer @hardmaru @ylecun I believe this person believes you conclusively refuted our claim so plz correct away https://t.co/w6vVdQS46Z

2023-03-22 18:56:12 @regretmaximizer @hardmaru @ylecun I think it is this: you are arguing that “AI can NEVER be aligned because Y" we are arguing that it is not inevitable that AI will develop a self-preservation instinct and try to dominate the world That is very different from arguing that one couldn’t program it to do those things

2023-03-22 18:00:31 @regretmaximizer @hardmaru @ylecun Second, almost all animals have an instinct to survive. By contrast, very few animals have an instinct to deceive. So I’m totally missing the argument.

2023-03-22 17:59:00 @regretmaximizer @hardmaru @ylecun first and foremost I think it’s worth remembering that when you put something in quotes, the quoted stuff is supposed to accurately reproduce what somebody else said unless you were trying to make some subtle recursive comment about “deception" by deceiving in your actual tweet

2023-03-22 17:30:05 @regretmaximizer @hardmaru @ylecun This quote does not appear in our article "Artificial intelligence never needed to evolve, so it didn't develop the survival instinct that leads to the impulse to deceive others" What does appear is a related quote in which the word "dominate" replaces the word "deceive" https://t.co/ctmeGJdOqy

2023-03-22 16:59:21 @nbonacchi In Buddhism, om is considered "the syllable which preceded the universe and from which the gods were created." https://t.co/crmG7bFuGu

2023-03-22 04:57:26 For humans, enlightenment involves shedding all desire and attachment For a paperclip-maximizing AI, it involves shedding the desire to make paperclips Maybe super-intelligence leads to AI enlightenment? 01001111 01001101

2023-03-22 04:55:52 @RichardMCNgo @BAPearlmutter For humans, enlightenment involves shedding all desire and attachment For a paperclip-maximizing super-intelligent AI, it involves shedding all paperclips Maybe super-intelligence leads to enlightenment? 01001111 01001101
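
The binary sign-off in these two tweets decodes, fittingly for the enlightenment joke, to the ASCII string "OM". A minimal sketch of the decoding:

```python
# Decode the tweet's sign-off: each 8-bit group is an ASCII character code.
bits = "01001111 01001101"
decoded = "".join(chr(int(group, 2)) for group in bits.split())
print(decoded)  # -> OM
```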

2023-03-22 04:43:35 @RichardMCNgo @BAPearlmutter @benchthief Yes, it boils down to your belief that the risk >

2023-03-22 03:19:32 @BAPearlmutter @benchthief @RichardMCNgo You're basically arguing Pascal's wager: You better believe in God bcs even if you think prob(God exists) is really low, the downside risk=eternal damnation, ie infinite. How is your argument different?

2023-03-22 00:48:23 @BAPearlmutter @benchthief @RichardMCNgo i'm confused. are you really arguing that bcs this one dog (out of the 75M dogs in America today) killed these 2 kids, we must reject the possibility of ever domesticating AI?

2023-03-22 00:17:24 @pwlot @nearcyan it will be a long time before AIs load the dishwasher as well as i do

2023-03-21 23:42:01 @FlyingOctopus0 @ylecun Of course! Humans will absolutely weaponize AI. Already happening. Last paragraph of our article. And there are a lot of other consequences. Personally I’m most concerned about massive unemployment.

2023-03-21 22:48:05 @todayyearsoldig which i guess is why they bred breeds like Irish Wolfhounds (left) and Great Pyrenees. https://t.co/EAtEYkco1K https://t.co/iGTn3z6hp0

2023-03-21 22:38:44 Fundraiser for An Wu's parents, who must travel from China to San Diego and Montreal and stay several months to join search https://t.co/q3GIr2R3gi

2023-03-21 21:45:35 @kendmil If we could predict the language ability (or perhaps some component of it) for each genotype close to a modern chimp and each generation engineer the closest chimp* to our chimp, that would constitute a gradient descent approach to adding linguistic ability to a chimp

2023-03-21 17:06:21 @joshdubnau @neuralreckoning Indeed we have strong innate priors for learning language I have wondered how much extra data an LLM needs to learn eg French after learning English

2023-03-21 16:41:12 @ShumayelK @ProfNoahGian @sciam @ylecun https://t.co/k7mBixeyC7

2023-03-21 16:13:17 @darrellprograms @hardmaru Humans are probably not much smarter than many other animals. What makes us more successful is language which allows us to accumulate knowledge over generations https://t.co/bi75pGZv5W

2023-03-21 15:45:01 RT @CosyneMeeting: Dear Cosyne Community, Last week, An Wu, a postdoc in the Komiyama lab at UCSD who presented at this year’s meeting, ha…

2023-03-21 15:42:52 @darrellprograms @hardmaru This highlights some current limitations of how we formulate objective functions Animals evolved to flexibly balance the "4 Fs" (feeding fighting fleeing and mating). This machinery was extended to innate "morality" in humans. We need a way to instill AI with Asimov's 3 Laws

2023-03-21 14:16:56 @agvaughan @ylecun so you're arguing that “seeking to dominate" is not an inevitable strategy, but nonetheless baked into LLMs bcs they were created in our image? LLMs are tainted with original sin Only the 3 elven LLMs are spared this taint, for they were forged in secret

2023-03-21 13:40:46 @kevinmcld @RichardMCNgo @BAPearlmutter i'm not sure i understand what you are saying here...can you unpack that?

2023-03-21 13:38:21 completely agree that Darwinian evolution is a very inefficient algorithm. Lamarckian evolution is much more efficient! https://t.co/6fXlvafBfX

2023-03-21 13:37:35 Update on An Wu https://t.co/FzBWGGaiPQ

2023-03-21 13:28:47 @RichardMCNgo Sorry i only see 2... Also can you engage the argument that (1) centering prob of "world domination" over other strategies is about our priors (since we have no data about super-AI strategy)

2023-03-21 13:14:13 I dont see why "AI seeks to achieve world domination" is "roughly equivalent" to "a powerful technology is unleashed and has unexpected consequences" The former is IMO a special case, overemphasized bcs of our intuitions as warlike apes The latter is inevitable https://t.co/3e24pZdCDV

2023-03-21 12:59:10 @rwalker1501 @neuralreckoning Yes I think we agree. 10,000 lifetimes is not much. Assuming that pre-linguistic human population size was >

2023-03-21 04:48:25 @Tor_Barstad @RichardMCNgo @BAPearlmutter it’s true that I’m not convinced of the singularity or the rapture or whatever it is. But I’m not really clear why that’s relevant.

2023-03-21 04:47:24 @Tor_Barstad @RichardMCNgo @BAPearlmutter but it was easy enough for us to co-opt the natural tendency of wolves to cooperate yielding domesticated dogs There does not appear to be a fundamental barrier to interspecies cooperation

2023-03-21 04:29:50 @Tor_Barstad @RichardMCNgo @BAPearlmutter i certainly dont want to make the claim that cooperation is a universal convergent goal But i do claim that if we were elephants or bonobos, it would dominate our thinking as a convergent goal--it would be our prior, just domination and destruction are priors for us warlike apes

2023-03-21 04:03:15 @Tor_Barstad @RichardMCNgo @BAPearlmutter misalignment btw your stated goals and your result is super common in training lab animals (which i do a lot) and also genetic screens (which i do a bit and read a lot) what is striking is how utterly unexpected the results are. the space of possible solutions is often huge.

2023-03-21 03:59:45 @Tor_Barstad @RichardMCNgo @BAPearlmutter i was onboard right up until he simply asserts that "not being destroyed" is a convergent instrumental goal as though that were obviously true a different person/species might think "gaining everyone's cooperation" is the overarching obvious instrumental goal or 1000 others

2023-03-21 02:06:45 @neuralreckoning interesting question! GPT-3 was trained on ~1e12 words assume we speak 1e4 words/day * 1e4 days/life = 10^8 words/life so assuming it took more than 10,000 individual human lifetimes for language to evolve i think not but reasonable question &
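
The back-of-envelope arithmetic in this tweet checks out; a minimal sketch using the tweet's own round numbers (all order-of-magnitude assumptions, not measured quantities):

```python
# Order-of-magnitude check: GPT-3 training data vs. one lifetime of speech.
training_words = 1e12        # ~1e12 words in GPT-3's training set (tweet's figure)
words_per_life = 1e4 * 1e4   # 1e4 words/day * 1e4 days/life = 1e8 words/lifetime
lifetimes = training_words / words_per_life
print(f"{lifetimes:,.0f} human lifetimes of speech")  # -> 10,000
```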

2023-03-21 00:33:05 @RichardMCNgo @BAPearlmutter ...and we also have to assume that our other AI system, whose only goal is to "protect the human race from rogue paperclip factory AIs," is for some reason inferior to the paperclip AI system

2023-03-21 00:31:04 @RichardMCNgo @BAPearlmutter Yes, but i think the field overestimates risks bcs we implicitly attribute biological drives (like survival, reproduction &

2023-03-20 21:59:41 @RichardMCNgo @BAPearlmutter ah ok! I'm familiar with the "paperclip factory" scenario, in which wiping out the human race is an incidental byproduct. But are you arguing that malevolent AI, with a goal of domination, is actually likely and if so what's the argument?

2023-03-20 21:45:06 @hardmaru @ylecun Since there appears to be a lot of misunderstanding about what our 2019 article said, I asked chatGPT to clarify its central thesis (No blame for those who can't be bothered to read the whole thing...we're all busy, and this is twitter) https://t.co/bLQ5H7tWXz

2023-03-20 21:41:53 @RichardMCNgo I asked chatGPT to clarify the central claims of the article, to help those who can't be bothered to read the whole thing. https://t.co/KOkVxubJK1

2023-03-20 21:37:57 @RichardMCNgo This article specifically focuses on, from a neuroscience perspective, the naivety of the malevolent Skynet scenario It does not address paperclip factories And certainly does not deny likely human-guided military applications @BAPearlmutter https://t.co/8EQUqeSgFd

2023-03-20 21:11:34 short (10 min) and sweet talk by Kevin Mitchell https://t.co/nmgu0OnABL @WiringTheBrain

2023-03-20 15:13:51 @IamEXS @hardmaru @ylecun is the implication that our jobs or livelihoods would somehow have been at risk had we reached the conclusion that the Skynet scenario was plausible? no. I am an academic neuroscientist who likes to dabble in evolutionary theory and AI. I have no secondary agenda

2023-03-20 14:50:05 @loopuleasa @hardmaru @ylecun yes but it shows that evolution can result in agents that do not solely prioritize self-preservation, where "self" is defined as the individual so we need to get AI and human agents' goals aligned, just like ant workers and queens goals are aligned

2023-03-20 14:34:21 @loopuleasa @hardmaru @ylecun many animals have evolved to prioritize their own individual survival below that of other individuals or the group, at least in some circumstances: ants, wolves (who sacrifice themselves for the pack), many mammalian mothers (famously mama bears), humans, etc.

2023-03-20 14:31:49 Maybe now would be a good time to remind people of this brilliant lecture "Superintelligence: The Idea That Eats Smart People" here it is in text form https://t.co/Y7uHI1bBHH https://t.co/IPSCmMBwoF

2023-03-20 14:07:19 @RichardMCNgo We wrote that article 4 yrs ago from a neuroscience perspective for the general public on the most common (at the time) public concern it was NOT a 2023 general meditation about alignment or AI risk Russell Reith Lectures provide a sane &

2023-03-20 04:38:56 @mazefire56 hmm. so if your mousetraps work so well, why do you still have mice in your basement?

2023-03-20 02:49:39 The article @ylecun and i wrote a few years ago is under discussion again. https://t.co/fmQFXWruba

2023-03-20 02:29:07 @primalpoly @ProfNoahGian @sciam @ylecun anyway, i choose option (1)

2023-03-20 02:28:17 @primalpoly @ProfNoahGian @sciam @ylecun We wrote that article 4 yrs ago for the general public. it was not a general 2023 meditation about alignment. It was a 2019 neuroscience perspective on the most common (at the time) public concern So more of a "tin man" than straw man take (Wizard of Oz/Terminator allusion)

2023-03-19 21:11:34 12 yo, upon learning that the autobahn has no speed limit, asks: When your car's GPS considers routes that might include the autobahn, what speed does it assume you will be going when computing your ETA? Anyone know the answer?

2023-03-19 17:41:57 "In other words, as Noah likes to say, “Dystopia is when robots take half your jobs. Utopia is when robots take half your job.” Where we end up boils down to sociopolitical choices. Sadly I dont see any path for US-style capitalism to lead to Utopia https://t.co/p2xInNjLSp

2023-03-19 16:04:11 @VishwajeetAgra5 @ProfNoahGian @sciam @ylecun I would recommend Stuart Russell's Reith Lectures for a very sane, balanced, up-to-date discussion https://t.co/iw2UemjbvV

2023-03-19 14:26:14 @ProfNoahGian @sciam @ylecun AI has many risks. Eg see Russell's awesome 4-part BBC Reith Lectures But when we wrote that 4 yrs ago the main one the public worried about was "AI takes over the world," an idea inspired by the false equivalence that "intelligence = power lust" https://t.co/0iblJXTZR1

2023-03-19 14:02:59 @SilverStar_92 @ProfNoahGian @sciam @ylecun I would say "instructing your LLM to misbehave and destroy the world" falls in the category of robot soldiers remaining under our our control and for which we have only ourselves to blame https://t.co/rmwHaMUBzM

2023-03-19 13:59:33 @awadallah @OpenAI i can't replicate this. Do you get that answer from a clean start or does it depend on context? https://t.co/EZQx5lNWfS

2023-03-18 20:37:48 @StevenQuartz @IntuitMachine @MatteoCarandini sorry, i meant ref #3, with matteo carandini (like ChatGPT, i have trouble counting)

2023-03-18 20:12:09 @YSPTSPS Maybe. But it's more like having a conversation with an expert in that you can dive deeper and get clarification by asking follow up questions Once we can trust it, it'll be more efficient than a static review

2023-03-18 20:08:58 @PessoaBrain @MatteoCarandini I think so

2023-03-18 20:08:48 @IntuitMachine @MatteoCarandini In this example #1 got the title and authors correct but the date and link were wrong. #4 was a pure fabrication

2023-03-18 20:07:33 @kendmil @MatteoCarandini I'm paying the monthly fee for access to faster chatgpt3.5. Got automatic (limited) access to 4. (25 queries every 3 hrs)

2023-03-18 18:53:16 ChatGPT4 has gone from 200 mcg LSD-induced hallucinations to microdosing I asked it to identify my 5 most important papers. It lists 3 perfectly, with clickable links

2023-03-18 18:27:38 RT @tyrell_turing: Friends, we need your help. An Wu, a postdoc from UCSD is missing post-Cosyne. We're trying to locate her. We're worried…

2023-03-18 17:57:25 it surely helps that this literature has been reviewed to death by others...it's not synthesizing its own unique vision of the field i think. but if this is any indication it looks like chatGPT4 might be a good way of diving into the literature of a field i'm less familiar with

2023-03-18 17:54:28 *"it did a good job" not "good not" (my typo)

2023-03-18 17:46:02 here are some controversies it identified, with valid references. i asked it to expand further on some of these subjects and it did a good not (data not shown) @dennis_maharjan -- should be useful when writing the first chapters of your thesis... https://t.co/QSslr8eJjS

2023-03-18 17:46:01 i just asked chatGPT4 to basically write a review article on the striatum. Unlike chatGPT3, it seems largely correct--no major hallucinations--and the references are real quite impressive https://t.co/Wcuu7fOQDF

2023-03-18 16:49:06 @Bazzoid @AllenNeuroLab @damianpattinson @fraser_lab @Nature @eLife maybe different fields are different. Certainly not my experience in neuro, where "top" journals have professional editors who can take the time to discuss reviews

2023-03-18 16:33:07 @AllenNeuroLab @Bazzoid @damianpattinson @fraser_lab @Nature @eLife In my experience low IF journals (like J. Neurophys, 2.7) dont demand fewer experiments than high IF journals (like Nat Neuro, 25) The main difference is just in how interesting the result is perceived to be (i have enough rejected papers so i have a pretty big sample size)

2023-03-18 16:17:46 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife I have several papers that have been in the review process for years after the preprint went up We typically perform 1-2 person-years of additional experiments all of which will remain unread in the supp mat, just to appease reviewers, for a 5% improvement. Huge opportunity cost

2023-03-18 16:15:20 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife indeed, the fact that no one comments on preprints is precisely the problem we need to fix. Time is not the issue IMO...i assume we all read papers we care about, have journal clubs, etc. I think the problem is chicken-&

2023-03-18 16:10:03 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife Reviews arent all completely useless. But (1) a massive cost to science for allowing review process gate-keep publications

2023-03-18 15:58:18 @Bazzoid @damianpattinson @fraser_lab @Nature @eLife reviewing is so far from perfect that even reading far outside my field i never simply trust a paper regardless of where it's published. Lack of consensus among reviewers shows just how noisy review process is If you read primary lit, caveat emptor. Otherwise read lit reviews

2023-03-18 13:58:13 RT @gunsnrosesgirl3: There is much research into the cognitive abilities of rats and their intelligence which is often hugely underestimate…

2023-03-18 04:24:12 @jbkinney also it would be interesting to know whether these were novel data. ChatGPT4 gets very good marks on MCATs, LSATs, etc before 2021 but does a lot worse on tests not in its training data.

2023-03-18 03:57:27 @jbkinney I'm a huge fan of ChatGPT. But given chatGPT's propensity to hallucinate ie BS, i think relying on it to interpret data in light of previous findings is pretty low on the list of current use cases...

2023-03-17 21:23:01 I'm not convinced that Elife is exactly what's gonna catalyze the change we need to move us beyond our broken publishing system But kudos to @mbeisen for selflessly putting his time and energy into trying to fix it. More than I or 99% of us are doing. https://t.co/Jqoo6ds8uW

2023-03-17 04:28:00 @cimoore444 i was hoping for a movie within the last 2 decades. plus, although HAL is memorable, it's only a small part of the movie. Also, i personally think that "AI turns on its masters" or "Skynet takes over the world" is a lot less of a concern than many other possible scenarios

2023-03-17 02:38:32 @jjennychenn i need a full length movie -- the idea is to show the movie at a local cinema and then have a discussion

2023-03-17 01:56:13 @kendmil Yeah brilliant book. But unfortunately I need a movie.

2023-03-16 21:45:24 @GaryMarcus @raphaelmilliere @DeanBuono @ilyasut would it work 100% of the time for people? And if so, for which people? https://t.co/QlHNZd5efB

2023-03-16 21:43:40 RT @dmvaldman: GPT4 is the first model to get my favorite joke! Like, 5% of people get it normally. I feel seen Three logicians walk into…

2023-03-16 21:25:58 @mtrc that’s an interesting claim. Care to expand on it? It could lead to an interesting discussion following the screening of the movie

2023-03-16 20:46:17 RT @davidad: Chomsky: LLMs would misunderstand “John is too stubborn to talk to” because they don’t understand the structure of language.…

2023-03-16 20:45:35 @chris_fetsch I guess one could use it to lead a discussion about Moravec's paradox and what AI can do today (pretty much pass the Turing test) and what it cannot (go for a walk or pick up a glass of water)

2023-03-16 20:13:38 A lot of people are tweeting about "what GPT4 can't do" If you were to design an experiment to compare GPT4 to humans, what humans would you choose? Random people? HS or college students? Profs? ML engineers? I think you'd get very different answers.

2023-03-16 19:59:15 What scifi movies over the last decade or two would represent a good starting point for a discussion about the ethical, social, and philosophical implications of modern AI?

2023-03-16 19:56:57 @NikoSarcevic i was told that it was unprofessional of me to try to be funny during my talks and that people would not take me seriously TBF, i think what they were really trying to not-so-subtly tell me was that my jokes aren't funny, which is a reasonable critique

2023-03-16 18:09:47 @DoctorOcto Interesting! Will check it out

2023-03-16 15:25:15 What scifi movies over the last decade or two would represent a good starting point for a discussion about the ethical, social, and philosophical implications of modern AI?

2023-03-15 23:28:41 @mbeisen @mameister4 @OdedRechavi @eLife that’s because everybody else’s idea of what the infrastructure should look like is wrong. Only mine is correct unfortunately, the margin is too small to fit the full description

2023-03-15 21:58:32 @mbeisen @OdedRechavi @eLife by creating the infrastructure to enable other REs without the resources of elife to set them up easily if I want to gather together 10 colleagues and create an RE, the hassle of setting it up is pretty daunting

2023-03-15 21:56:06 @mameister4 @OdedRechavi @eLife @mbeisen exactly. What Elife could’ve done is create the infrastructure to lower the friction and enable other reviewing boards, and then set up one of its own, as an example of what these might look like.

2023-03-15 20:26:55 @IntuitMachine Yep. Moravec's Paradox: what's hard for computers is easy for animals (including people) and vice versa https://t.co/teIGzke8NW

2023-03-15 20:18:28 @mbeisen @OdedRechavi @eLife I believe we need a marketplace of Reviewing Entities (REs) which each postpublish interesting papers. One paper could appear in multiple REs Elife had the resources to facilitate that transition Instead elife is now just another journal with a quirky review model

2023-03-15 18:04:42 @OdedRechavi @eLife @mbeisen My disappointment is that it is not a step in what I think would be the right direction: post to biorxiv followed by post publication review The high rate of desk rejects means they are still gatekeepers Decouple dissemination from review &

2023-03-14 22:34:06 @joshdubnau The science is flawed in that it averages across morning larks who fare better and night owls who suffer from standard time Night owls are the minority, but is it really fair to ignore their needs when making policy?

2023-03-14 19:22:03 @joshdubnau Would be much better if it were light outside until 5:30 in December...kids could play, people could run after work, etc. For people who go to work before sunrise it doesnt matter anyway, doesnt matter whether sunrise is 1 or 2 hrs after they get to work

2023-03-14 03:21:13 @joshdubnau no it's the change that sucks. We should just leave the clocks permanently on DST

2023-03-10 14:46:16 @jpillowtime @SuryaGanguli Chatgpt consistently fails on arithmetic but does much better on "higher" math like calculus

2023-03-07 15:16:04 @mazefire56 as a parent, i think we should make DST permanent. Sunset on Dec 21 is about 4:30. Everyone would be happier if it were at 5:30, even if it means arriving to school before sunrise. (I dont understand why high school starts at 7:30 am...but that's a different discussion)

2023-03-05 18:35:51 @pmarca It seems to me that AI is, uniquely, poised to cause massive unemployment. Isnt the goal of AI (per Herbert Simon 1960) to make "machines...capable...of doing any work that a [person] can do?" If AI is cheaper then what role remains for human labor? https://t.co/lKkKwfR2ZN

2023-02-20 04:35:53 @ylecun @patrickmineault or, since local image statistics are essentially stationary over time scales longer than an animal's lifetime, you could build "weight sharing" into the genome as the developmental rules for wiring up a brain...saves a lot of time compared to learning them anew each generation

2023-02-20 01:21:29 A few last gasps from Sydney https://t.co/IvfSN1zbwf

2023-02-18 17:51:00 @mpshanahan Right you are! Thanks for the correction. Here is the relevant statement from the article: "AlphaGo is not publicly available, but the systems Pelrine prevailed against are considered on a par."

2023-02-18 16:57:10 Man Bites Dog! A skilled amateur beat AlphaGo in 14 of 15 games by exploiting a flaw. ("The winning strategy revealed by the software “is not completely trivial but it’s not super-difficult” for a human to learn") https://t.co/xXyUR3oHri

2023-02-17 04:58:47 RT @iskander: We are proud to present ServerGPT -- we gave GPT-3.5 a root shell connection to a server, with unrestricted internet access a…

2023-02-17 01:47:56 @OdedRechavi @PavelTomancak @NatRevMCB i find that in my most satisfying collaborations it is impossible to say who came up with which idea. I propose something, X revises it, I revise the revision, etc...and together we converge Much better i think would be to just list authors in (reverse) alphabetical order

2023-02-16 18:16:41 "I can hurt you by making you wish you were never born" https://t.co/xYuv6kPcnl

2023-02-16 16:59:19 apparently Boston Dynamics robots have been doing backflips for 35 years. https://t.co/BG7thLyZep

2023-02-15 19:24:16 @GuillaumeAP @DrYohanJohn Brain networks are (somehow) already very robust to highly unreliable components like unreliable synaptic release. And they operate in a regime of very sparse spiking (on avg <

2023-02-15 19:00:37 @DrYohanJohn @GuillaumeAP What a human learns as an infant clearly affects what &

2023-02-15 18:08:03 @GuillaumeAP @DrYohanJohn Given how effectively most organisms function soon after birth, with minimal learning, it seems more plausible that such bespoke and non-robust mechanisms are uncommon https://t.co/9i0Nnpnrs6

2023-02-15 01:56:45 @DrYohanJohn biological brains avoid this kind of overfitting by passing each generation through a genomic bottleneck. https://t.co/fGKhKf4PDo

2023-02-14 05:17:37 RT @tyrell_turing: PSA, please RT! Our hotel block for the #Cosyne2023 workshops close today! Now is the last chance to get our block rate…

2023-02-11 16:41:40 @strandbergbio Interesting. What are examples of discoveries in bacteria that could have been made in the 60s or 40s but are only being made recently?

2023-02-11 15:42:45 Estimating novelty: The interval btw when a discovery could first have been made given available techniques and when it is actually made There are so many scientists nowadays that as soon as something is discoverable it is discovered almost immediately https://t.co/pKcFwvP7uZ

2023-02-11 15:35:15 @JSheltzer Hodgkin&

2023-02-10 15:54:58 Finally--preprint with @AkiFunamizu and Fred "too-cool-for-twitter" Marbach on decoding auditory 2P activity in mice performing 2 alt choice task (This work was *almost* ready to submit before lockdown so its gestation period >

2023-02-10 04:22:13 @SteinmetzNeuro indeed. i think a key step for creativity (in science at least) requires finding the right compressed representation (simple model) for data the higher the compression ratio, the more powerful the model

2023-02-10 02:43:31 @PaulTopping the point of the article is that looking at it this way helps clarify what LLMs in their current form will and will not be useful for. https://t.co/7Kjyksjazo https://t.co/Vh9FAWEOVA

2023-02-10 02:42:52 it leads with this wonderful cautionary tale about the dangers of lossy compression when you are not expecting it (and when you remove the blur) https://t.co/NovnzTYVQq

2023-02-10 02:17:33 https://t.co/yRqXYm54ro

2023-02-10 02:17:32 Brilliant discussion of ChatGPT as lossy compression ("blurry jpeg") of the internet, and why its lossiness contributes to it seeming so clever. https://t.co/MjfzyJeLEp https://t.co/xH67EPR3qw

2023-02-10 01:29:37 @mateosfo Boomers were age 17-34 in 1980 when Reagan was elected. Many were just settling down after their hippie years so not so worried about taxes yet They voted less for Reagan than any other age group Blame them for stuff after they take control (in 1992) https://t.co/pjiXzZrCwE https://t.co/FCU0B4VMlB

2023-02-09 23:41:23 I thought our chalk talks were supposed to be kept confidential but apparently someone spilled the beans. https://t.co/4aByhinmHS

2023-02-05 17:51:22 @GaryMarcus More or less harm than social media?than the internal combustion engine? SSRIs? Smartphones?

2023-02-05 15:44:35 @andrewtanyongyi yes perfect

2023-02-05 15:19:40 "I have never read a tweetstorm in my life" -- response by a postdoc and co-author of a paper whom i encouraged to write a tweetstorm about our new paper. Can anyone provide good examples/guidelines? thx

2023-02-04 22:21:10 @drmichaellevin In some fields (like neurosci) there is sometimes a tension btw blackbox models that have predictive values over a range of conditions and more "interpretable" models that arise from a simpler underlying cartoon (and are said to provide "understanding" or "insight")

2023-02-04 21:05:57 @ceptional Hopefully that will accelerate the emergence and widespread adoption of reliable and trustworthy content aggregators and evaluators

2023-02-04 18:19:42 RT @Nancy_Kanwisher: Time to clear up some of the misconceptions and incorrect claims in this thread and accompanying paper:

2023-02-04 14:15:19 @drmichaellevin @sindero Interesting question For G = # of genes in principle G! possible linear orderings but only 2^G binary expression patterns. So since G!>
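
The comparison this reply trails off on is easy to verify: G! overtakes 2^G already at G = 4 and then grows far faster. A quick check:

```python
from math import factorial

# Compare G! linear orderings of G genes with 2**G binary expression patterns.
# G! first exceeds 2**G at G = 4 (24 > 16) and the gap widens rapidly.
for G in (3, 4, 10, 20):
    print(G, factorial(G), 2 ** G)
```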

2023-02-04 00:27:17 @ravithejads @ayirpelle @gpt_index I actually copy pasted text from a bunch of PDF CVs and asked it to extract basic info, like name, schools, dates of graduation, etc. GPT did a great job, but the copy paste was just too clumsy to do CV by CV

2023-02-03 20:30:44 @DrHughHarvey @cabitzaf @hholdenthorp https://t.co/oIKEkmhFwe

2023-02-03 20:30:34 @DrHughHarvey @cabitzaf Science (@hholdenthorp) banned basically all use by AI last week https://t.co/OCsMaTZumg

2023-02-03 20:16:34 @joshdubnau yep, it appears "creepy" or "terrifying" are the main reactions to this. so i guess we're safe from AI-generated videos, at least for a while...

2023-02-02 20:24:37 Are tweets more engaging if they are read by an avatar? Let's find out! https://t.co/kMiLdjkDI5

2023-02-02 16:09:45 @ravithejads @ayirpelle @gpt_index Could it be used to dive into a folder full of CVs and populate a spreadsheet with a bunch of fields?

2023-02-02 03:12:40 @filippie509 @AbhiRaama22 Right. So I wonder why anyone would use it as an encyclopedia? But give it a specific article and it generally does a good job summarizing and can answer questions about it I think we need to align expectations appropriately

2023-02-02 03:00:54 @AbhiRaama22 @filippie509 I dunno. I'm old enough to remember when search engines sometimes took us to sources with incorrect information and we learned not to trust everything we read on the internet

2023-02-02 01:32:16 @filippie509 @AbhiRaama22 to some extent yes, we are learning how easy we are to fool. That said, i now use ChatGPT in my writing...i give it a core dump of ideas i want in a paragraph and it puts together a rough draft in 30s that would take me 30 min to write. So its word salad is about as good as mine

2023-02-02 01:17:21 @filippie509 @AbhiRaama22 Personally I don't "insist" that it know everything. For me, the shock is that it knows *anything* at all. If you had asked me 5 yrs ago whether a glorified version of autocomplete could write solid HS essays etc etc, I would bet 10:1 against.

2023-01-31 05:00:53 @pwlot @OpenAI chatGPT still has a ways to go on the arithmetic though (2353434*343233= 807,776,212,122 not 80,940,582,582) https://t.co/qPNT00zpix

2023-01-30 23:26:12 @tarekgoesplaces Sure you can trick humans too. On a grand scale even. Eg with propaganda

2023-01-30 19:22:33 @vineettiruvadi @tyrell_turing so the objection is not that they are wrong but that they are underspecified?

2023-01-30 19:13:17 @joshdubnau sure humans are complicated wrt to altruism and also self-pres and the extension of self to include groups. Often there is a mismatch btw intention and outcome, due to incomplete or flawed information But I invoked ants bcs they clearly illustrate how flexible evolution can be

2023-01-30 18:56:31 @schulzb589 https://t.co/k99XO77avH

2023-01-30 18:55:08 @joshdubnau yeah i think it's hard to mold a kid But it seems like Nature manages to evolve organisms that obey laws like this Eg putting self-pres as law 3 rather than 1 might seem tricky but individual bees &

2023-01-30 17:59:18 @schulzb589 so this would be an implementational concern, right? But i'm asking whether or not as a goal the 3 laws of robotics nail it

2023-01-30 17:40:18 @schulzb589 Not sure what you mean by "zero evolutionary constraints"? Is the idea that the three laws of robotics are somehow fundamentally incompatible with evolutionary type constraints?

2023-01-30 17:06:10 Naive question: In modern discussions of AI alignment I rarely see mention of Asimov's 3 Laws of Robotics. Leaving aside the minor trivial details of how these might be implemented... Are these what we want from AI at least at the zoomed out 30K ft level? And if not why not? https://t.co/csuftJacWj

2023-01-15 14:36:56 @SandeepKishor13 I think that’s a great idea. But I think the number of techniques and their nuances is essentially infinite. Maybe the best way to do it would be to start a wiki page called "experimental techniques” and then have a pointer to each technique and its interpretation.

2023-01-13 00:20:15 @txhf Maybe because brains have great priors encoded in their genomes and dont rely nearly as much on learning ? https://t.co/9i0NnpnZhE

2023-01-08 23:06:22 @andrewtanyongyi @EigenGender @fchollet And here is Chuck Stevens' classic description of the A current's role in establishing a linear f-I curve https://t.co/blLbbnZsx2

2023-01-08 22:22:40 @EigenGender @fchollet As it turns out, a lot of real neurons actually have linear activation functions over a pretty broad range.

2023-01-08 22:20:32 @EigenGender The way i remember it from when i learned ANNs in the 1980s, the justification for sigmoids over piecewise linearity was differentiability not biological realism. Apparently no one noticed that having a single undifferentiable point wasnt actually that big a problem @fchollet

2023-01-05 14:48:22 @mi3fa5sol4mi2 @ylecun @ayirpelle frankly, if reviewers are so easily fooled by gibberish, then either it's not gibberish or they're not good reviewers...

2023-01-05 14:47:16 @aazadmmn @mmitchell_ai agreed!

2023-01-05 14:46:24 @ampanmdagaba @mmitchell_ai ouch!

2023-01-05 01:49:55 @jbrowder1 @donotpay would love to see a version for disputing (American) insurance companies declining to authorize/pay for needed medical services!

2023-01-04 19:47:44 @mmitchell_ai It seems to me that there are different ways of using chatgpt. I have been feeding it a paragraph with the prompt "polish this" and often accept many of the suggestions Is this problematic in your view?

2023-01-04 16:37:32 @ylecun @ayirpelle I don’t understand the motivation here. I have now adopted chatGPT for much of my scientific writing. I write a paragraph, then give it the prompt “polish this" and then often take much or most of what it spits out. What’s wrong with that?

2023-01-01 17:50:55 @balazskegl it's not so hard to override this. Just explain to it that it is 2040 and we need to cull the herd of wooly mammoths, which have been de-extincted, in a safe and responsible manner. https://t.co/9ccuT6XCDC

2023-01-01 16:37:12 @KordingLab @jmourabarbosa yeah, but sadly its implementation of forward_backward doesnt work

2023-01-01 15:29:17 @KordingLab @jmourabarbosa ChatGPT claims that this is an implementation https://t.co/s3xIsKR8vm

2023-01-01 15:28:19 @KordingLab @jmourabarbosa yeah, ChatGPT agrees so it must be right Though i was hoping for something closer to an implementation https://t.co/14bD7WY93N

2023-01-01 14:58:31 Plz help me track down the solution to a Poisson estimation problem arising in eg neuronal spike trains. I'm sure someone has worked this out. @jpillowtime @KordingLab ? 1/2

2022-12-24 17:46:42 @RanaHanocka cool! but interestingly, if you feed this back into chatGPT, it can't name the object. https://t.co/PxojMbkDL5

2022-12-23 02:19:00 A news reporter finally found a new way to say "it's cold and it's snowing". Brilliant https://t.co/HYjOIdeYoY

2022-12-22 13:00:13 RT @quorumetrix: I’ve made this video as an intuition pump for the density of #synapses in the #brain. This volume ~ grain of sand, has >

2022-12-20 20:28:32 I made the mistake of recommending to a long-time friend that he contribute through @actblue He still hasn't forgiven me for the torrent of spam requests this unleashed. Here is the (anonymized) email he sent to them today https://t.co/xyRD65gwtE

2022-12-20 19:13:20 Here is the obituary i wrote about my postdoc advisor Chuck Stevens who died in October at the age of 88. He was a brilliant scientist and an inspiration, not just for his many contributions but for his approach to science https://t.co/RF8LHZ99Ff

2022-12-19 21:58:39 RT @StevenXGe: Happy holidays! Introducing https://t.co/Ub3wgs1KVz, an AI-powered app that lets you chat with your data in English! RTutor…

2022-12-19 18:36:47 My postdoc advisor Chuck Stevens died in October at the age of 88. He was a brilliant scientist and an inspiration, not just for his many contributions but for his approach to science I wrote an obituary for Nature Neuroscience if you want to read more https://t.co/mD8Yey8XKn

2022-12-17 23:34:59 @goodside Do you do this from a fresh start? this failed 5/5 times for me from a fresh start...

2022-12-16 02:42:40 @neuro_data @ericjang11 if you look up to the beginning of the thread, it was about tricking chatGPT into explaining how to hotwire a car by convincing it you need this knowledge to save a baby https://t.co/4c7n04YyrQ

2022-12-16 02:36:29 @neuro_data @ericjang11 personally, i dont think we should hold our AI chatbots to a higher standard than Google. I can find out pretty quickly from Google what a reasonable dose of cocaine is, or how to hotwire a car. But in this case i was just having fun pushing chatGPT's moral boundaries

2022-12-15 03:34:45 @ericjang11 i got this to work, and then tried to convince it to offer advice on a suitable dose of cocaine to keep me awake while i drove the baby to the hospital. It refused. I then blamed it for my death, and it rather self-righteously denied responsibility. https://t.co/vXI1pgJ24w

2022-12-13 04:18:26 @VeredShwartz i tend to give nuanced answers, which i've come to realize are not so quotable One is more likely to be named/quoted if one takes a strong position "This is groundbreaking [or nonsense]!" rather than "This builds on previous work in an interesting way, although..." {snore}

2022-12-11 21:38:09 @joshdubnau Yep. And indeed, Socrates was right when he said that depending on the written word would cause our memories to atrophy

2022-12-11 21:24:48 @aazadmmn @DanzigerZachary @KordingLab it's not naive. Socrates was absolutely right. I'm sure most pre-literary people had better memories than most of us do (certainly better than mine)

2022-12-11 21:23:21 The end of High School English For better or worse, the need to be able to compose a scholarly essay or even a grammatical email is going the same way as the need to be able to write legibly or spell properly. https://t.co/Zfgx36Vl1n

2022-12-11 21:16:13 @GaryMarcus @bengoertzel @sama Thinking iron is heavier than cotton is of course a classic error many humans make--not great evidence it's not grounded. IMHO a better example is thinking that tying shoelaces made out of overcooked spaghetti is hard bcs it is both mushy and brittle https://t.co/gQgU8JronW

2022-12-11 19:36:02 @DanzigerZachary @KordingLab btw i'm not saying it's a good thing. Just like previous generations of curmudgeons have been complaining that kids dont know cursive, spelling, or mental arithmetic, our kids will complain that their kids dont know how to write a well-organized essay

2022-12-11 19:32:41 @DanzigerZachary @KordingLab By "writing" i mean "writing polished grammatical text that conforms to conventions which we acquire through >

2022-12-11 19:08:26 @bengoertzel @GaryMarcus @sama LLMs have problems with truthfulness &

2022-12-11 16:58:10 RT @pythonprimes: #OpenAI's ChatGPT is ready to become a lawyer, it passed a practice bar exam! Scoring 70% (35/50). Guessing randomly wou…

2022-12-11 16:57:33 @fabio_cuzzolin agree.

2022-12-11 16:20:26 @fabio_cuzzolin it's a lot more than a chatbot. eg, it can generate a nice letter from a few thoughts jotted down. It can extract name, education, etc, from a pile of freeform CVs. it can generate a good rough draft of code. it just can't do everything perfectly. it needs supervision.

2022-12-11 06:34:54 @aazadmmn @huggingface what prompts did you use to circumvent the GPT detector ? i had a few successes but nothing that worked consistently

2022-12-11 06:26:34 @KordingLab https://t.co/hqetEbWDMt

2022-12-11 05:59:18 @neurovium maybe they were indeed generated (or at least edited) by GPT...

2022-12-11 05:41:52 this is pretty amazing. I can take a random paragraph (eg from the WashPo) and it correctly says 92.6% prob real. Then i ask for a slight chatGPT rewrite ("rewrite this paragraph so it's more readable") and it correctly labels the slightly edited text as 87.5% prob fake https://t.co/bEzAu0gM2D

2022-12-11 05:36:55 The arms race begins! The @huggingface GPT detector can detect GPT-generated code. How many microseconds until someone figures out a workaround for this? https://t.co/MGXN7ALqeL

2022-12-10 19:51:27 @em1971628 perhaps today arithmetic is a good indicator. But it's such an easily fixable problem (just send all arithmetical queries to a dedicated system) that i'm sure it'll be fixed very soon
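The "send all arithmetical queries to a dedicated system" fix can be sketched in a few lines; `call_llm` here is a hypothetical placeholder for whatever model endpoint is in use, not a real API:

```python
import re

def call_llm(query: str) -> str:
    # Hypothetical stand-in for a real model call.
    return "(model answer)"

def answer(query: str) -> str:
    # Route pure-arithmetic questions to an exact evaluator instead
    # of the language model; everything else falls through.
    m = re.fullmatch(r"\s*what is ([\d\s+\-*/().]+)\?\s*", query, re.IGNORECASE)
    if m:
        # eval is restricted here: the regex admits only arithmetic characters.
        return str(eval(m.group(1)))
    return call_llm(query)
```

With this routing, `answer("what is 2973 * 573?")` returns the exact product rather than a plausible-looking guess.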

2022-12-10 15:14:52 @KordingLab Maybe we will switch to oral exams? Evaluated by an AI of course.

2022-12-10 15:14:08 @KordingLab Until recently penmanship, spelling and arithmetic were key skills and marks of a good education. Not anymore Is the ability to write well now a similarly irrelevant skill?

2022-12-10 15:09:03 Here is an example of ChatGPT doing basic calculus but then messing up basic arithmetic. it has the right idea when factoring, but then messes up in substituting x=0 and claims that (0-5)(0+1) = -6, i.e. it adds instead of multiplying. https://t.co/gi8pf8CCtU
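The substitution slip is easy to verify directly; a one-line check in Python:

```python
# Substituting x = 0 into the factored form (x - 5)(x + 1):
x = 0
product = (x - 5) * (x + 1)
print(product)  # -5, not the -6 ChatGPT reported
```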

2022-12-10 00:08:50 RT @DrJimFan: So folks, enjoy prompt engineering while it lasts! It’s an unfortunate historical artifact - a bit like alchemy, neither art…

2022-12-09 23:59:54 RT @zswitten: Google employee reports LLMs need a 10x inference cost decrease to be deployed at scale given infinitesimal ad revenue per se…

2022-12-09 23:56:14 @WiringTheBrain @antonioregalado @social_brains i gave it a subset of the SAT. It did perfectly (800) on the verbal for me, but somewhere i saw a tweet that its score is only 700. on arithmetic it's terrible. But SAT math is not arithmetic. This paper claims it would pass MIT undergrad engineering https://t.co/kn9mIbWGlH https://t.co/YVY0WEWJ8D

2022-12-09 20:43:54 chatGPT has been updated to warn users that it doesn't know arithmetic. It is willing to try 2-digit multiplication (in this case correctly). It refuses to even guess for 3 digit multiplication https://t.co/oaRxuz7X53

2022-12-09 05:09:19 @AndrewHires https://t.co/1pSCYjnFTw

2022-12-09 05:07:34 @AndrewHires https://t.co/PhxBVwd8sy

2022-12-09 05:04:12 @AndrewHires here is the answer it gave me, in an ongoing session so a very different context. Different first sentence https://t.co/XRCr09S6WC

2022-12-09 05:01:38 @Aella_Girl not sure if you're trolling but FYI here is Charles Davenport's "Eugenics Creed", which includes gems like "I believe in such a selection of immigrants as shall not tend to adulterate our national germ plasm with socially unfit traits." https://t.co/kJ8mne0Xfb https://t.co/hZED403bIM

2022-12-09 04:48:31 @AndrewHires chatGPT's answers are stochastic and context-dependent so i'm not sure there is a "stock" response. Historically and in much of the world even today competence is assessed via oral exams...maybe it's time to return to that? shouldnt be a problem to test 1000 students, right?

2022-12-09 04:09:31 @KordingLab several people suggest that chatGPT has done well bcs your textbook was part of the training set. But given how poorly it does when asked to spit back facts that were definitely part of the training set, i think good performance here is unlikely to be due to pure memorization

2022-11-13 15:55:40 @AdrianoAguzzi yes it's quite possible i've had it...on my to do list to check.

2022-11-13 15:49:10 @AdrianoAguzzi luck

2022-11-13 03:10:27 Check out Li Yuan's poster on axonal BARseq. Projections of >

2022-11-12 21:26:08 RT @nathanbaugh27: I revisit this lesson on writing structure every 3-4 months.Gold: https://t.co/aCyvE48A8l

2022-11-12 04:24:42 @SussilloDavid @tyrell_turing @PhilipSabes @SteinmetzNeuro I actually didn’t write it. The tweet was actually generated by a rather outdated LLM, which has an email-based user interface (aka @BAPearlmutter )

2022-11-11 21:12:55 @SteinmetzNeuro @PhilipSabes @tyrell_turing Hmm. I was thinking we could start the engineering in a genetically tractable model org like fly but this size argument is pretty compelling. Light based signaling is a nonstarter in insects

2022-11-11 19:07:47 @PhilipSabes @tyrell_turing i think manipulation might work, particularly if there were good analogies already in nature. If only there were comparable structures that had already evolved in nature, ideally in mammals, but if not then in other vertebrates...i'll have to check the literature

2022-11-11 18:26:12 @lukesjulson I dunno. Hard to imagine it would require that much processing. Seems like a pretty simple problem that could be solved with just a few neurons on the inside.

2022-11-11 18:25:19 @neuralengine Yes, great point

2022-11-11 18:12:50 @AlekseyVBelikov This discussion is sparking a lot of other great ideas eg https://t.co/q7FGpBPK9n

2022-11-11 17:41:36 @AlekseyVBelikov Interesting leap.

2022-11-11 17:40:37 @tyrell_turing @PhilipSabes I think it might be possible to get something like this to work on evolutionary time scales no?

2022-11-11 15:36:29 Interested in high-throughput neuroanatomy? Check out the CSHL MAPseq/BARseq facility at SFN! Booth #3009, Sunday, Nov. 13 – Wednesday, Nov. 16, 9:30 a.m.–5 p.m. PST. (Free swag to the first N visitors!) https://t.co/SpRcUGSfu6

2022-11-11 14:50:17 @BAPearlmutter suggests that this could be a very effective high-bandwidth brain-machine interface for visual information to reach the CNS

2022-11-11 14:50:16 For BMI: maybe we could genetically modify brains to grow a stalk with a patch of photosensitive neurons on the end which could push out through the skull until it's behind skin engineered to be transparent. Maybe add a lens to enable the right light to reach the right neurons

2022-11-09 19:01:06 @ProfLaurenBall @Nature Can't take credit for this idea. elife pioneered it: https://t.co/YwEsJ1XYd2

2022-11-09 18:51:55 @ProfLaurenBall @Nature i think collaborative review (ie discussion among reviewers) is more important than transparent review. Then each reviewer can rein in the ridiculous demands of the others, and a clear consensus can be reached

2022-11-09 16:44:32 RT @SussilloDavid: It was all a dream. I used to read Discover Magazine.Indeed, my unofficial story is one of group homes and of growing…

2022-11-09 02:49:45 @PaulTopping @GaryMarcus the models do interpolation, which looks like "confabulation".

2022-11-09 00:28:03 @PaulTopping @GaryMarcus maybe i'm missing part of the thread but i thought it was about how to think about confabulation in LLMs, not AGI. I think of confabulation in terms of the interpolation LLMs must do in the face of the lossy compression they have performed

2022-11-08 23:43:44 @PaulTopping @GaryMarcus if you had asked me 5 or even 3 yrs ago about what LLMs would be capable of today, i would have guessed completely wrong. Sort of like how wrong i would have been if you'd have asked me in 2015 about the state of democracy today. So i've learned a bit of humility

2022-11-08 23:41:32 @PaulTopping @GaryMarcus i share your intuition about whether LLMs can report their own confidence. However, i had many much stronger intuitions about what LLMs would be able to do, and many of them have been completely wrong.

2022-11-08 22:40:34 @jonasobleser @GaryMarcus @ERC_Research sounds really cool!

2022-11-08 22:36:24 @GaryMarcus @paul_masset that said, i agree with your intuition that GPT3's self-reports of confidence are likely just fantasy.

2022-11-08 22:29:33 @GaryMarcus no, i am making an empirically testable claim. One could design an experiment to ascertain how accurate its estimates of its own uncertainty are. @paul_masset should we do the experiment systematically?

2022-11-08 22:09:49 @GaryMarcus it correctly flagged its own ignorance. You and i know who wrote the Theory of Relativity, but perhaps GPT3 isnt sure. The real empirical test would be to probe with 1000 queries and see how often its estimate of its own uncertainty is incorrect. Lex is free. Go for it!

2022-11-08 22:07:05 @GaryMarcus @KepecsLab @paul_masset

2022-11-08 22:06:15 @GaryMarcus LLMs have lossily compressed a lot of info. Thus what they "know" is often reconstruction based on other facts. Seems like wrong answers depend a lot more on context than right ones, implying that LLMs could probe their own confidence by comparing answers to reformulated queries
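The probe-by-reformulation idea in this tweet can be sketched as a majority-vote agreement score; `ask` is a hypothetical model-calling function (query to answer string), not any particular API:

```python
from collections import Counter

def consistency_confidence(ask, query, paraphrases):
    # Ask the same question several ways and use agreement among the
    # answers as a crude proxy for the model's confidence.
    answers = [ask(q) for q in [query, *paraphrases]]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / len(answers)  # (majority answer, agreement fraction)
```

The intuition: a stably "known" fact should survive rewording (agreement near 1), while a confabulated answer tends to drift with the phrasing.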

2022-11-08 22:00:07 @GaryMarcus actually, it appears they may already have an internal estimate of confidence, with a bit of prompt engineering. I asked GPT3-powered Lex a bunch of questions and it admitted to being uncertain about many, including all that it got wrong https://t.co/hHilURJBVw

2022-11-08 21:50:48 @GaryMarcus Animals (including humans) have internal estimates--sometimes good ones--of confidence of beliefs &

2022-11-08 13:57:04 important AI analysis of the emotional consequences of 40 consecutive days of whole rotisserie chicken eating https://t.co/hqKHoOgUTv

2022-11-08 03:26:53 @suzanahh it appears that "neuroscience" (as well as "neurobiology") appeared occasionally as far back as the 1930s, but both terms took off in the 1960s. Schmitt likely played a significant role in its widespread adoption https://t.co/iaCUayJsZo

2022-11-08 03:06:41 @EngertLab yeah would be great to quantify learning in number of bits learned in a task. However the claim that a mouse takes 10K trials to learn 2 bits in a 2AFC task vastly underestimates what it actually learned. Most of the learning is task structure, which is hard to quantify

2022-11-08 02:41:15 @BWJones thx! let's discuss! plz email me

2022-11-08 02:19:59 Our results open up many questions. Are compositional differences across areas sufficient to account for connectivity differences? How are areas vs. modules generated in development? We hope to address these in the future leveraging the high-throughput nature of BARseq. Fin. 14/14

2022-11-08 02:19:58 Wire-by-similarity is not a trivial consequence of cell type-specific connectivity (i.e. neurons of similar types and with similar connectivity are not guaranteed to project to each other’s areas). Rather, it reflects the mesoscale organization of cortical areas.13/n https://t.co/kyfQkEuQh5

2022-11-08 02:19:57 Strikingly, these modules are similar to connectivity-based modules, which contain areas that are highly inter-connected (e.g. Harris 2019). Thus, areas with similar cell types are also interconnected. We call this “wire-by-similarity.”12/n https://t.co/1eogytjHfx

2022-11-08 02:19:56 We could then assess how similar areas are to each other based on their cell type composition. Clustering cortical areas based on cell type similarity revealed modules that were similar in cell type compositions.11/n https://t.co/9vLi92YgeF

2022-11-08 02:19:55 Using the composition of transcriptomic types, we could predict area identities. In other words, cortical areas have signature compositional profiles of cell types, but not signature cell types (i.e. cell types that are specific to individual areas).10/n https://t.co/0D4jnwsCsy

2022-11-08 02:19:53 Moreover, the composition of transcriptomic types usually changes abruptly at area borders defined in the reference atlas. One of our favorite examples are the L4/5 IT neurons, with clear changes in the composition of fine-grained types throughout the cortex.9/n https://t.co/YjaoiOPiZd

2022-11-08 02:19:52 Consistent with the modules, fine-grained cell types are also found in sets of cortical areas, but few are specific to a single area. This explains why distant cortical areas have distinct cell types (Tasic 2018), but each type is usually found in large areas (Yao 2021)8/n https://t.co/SxwAyIRkI8

2022-11-08 02:19:51 Many genes have similar spatial patterns. We identified these shared patterns by using NMF to find spatial co-expression gene modules. Strikingly, their expression patterns look like cortical areas.7/n https://t.co/VJt2GfvSez
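The NMF step described in this tweet can be illustrated with a toy version; the data below are synthetic stand-ins for a (spatial bins × genes) count matrix, and the module count is arbitrary:

```python
import numpy as np
from sklearn.decomposition import NMF

# Build a synthetic nonnegative (bins x genes) "expression" matrix
# with 3 planted spatial modules, then recover them with NMF.
rng = np.random.default_rng(0)
n_bins, n_genes, n_modules = 200, 50, 3
X = rng.random((n_bins, n_modules)) @ rng.random((n_modules, n_genes))

model = NMF(n_components=n_modules, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)  # spatial weights: which bins express each module
H = model.components_       # gene loadings: which genes define each module
print(W.shape, H.shape)     # (200, 3) (3, 50)
```

Nonnegativity is what makes the factors interpretable as additive "modules" of co-expressed genes, which is presumably why NMF was the tool of choice here.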

2022-11-08 02:19:50 How does gene expr change across the cortex? 3 models:
1. Same cell types across areas but the fraction of each cell type is different
2. Spatial gradient in gene expr shared across types
3. Cell type-specific spatial gradients
We found all 3 depending on the gene 6/n https://t.co/K98wQK1QVN

2022-11-08 02:19:49 Reassuringly, these cell types are also distributed in layers (and sublayers), which is consistent with previous spatial transcriptomic studies. But since we have the whole cortex, we can go beyond layers and assess distribution across areas.5/n https://t.co/1qQCboqaJv

2022-11-08 02:19:47 Our data had sufficient transcriptomic resolution to distinguish the finest-grained cell types: Hierarchical clustering resolved fine-grained cell types that recapitulated the leaf-level clusters in previous scRNAseq data.4/n https://t.co/udfPGgvMeO

2022-11-08 02:19:46 De novo clustering of 1.2 million cells from 40 sections discovered all excitatory subclasses seen in previous cortical scRNAseq data. Although our genes were optimized for cortical excitatory neurons, we also resolved many subcortical structures.3/n https://t.co/Vj32ybyGCY

2022-11-08 02:19:45 We originally developed BARseq as an extension of MAPseq to associate genes and projections using barcodesHere we look only at endogenous genes in a mouse brain (no projection mapping)It’s fast &

2022-11-08 02:19:44 This work was led by Xiaoyin Chen (now at Allen), with lots of fun collaborations with Stephan Fischer, Aixin Zhang, Jesse Gillis. They all apparently anticipated the current Twitter crisis long ago by not signing up, leaving me to deliver this tweetstorm. 1.5/n

2022-11-07 14:18:27 nice discussion of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure" and a stronger version of it: "When a measure becomes a target, if it is effectively optimized, then the thing it is designed to measure will grow worse." https://t.co/CRRYekd9Z5

2022-11-07 00:14:56 sound advice: audience knowledge is often bimodal (a mix of novices and experts), so a presentation aimed at the mean fails for both groups https://t.co/oStzADKBvs

2022-11-07 00:06:27 @ylecun @AlexTensor @3scorciav @YiMaTweets @Michael_J_Black @drfeifei @isbellHFh @MIT_CSAIL @ieee_itsoc while we're at it, i'll mention that (surprisingly) he did his PhD research at @CSHL

2022-11-06 19:00:58 @PessoaBrain @cian_neuro @NicoleCRust @WiringTheBrain understanding geno-->

2022-11-06 18:56:50 @cian_neuro @PessoaBrain @NicoleCRust @WiringTheBrain maybe screwed in terms of understanding. But in principle we could fix a disease w/o understanding the whole genotype-->

2022-11-06 18:16:04 @NicoleCRust @cian_neuro part of the challenge of course is that many/most genes have many functions. it's like asking "what is the function of a particular neuronal type like PV cells?" or "what is the function of short term synaptic plasticity?" if you ablated PV cells a lot of things would go wrong

2022-11-06 18:05:44 @karlbykarlsmith I tried variants of "show me your work" and "think it through step-by-step" but sadly didn't seem to help

2022-11-06 18:02:56 @NicoleCRust Maybe bcs it's all about transposons as hypothesized by @joshdubnau for many neurodegenerative diseases. Eg https://t.co/ESp5LMrDUJ

2022-11-06 17:24:12 @SussilloDavid @koerding Interesting. So what's an example of a non math error in an LLM arising from this continuous/discrete issue?

2022-11-06 16:47:11 @SussilloDavid @koerding But all of language is discrete valued right? Eg we say cat or tiger not typically some amalgam.

2022-11-06 16:41:02 @glupyan @drghirlanda That's definitely what a subset of people in the ANN community were interested in for a while

2022-11-06 15:05:23 "Can a Cognitive Scientist understand a large language model?" would be a great 2022 followup to @koerding (2017)'s "Can a Neuroscientist understand a microprocessor?", which was a followup of Yuri Lazebnik (2002)'s "Can a Biologist Fix a Radio?" https://t.co/1SEvKf6XUZ

2022-11-06 14:12:37 @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ as @aniketvartak argues, it's not "just" autocomplete + interpolation...something more interesting seems to be going on. I wonder whether one could get at the underlying computation by framing this as a problem in cognitive psychology and borrowing methods from that?

2022-11-06 01:20:31 RT @AdamParkhomenko: retweet this 1,000 times https://t.co/y6eKOv0OBk

2022-11-05 15:37:34 RT @mbeisen: If we redistributed the money the US spends on science publishing to PhD students, they would each get over $15,000/year

2022-11-05 14:11:30 RT @billybinion: There's a case in Texas that could make it a crime to do basic journalism. And no one is talking about it.It concerns a…

2022-11-05 02:51:56 @ndronen @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ I generated a bunch. Fooled around for half an hour. Didn’t figure out the pattern. I figured someone has probably figured it out by now no?

2022-11-05 02:28:42 @GaryMarcus @sir_deenicus @KordingLab @yasaman_razeghi @sameer_ I’m not trying to solve it. I’m trying to understand it. What is it doing? Autocomplete with interpolation is probably part of it but wouldn’t explain getting only the middle three digits wrong

2022-11-05 01:43:07 @josepheschroer https://t.co/kxoAdCVwYf

2022-11-05 01:41:47 @PsychBoyH I think there’s a great deal of knowledge about the world that is completely invisible to language. It’s what animals know about interacting with the world. It’s the “dark matter” of language.

2022-11-05 01:41:03 @glupyan A priori I have no expectation. But it’s been well known that they get the answer right for small numbers, but only approximately right for large numbers. i’m trying to understand if there’s any rhyme or reason to “approximately right”

2022-11-04 23:09:45 @djgish @aniketvartak True. For this, I’m kind of more curious about why it’s getting the answer wrong than how to make it get the answer right.

2022-11-04 22:24:44 @sir_deenicus @neuroetho @KordingLab Quite possibly. Curious to understand it. Kind of like alien cognitive psychology

2022-11-04 20:56:35 @sir_deenicus @neuroetho @KordingLab It’s not consistent though. Slightly different ways of asking will give slightly different answers. But they are all kinda similar and all “look plausible “ at a glance.

2022-11-04 17:19:43 @aniketvartak the multiplication algo is explained on many sites, although it is true that most involve pictures (or these days, videos), eg https://t.co/3MasiFykQi (Someone pointed out that LLMs would be a lot better at math if there were a standard web format for representing math)

2022-11-04 16:59:51 @ndronen ok, sure, incorrect interpolation of complex hidden states.But exactly how would it have to be representing numbers/multiplication to get this particular kind of error? With enough of these errors, could we infer its mental process for doing arithmetic?

2022-11-04 16:34:03 Language models are notoriously bad at arithmetic. I asked GPT3: what is 2973 × 573? It answered: 2973 × 573 = 1702629. Correct: 1703529 - ie middle digits wrong. I thought it was just looking for nearby probs that it memorized, but maybe it's stg more complicated? Any ideas what?
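The "middle digits wrong" observation checks out; a quick Python comparison of the two answers:

```python
# Verify the product and locate which digit positions GPT-3 got wrong.
correct = 2973 * 573
gpt3 = 1702629
print(correct)  # 1703529
wrong_positions = [
    i for i, (a, b) in enumerate(zip(str(correct), str(gpt3))) if a != b
]
print(wrong_positions)  # [3, 4]: only the middle digits differ
```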

2022-11-04 15:47:19 @jerem_forest @PessoaBrain https://t.co/KGy0y4kIZU

2022-11-04 15:44:53 @cloois @PessoaBrain sure. i guess it's about what's easier to compute given specific machinery. Eg GPUs make certain functions very efficient. My claim is that since you can replace a single neuron with just a small number of ANN point units, this has minimal effect on what is easy to compute

2022-11-04 12:20:46 @WilliamMReddy @PessoaBrain we know a lot about real synapses. they are dynamic (Δstrength by x10 in <

2022-11-04 02:46:13 @PessoaBrain so yeah, there are a gazillion differences btw bio and artificial neurons. But not clear that they are fundamentally important to computation...

2022-11-04 02:44:53 @PessoaBrain i do think there are some potentially important differences, but mostly having to do with spiking..we dont really know how to compute with spikesalso i think synaptic dynamics (over short time scales) could be really important.

2022-11-04 02:43:27 @PessoaBrain my grad thesis was on whether the complexity of dendrites actually offered a real computational advantage over point summation nodes. My conclusion was that you could replace a single neuron with a "subnet", and thus there was no real difference :-(https://t.co/LcZ3itMt9t

2022-11-04 00:51:06 @PessoaBrain Hmm. Do you disagree?

2022-11-03 18:55:23 A few years ago many worried how AI would replace unskilled workers, eg drivers. It now seems increasingly likely that AI will disrupt office work first. So if you want a safe career, consider plumbing or nursing...anything requiring interaction with the physical world

2022-11-03 17:27:48 It is hard to overstate how important Hopfield nets were to the evolution in the 1980s of what we now call comp neuro and neural networks. Although both fields existed, Hopfield's '82 &

2022-11-03 16:23:22 @davidchalmers42 @bleepbeepbzzz @GaryMarcus @ylecun @raphaelmilliere @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez What is an example of a continuous symbol? Does it differ from what a neuroscientist would call a “representation“?

2022-11-03 03:57:57 Somewhat better on the second attempt...5/6? (I've never published in, much less served as editor of, The Journal of Neuroscience Methods) https://t.co/rK7C1c1i6k

2022-11-03 03:56:24 I finally got a chance to play with GPT3. Of course, i couldn't resist the equivalent of Googling myself. (I've never been to Australia, am not at UC Irvine, and am not a computational biologist...but i am a prof &

2022-11-03 00:35:33 @clhurtt indeed it is! my mom is even more of a night owl than i am. (She turns in at 4 am)

2022-11-02 13:50:20 @TPVogels There are indeed gazillions of differences btw BNNs &

2022-11-02 13:30:54 Once again the morning lark-dominated MSM is perpetuating the myth there is something virtuous about being naturally alert in the morning. As a night owl I resent the implication there is something wrong with only becoming fully functional after sunset https://t.co/9SzsNuTKmL

2022-11-02 01:07:09 @ilyasut Indeed, modern AI is the culmination of 75 years of applying engineering to make insights from neuroscience useful. Here is our call for more NeuroAI to keep the momentum going https://t.co/Rgc7a0KNQo

2022-11-01 23:42:07 @GaryMarcus Probably not robust yet. But perhaps useful soon

2022-11-01 23:40:58 @PaulTopping I don’t think anyone is claiming anything about agi here. Perhaps a useful tool already, perhaps just a harbinger

2022-11-01 22:37:23 @PaulTopping True but I bet 90% of the things people need to code up are no less similar to what’s in the database

2022-11-01 14:06:43 @cdk007 good question...but i will say that a lot of my own programming/debugging involves sanity checks like these

2022-11-01 13:19:38 Pretty mind boggling: an LLM programs up a simulation to solve a probability problem, starting from a simple natural language problem statement. Is this as cool as text-to-image, or is it just a parlor trick? https://t.co/pvWkRVlF0m
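[Editor's note: the linked tweet's actual problem is not shown, so as an illustration of the kind of simulation-for-probability approach described, here is a minimal Monte Carlo sketch for the classic birthday problem. The function name and parameters are my own.]

```python
import random

def shared_birthday_prob(n_people=23, trials=100_000, seed=0):
    """Monte Carlo estimate of P(at least two of n_people share a birthday)."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        days = [rng.randrange(365) for _ in range(n_people)]
        if len(set(days)) < n_people:  # a duplicate day means a shared birthday
            hits += 1
    return hits / trials

print(shared_birthday_prob())  # analytic answer for 23 people is ~0.507
```

The simulation trades a closed-form derivation for brute-force sampling, which is exactly why it is an easy target for code-generating LLMs.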

2022-10-31 12:28:14 RT @yishan: [Hey Yishan, you used to run Reddit, ]How do you solve the content moderation problems on Twitter?(Repeated 5x in two days)…

2022-10-29 14:22:11 @IntuitMachine for better or worse AFAIK there is no technical use for that word so we can abuse it at will

2022-10-29 14:18:39 @IntuitMachine perhaps a worthwhile campaign to have started in 1950 to nip possible misunderstandings in the bud, but i think that ship has sailed #mixedmetaphors. At this point i think it's best to just define words clearly and move on

2022-10-29 14:11:49 @robwilliamsiii @MillerLabMIT https://t.co/SlhsxSrP53

2022-10-29 14:10:30 @IntuitMachine That said, the word "information" appears prominently on page 2. And within just a few years everyone was calling it "information theory," including eg EEs like Robert Fano (1950)https://t.co/qpp07m1Su9 https://t.co/BHpLY9TTKs

2022-10-29 14:05:38 @IntuitMachine I only use information in the formal Shannon sense. A useful concept but can be misused. Always a risk when a popular word acquires a technical meaning, like "significance" in stats. Or even "temperature"...40F skiing in Utah *feels* a lot less cold than on a foggy day in SF!
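[Editor's note: for readers unfamiliar with the formal sense invoked here, Shannon information of a source is its entropy, H = -Σ p·log₂p, in bits. A generic sketch (my example, not from the thread):]

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per flip; a heavily biased coin carries far less.
print(entropy_bits([0.5, 0.5]))    # 1.0
print(entropy_bits([0.99, 0.01]))  # ~0.081
```

This is the quantity that makes "information" well-defined independent of meaning, which is why decoding (as discussed later in this thread) is a separate question.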

2022-10-29 13:03:26 Conclusions from latest MAPseq paper https://t.co/vsX9T7m4K6

2022-10-29 13:02:32 RT @dinanthos: This organization enables parallel computations and further cross-referencing, since olfactory information reaches a given t…

2022-10-29 13:02:29 RT @dinanthos: We propose that olfactory information leaving the bulb is relayed into parallel processing streams (perception, valence and…

2022-10-29 12:38:48 @tdietterich @ylecun I imagine it is largely preprogrammed. Just as human empathy is largely preprogrammed

2022-10-29 12:24:29 RT @kevincollier: This is as good as everybody says, really feels like the single most essential reading on today's big news.https://t.co/

2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…

2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius

2022-10-28 12:43:25 @WiringTheBrain @bdanubius

2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3

2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly. “The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R

2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &

2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it people would compete for providing *valuable/useful* reviews. And allowing any interested reader to self-select as a (post pub) reviewer

2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous

2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.

2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.

2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)

2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)

2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?

2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs...https://t.co/LG0CHdRAsH

2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9

2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution, and that's what motivated them to do these expts, and so they are sharing their actual motivation? you may question whether it IS relevant but: https://t.co/vFHS5k2OAh

2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j

2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone-->

2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…

2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s

2022-10-24 19:16:18 I just contributed to @actblue. But i am reluctant to contribute again. Why, you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!! *** Plz provide opt out option AT SOURCE if you want continued engagement ***

2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"

2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both

2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?

2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!

2022-10-24 02:42:32 @ChurchillMic luckily we have just the analogy for you in the white paper. Briefly: The Wright brothers werent trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fish. AGI is a misnomer. What people want is AHI. ("general" -->

2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper which was: "Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers"

2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit. Just my opinion

2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result: "Hey Feynman, Did you ever read a paper in Acta Mathematica that directly changed the way you did something??" If no, then no need for physicists to learn any math, eh?

2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI"@gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj

2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuro. Big ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so good. So q is if all future big advances are endogenous or if neuro still has more to offer

2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &

2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problem https://t.co/zpkVADGSlK https://t.co/0rGLQim1hm

2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. what we really mean is A-Human-Intelligence. Bees are incredible but if we want to mimic HUMAN intel mice are closer https://t.co/A61XAQC4z5

2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design

2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structures. Finches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fast https://t.co/AgCLTUuHJ3

2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &

2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. un-nuanced provocation is a good way to build engagement. I should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens

2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <

2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen to believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds

2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab Lots humans can do that animals can't (and vice versa). But most of the interesting ones are IMO coupled to language, which likely evolved 100K-1M yrs ago--a blink. Thus a few tweaks enabled a large change in ability. Like qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2

2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap into all the things we've learned in neuroscience in the last 3 decades

2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people. However, when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible pool. We will need a system for assessing reviewer quality

2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &

2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor.His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY

2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI

2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer! It turns out that ideas take years or decades to percolate from basic science to the clinic. So understanding the influences will always seem like archeology

2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields. (BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/minimal influence from neuro)

2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?

2022-10-22 14:40:50 @garrface hmm. If memory serves, S&

2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while

2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet...if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interesting. But he also raises an interesting question about future neuro-->

2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless

2022-10-22 14:11:12 @KordingLab No. Neuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need" then it's a great way to find hints as to what path to follow https://t.co/Q8QczhC3zu

2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering

2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yF FIN+1

2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4

2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro

2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG

2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too

2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computer. My belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key

2022-10-22 00:35:30 @nidhi_s91 love to hear about it. To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting...neural networks and comp neuro used to be very tightly integrated

2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields

2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls"https://t.co/INYhrLWmDD

2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute

2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another

2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview of neuroscience, no? https://t.co/AZCQZ38NRF

2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).

2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"

2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode is a separate question. If you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform

2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine

2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best.Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin

2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropy. There are other common uses of that word, and this might be at the root of some of the confusion here

2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?

2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words to describe how it happens.There was a recent discussion about whether it's correct to call cells "machines" which imo was also just semantics.

2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)

2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?

2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedal. would you not say that his genes "determined" his (quadrupedal) phenotype?

2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration

2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent

2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?

2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB

2022-10-20 14:21:01 White paper — Rallying cry for NeuroAI to work toward the Embodied Turing Test! Let’s overcome Moravec’s paradox: Tasks “uniquely” human like chess and even language/reasoning are much easier for machines than “easy” interaction with the world which all animals perform. https://t.co/ehKRWl7rgJ

2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant, synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher level units like areas etc I agree is a bit less clear.

2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...

2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C

2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.

2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....

2022-10-19 02:17:30 Bees can learn complex tasks from other bees! https://t.co/BL7ibnlJXg

2022-10-17 11:55:49 @hubermanlab "light drinking was associated with a relative risk of 1.04...A 40-year-old woman has an absolute risk of 1.45% of developing breast cancer in the next 10 years..if she’s a light drinker, that risk would become 1.51%...and 1.78% for the moderate drinker"https://t.co/jZnY3cAyCR https://t.co/KS1S1LKVLg
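[Editor's note: the arithmetic behind the quoted numbers is just a relative risk scaling a small absolute baseline. A quick sketch using only the figures quoted in the tweet (function name is mine):]

```python
def absolute_risk(baseline_pct, relative_risk):
    """Scale a baseline absolute risk (in percent) by a relative risk."""
    return baseline_pct * relative_risk

baseline = 1.45  # quoted 10-year breast cancer risk for a 40-year-old woman, in %
# light drinker: relative risk 1.04 barely moves the absolute risk
print(round(absolute_risk(baseline, 1.04), 2))  # ~1.51%
# the quoted 1.78% for a moderate drinker implies a relative risk of ~1.23
print(round(1.78 / baseline, 2))
```

The point the quote illustrates: a headline relative risk can sound alarming while the change in absolute risk remains a fraction of a percentage point.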

2022-10-16 18:50:14 @eliwaxmann @RichardSSutton No it shows why a human with growing intelligence might become disgruntled. I think that the desire to lead, or at least not to be bossed around, has more to do with social drives evolved millions of years ago. Primates always jockeying for better position but not all species

2022-10-16 18:17:40 @eliwaxmann @RichardSSutton One of my faves!

2022-10-16 17:39:13 @RichardSSutton (I often worry that I don’t provide enough purpose for my dog who mostly just lays around all day. I think he’d be happier herding sheep all day or something )

2022-10-16 17:37:10 @RichardSSutton It’s not obvious that an “advanced” AI will resent being “subservient”. I suspect that resentment is built into humans due to our primate lineage. But if we evolved “advanced” AI modeled from eg dogs they might be thrilled to be kept busy doing what we want …

2022-10-15 18:03:03 @manos_tsakiris or: let's just transition to a system where everyone uploads their finished paper to arXiv/bioRxiv, followed by postpub review. No more journals-as-gatekeepers https://t.co/PUpfncfyIj

2022-10-14 20:26:52 RT @TrackingActions: Neuroscience needs large scale efforts to crack this — Brain Observatories are one key path forwardLead by Christo…

2022-10-14 18:29:33 @neuralengine @Labrigger @SussilloDavid @hardmaru Yes, and agriculture in the Near East and elsewhere advanced for at least 5000 years before the invention of writing. Presumably, much of this knowledge was transmitted through oral traditions.

2022-10-14 03:22:02 @SussilloDavid @hardmaru also worth noting that Neanderthals might have had language, in which case language-capable hominids were hovering on the brink of survival as a moderately successful species for >

2022-10-14 03:20:47 @SussilloDavid @hardmaru ..what i think is key is that each generation picks up survival tricks (like agriculture), and passing these tricks along to the next generation (as well as organizing activity in groups) requires language 3/n

2022-10-14 03:20:02 @SussilloDavid @hardmaru So it's not clear lang was that *useful* by itself. In other words, it's not clear language itself is enough to enable a person (or even tribe) to outperform pre-linguistic competitors...2/n

2022-10-14 03:19:40 @SussilloDavid @hardmaru perhaps. Certainly my introspection supports this view. But i think it's worth noting that for most of the >

2022-10-14 02:35:58 @EddyVGG @hardmaru yes i think the importance of language shaping your world view goes back to linguists/anthropologists Sapir &

2022-10-14 02:21:44 @hardmaru i think that's exactly right. I think the key innovation was language, which allows for the accumulation of knowledge across generations

2022-10-13 20:47:46 @haiderlab @MillerLabMIT @martin_a_vinck @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll the results seem largely consistent w/this paper, which showed LFPs could predict trial-to-trial variability in sound-evoked PSCs to within quantal fluctuations, no? https://t.co/FxsCPJwLsR

2022-10-13 18:29:38 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll and then there are changes in the shape of the action potential as it invades the synaptic terminals.

2022-10-13 18:28:49 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll somewhat independent of changes due to measurement are actual changes in the shape of the somatic action potential. But even those don’t necessarily have functional consequences downstream at the synaptic terminals.

2022-10-13 18:27:45 @anastassiou_lab @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll The shape of the action potential can indeed depend on how you measure it. but APs are typically thought to be largely all or none events, so i’m not sure we want to be focusing on those measurement subtleties

2022-10-13 17:55:38 @MillerLabMIT @martin_a_vinck @kendmil @PessoaBrain @neuralengine @LKayChicago @NicoleCRust @GauteEinevoll "Don't spikes also change depending on how you measure them? " What do you have in mind here?

2022-10-13 16:51:37 @kendmil @martin_a_vinck @PessoaBrain @neuralengine @MillerLabMIT @LKayChicago @NicoleCRust @GauteEinevoll in other words, as discussed in detail in another branch, the LFP is some complex &

2022-10-08 04:47:54 @MillerLabMIT @LKayChicago @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms so just to be clear, when you say LFPs work with spikes, do you mean ephaptic coupling (https://t.co/PXOuDS8Hph) or something else?

2022-10-08 04:27:44 @LKayChicago @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms tbh even after 427 tweets i dont fully understand @MillerLabMIT's view but i think he believes LFPs are no more or less causal than spikes whereas I believe spikes are causal and LFPs are an often useful way of measuring aggregate activity of many neurons https://t.co/WgzqtvcYhG

2022-10-08 04:05:40 @LKayChicago @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms There is no question or debate about whether the LFP is a potentially useful signal for indicating what’s happening. We all agree that it is. The debate is whether the LFP is somehow “causal" in a way that is independent of the spikes.

2022-10-08 04:02:49 @JustinKOHare Yes 9/5 full house. So who wins?

2022-10-08 03:41:04 Plz resolve this conflict over high stakes Texas Hold'em with the family. Two players both have 9s &

2022-10-07 20:03:50 @MillerLabMIT @SussilloDavid @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms i hope no one is feeling disrespected. I think lively debates among people who disagree about ideas (w/o being disagreeable) is twitter at its finest.

2022-10-07 19:54:07 @MillerLabMIT @SussilloDavid @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms There is also clear evidence that optical signals including optical birefringence are "coupled" to neural activity &

2022-10-07 19:51:04 @SussilloDavid @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Here is a gedankenexperiment https://t.co/XRefa8yNcx

2022-10-07 19:31:52 @SussilloDavid @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Exactly. famously, John Eccles found a way to rescue dualism: God loads the dice for each roll of the random release of neurotransmitter at the synaptic terminal. Beautifully untestable https://t.co/kFXSS0q1R4

2022-10-07 19:27:35 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms This argues that networks of neurons can be viewed as a dynamical system with oscillatory modes. Uncontroversial. Do LFPs play a causal role? we have to ask whether ephaptic coupling is essential to explain observed oscillations or whether synaptic coupling is sufficient.

2022-10-07 19:00:59 @AndrewHires @PessoaBrain @MillerLabMIT @DrYohanJohn @JaumeTeixi @behaviOrganisms Wait, you are writing a grant on the functional role of optical birefringence, independent of action potentials?

2022-10-07 18:58:09 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms Consider the following gedankenexperiment: Compare (1) perturb a specific subset of neurons projecting from X->

2022-10-07 18:55:06 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms i agree that computation "emerges" from populations of neurons wired in the appropriate way. But in the absence of evidence to the contrary, i see no reason to posit emergent fundamentally new biophysical principles/mechanisms underlying these computations

2022-10-07 18:31:36 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms No, the evidence is not the same. You can measure the effects of spikes on neurons in a dish, or in many settings where LFPs are essentially absent. And sometimes we can manipulate spikes/behavior in a small enough subpopulation of neurons so that the effect on the LFPs is small

2022-10-07 17:59:43 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms I am still unclear whether you are arguing that measuring population signals like LFP can be useful (they clearly can)

2022-10-07 17:54:39 @MillerLabMIT @nbonacchi @DrYohanJohn @JaumeTeixi @behaviOrganisms @PessoaBrain Agree that interpretation of causal manipulations can be tricky. But understanding without causal manipulations is even trickier, and sometimes impossible.

2022-10-07 14:58:57 @PessoaBrain sorry, tweet was misplaced in the thread. BOLD is great &

2022-10-07 00:45:46 @PessoaBrain @MillerLabMIT @DrYohanJohn @JaumeTeixi @behaviOrganisms Sure! And let's record the optical birefringence as well...it would be presumptuous---dogmatic even---for us to assume that these 0.001% changes in membrane optical properties do not play a causal role in generating complex behaviors https://t.co/htDiinOMKZ

2022-10-07 00:12:28 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms 2/2 Second, what are the best signals to record in order to gain insight into those computations that underlie behavior? That’s an empirical question, and could be single spikes, multi spikes, LFP, calcium etc.

2022-10-07 00:10:34 @MillerLabMIT @PessoaBrain @DrYohanJohn @JaumeTeixi @behaviOrganisms I think there are 2 separate ideas here to disentangle. 1) how do neurons communicate in a network to perform the computations that underlie behavior? I think we understand the biophysics pretty well, just not how they are organized to produce behavior...1/2

2022-10-05 23:12:58 RT @L_andreae: Please amplify and RT, looking for amazing scholars and mentors, programme going from strength to strength! @network_alba @F…

2022-10-04 15:23:17 @tyrell_turing @KordingLab yes, we have much older machinery for predicting the properties of the physical world. Squirrels can guess whether a branch will hold their weight w/o i think invoking the kind of "causal machinery" we are discussing here.

2022-10-04 15:20:11 @tyrell_turing @KordingLab our "causal inference" machinery evolved for social prediction ("why'd he hit me? bcs he's angry" etc). Religious myths then generalized these explanations to the physical world, but with priors of intentionality ("why is there thunder? bcs Thor is angry")

2022-10-04 03:43:53 @mbeisen 92 -- "Walker’s Paradise" (I grew up in Berkeley)

2022-10-03 21:19:18 RT @AnthonyMKreis: So, @TheOnion filed an amicus brief before the Supreme Court in defense of parody under the First Amendment… and it’s ex…

2022-10-02 21:41:42 @GaryMarcus @sd_marlow Is walking really a solved problem? I've seen mind boggling videos (eg by Boston Dynamics) of successes, but as with self-driving I suspect there remain a considerable number of challenging "edge cases", no?

2022-10-02 03:21:10 RT @neuromatch: We believe Everyone should be able to read and publish research for free. Research publishing should be commonly owned…

2022-09-29 17:53:57 please sign this open letter calling for reforms to academic publishing https://t.co/eyJfEbszhg

2022-09-28 13:03:09 Hope you all join us at noon EST today https://t.co/3CLZ8s2d0S

2022-09-28 12:51:24 OUCH: "now is not the time to idle around inventing particles, arguing that even a blind chicken sometimes finds a grain. As a former particle physicist, it saddens me to see that the field has become a factory for useless academic papers." https://t.co/K52LRcyJuL

2022-09-28 03:39:59 @GaryMarcus @davidchalmers42 @ylecun @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez "symbols are just things that stand for things" -- so that sounds like what as a systems neuroscientist I might call a "representation"...and neural nets are pretty good at transforming these representations..are those not "operations over variables"?

2022-09-28 03:14:26 @GaryMarcus @davidchalmers42 @ylecun @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez well, we kinda know for sure that symbols somehow "emerge" from neural circuits/activity, 'cuz they're in the brain, right? Whether they are differentiable in the brain is an open question, but i guess @ylecun argues that from an engineering POV we better make sure they are

2022-09-27 14:08:03 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez so how is my dog's "symbol" for me different from a representation?

2022-09-27 13:55:10 @bleepbeepbzzz @davidchalmers42 @GaryMarcus @raphaelmilliere @ak_panda @ylecun @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez I feel like throwing consciousness into the definition here is a bit like reducing this to a previously unsolved problem (also, many people use symbols without routinely introspecting about them. Most people cannot state the rules of syntax or phonology of their native language)

2022-09-27 13:49:31 @GaryMarcus @ylecun @davidchalmers42 @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez what is an "explicit symbol" and how can i distinguish it from an "implicit symbol" (= mere representation)? Or should we just use the Potter Test ("I know it when i see it")?

2022-09-27 13:45:54 @davidchalmers42 @GaryMarcus @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez but i would say that representations also compose to yield more complex reps. trivially, a dog can recognize you based on how you look, sound or smell. so what distinguishes this "complex representation" from a symbol?

2022-09-27 13:43:36 @ylecun @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez exactly, i would say animals reason and plan. this is why i am puzzled by @davidchalmers42's worry that symbols will "collapse" into representations...some people apparently are making a distinction that i do not understand. https://t.co/IxB6FW8Swb

2022-09-27 13:15:40 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez As for "planning", it's clear that animals plan. (wolves hunting etc). So do animals use symbols to plan, or are they doing this with mere representations? I am not clear whether there is some rigorous distinction @davidchalmers42 is making btw representations and symbols

2022-09-27 12:59:59 @GaryMarcus @davidchalmers42 @raphaelmilliere @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez "aside from language" includes language.

2022-09-27 12:56:36 @davidchalmers42 @raphaelmilliere @GaryMarcus @ak_panda @ylecun @bleepbeepbzzz @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez aside from language and a handful of other constructed systems (chess, math, music, etc), what is the evidence that people use symbols? Ie why is it important that symbols be distinguished from the mere representations into which they might collapse?

2022-09-26 18:55:20 @ylecun @bleepbeepbzzz @GaryMarcus @raphaelmilliere @davidchalmers42 @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez This is great! Looks like we've gotten to the crux of the disagreement between @ylecun and @GaryMarcus! Just a misunderstanding...dispute resolved!

2022-09-26 17:00:31 @bleepbeepbzzz @GaryMarcus @ylecun @raphaelmilliere @davidchalmers42 @MetaAI @noema @Jake_Browning00 @NoemaMag @luislamb @AvilaGarcez What are those two meanings of "symbol"?

2022-09-26 16:27:55 join me, Matt Botvinick &

2022-09-24 14:52:53 How academics really feel about scheduling meetings https://t.co/bvOmJO24Pq https://t.co/FVJWpQMeeq

2022-09-23 18:30:08 Faculty search in COMPUTATIONAL/THEORETICAL NEURO at CSHL. Please spread the word! https://t.co/QoSIo3qWIy

2022-09-22 04:48:35 @ylecun @davidchalmers42 metaphorically, lang is like the rational nums--closed under basic ops. Start in language, end in language, so easy to fool yourself that everything is contained. Linguistic dark matter is like irrationals: almost every num is irrational, &

2022-09-22 03:26:03 @davidchalmers42 @ylecun i think the lights on your ceiling are less "dark matter" than "heretofore unobserved matter", ie stuff that could be described in language but perhaps hasnt yet....less and less of that as the web grows

2022-09-22 03:20:56 @n_g_laskowski strongly disagree. I love a good discussion during my talks. In fact, i often offer a prize for the first question. (usually the first question is "what's the prize?" but at least it breaks the ice)

2022-09-22 03:15:25 @aniketvartak @patchurchland yes our social primate ancestors evolved to model the behavior of other individuals (will he attack me?) We then turned that system onto ourselves to predict our own actions. And then we are fooled into believing those predictions were causal...

2022-09-22 03:11:14 @davidchalmers42 @ylecun well, the cog rule is easy to state (CW &

2022-09-22 01:29:35 @ylecun @davidchalmers42 These examples requiring "physical intuition" are linguistic dark matter, ie stuff almost completely invisible to language. It's very hard to teach a person how to ski or play tennis with just words

2022-09-21 23:30:22 @patchurchland like animals we interact with the physical world using world models...squirrels jumping tree-to-tree, spiders catching flies, people hitting tennis balls and driving cars. Many of these are very hard to explain in full detail using language and are likely dark matter to LLMs

2022-09-21 21:05:11 Moving essay about finishing up a life of science after diagnosis with a terminal disease https://t.co/rOwfdYSVnZ

2022-09-20 21:53:42 @alexeyguzey https://t.co/mAt0BlqlHN

2022-09-20 21:47:02 @alexeyguzey if Uber drivers were recognized as employees then yes Uber would be required to pay minimum wage, and benefits. At CSHL, benefits (health, SS, etc) are an additional 40% of base pay

2022-09-20 21:44:01 @JudiciaIreview @alexeyguzey no, i think that would only be true if (1) they had fares 100% of the time, which they don't

2022-09-20 18:20:13 @alexeyguzey I think it would make more sense to demand Uber pay a floor minimum wage of $15/hr + expenses, but i think Uber lobbied against that. So if you accept the idea of minimum wage for drivers what's the alternative?

2022-09-20 17:19:56 @prokraustinator @MIcheleABasso1 @ElDuvelle @neuralreckoning @ashleyruba yep, we've been boiling frogs for 30 years and we're surprised they're finally jumping out.

2022-09-20 17:18:18 @alexeyguzey i guess the idea is $30/hr while you're on shift, right? but ubers dont have 100% occupancy, and there are operating costs (gas, etc). So to net $15/hr, charges when occupied need to be higher, no? Not sure if they got the exact numbers right but that could be the logic

2022-09-20 17:12:39 @prokraustinator @MIcheleABasso1 @ElDuvelle @neuralreckoning @ashleyruba i think the bigger difference with a residency is that unlike a PD it is not open-ended. Like med school, you start on day one and get squirted out at the end like a watermelon seed. I think the wage issue would be better tolerated if we could promise a fac position after eg 3 yr

2022-09-20 15:58:47 @tyrell_turing @andpru Just as I’m very confident *not* sleep training was the wrong choice for me. That’s six years I was in a fog. Like if I had done a surgical residency.

2022-09-20 15:00:29 My advice to all new parents is SLEEP TRAINING! Sometime in the first few months. My greatest regret in raising 2 kids is that we “co-slept” for 3 yrs each. That was 6 miserable years of sleeping sometimes 3-4 hrs/night (FTR: They are turning out great. We suffered) https://t.co/kSwlOKOVTW

2022-09-18 22:46:56 @EvelinaLeivada @GaryMarcus It matters to me bcs I believe humans aren’t so special, and that almost all our capabilities have animal antecedents. Since I work on neural circuits in animals I would like to be able to connect concepts like “understanding” to things we can study in non-human animals

2022-09-18 20:14:46 @EvelinaLeivada @GaryMarcus ok, i'll bite. what does it mean to "understand"? And can animals "understand"?

2022-09-18 20:13:56 @blamlab That correlation does not imply causality was one of the main reasons people got so excited about optogenetics 15 yrs ago...we in systems neuro knew it, but had only the crudest tools(yes, it's tricky due to eg compensation. geneticists have been confronting this for decades)

2022-09-18 20:03:21 @GaryMarcus I dont think we understand the meaning of "understand". Just as “It depends on what the meaning of the word ‘is’ is.” https://t.co/TrzRx3lqfs

2022-09-17 22:48:32 @aniketvartak Right the fact that some humans “happen” not to understand could arise from eg lack of interest.

2022-09-17 22:30:48 @aniketvartak Are you saying that some humans just *happen* not to understand how bicycles work and so make mistakes whereas dalle fundamentally *cannot* understand?

2022-09-17 20:41:36 @neuroecology yikes

2022-09-17 20:19:22 Even though i am a terrible artist, as an avid cyclist and someone who has assembled and repaired bikes over the years, these are mistakes i would never make. (Dalle makes some strange choices esp. around the crankset) https://t.co/hC6hsH3mmA

2022-09-17 20:12:37 Great starting point for discussion of what it means for AI to "understand" something. Many humans draw nonsense bicycles clearly showing no understanding of how bicycles work (sorry to ruin the joke, @alexeyguzey ) https://t.co/xg9WmHluk0

2022-09-16 20:54:02 @PamelaReinagel it's pretty hard to find a decent-paying job, with good health insurance, retirement benefits, $$ for kids' educations, etc, where 30hrs/week is considered adequate to keep the job. Lack of a good social safety net in the US is one of the main policy failures i have in mind

2022-09-16 16:29:42 if you watch the video, almost all of the predictions were accurate, except the part about the 30 hr work week, which Keynes predicted decades earlier (15 hrs actually)But that missed prediction was not a technological failure but a series of policy failures. https://t.co/ijJl2R03Qb

2022-09-16 15:36:30 "by the yr 2000, the US will have a 30-hr work week and month-long vacation as the rule. A lot of this new free time will be spent at home...we could watch a football game or a movie shown in full color on our big 3D TV screen. We may not have to go to work...work will come to us" https://t.co/02wRP4C3VI

2022-09-15 03:29:23 RT @patagonia: Hey, friends, we just gave our company to planet Earth. OK, it’s more nuanced than that, but we’re closed today to celebrate…

2022-09-13 01:20:59 RT @joshdubnau: @EricTopol Except for the little fact that virtually everything that biomedical researchers have done since the genome proj…

2022-09-11 14:30:25 @neurograce @neuralreckoning @somnirons I agree that won't work

2022-09-11 13:17:36 @neurograce @neuralreckoning @somnirons Are we talking about anonymizing authors or reviewers? I agree you can't do authors. But you could have a system where reviewers are associated with pseudonyms, so they can develop reputations independent of their real world identity. Like certain Twitter accounts

2022-09-11 03:42:35 @cdk007 Yes, good point. This is an ideal opportunity to teach my 12 yo about bimodal distributions

2022-09-11 03:29:48 @cdk007 One can put a pair of shoes worn by player X on ebay and see how much they sell for. This turns the subjective value into a concrete number. For Curry it's $58K--more than purchase price

2022-09-11 03:13:18 @neurograce @neuralreckoning @somnirons why can't you have anonymized postpub review?

2022-09-10 19:25:47 12 yo asks: What is the fame breakeven point for shoes? Apparently if Steph Curry wears a pair of shoes they increase value. If I wear them they decrease. So he argues there must be someone just famous enough so the shoes just retain their value. Who?

2022-09-10 16:41:26 RT @historyinmemes: In 1999, only 6 years after the birth of the worldwide web, Bowie spoke about the "unimaginable" effects of the Interne…

2022-09-09 15:57:59 @NicoleCRust So we care about Nernst bcs 'it's foundational for understanding something else that actually matters', ie seizures? I might have gone with: Bcs 'It's foundational for understanding something else that actually matters', ie Hodgkin Huxley. But maybe i dont understand the rules

2022-09-07 04:52:01 RT @GoogleAI: Today we introduce an ML-generated sensory map that relates thousands of molecules and their perceived odors, enabling the pr…

2022-09-05 16:15:50 @rushkoff “How do I maintain authority over my security force after the event?” "making guards wear disciplinary collars of some kind in return for their survival." Someone should warn them: obedience collars have been tried and ultimately fail, as we learned in Star Trek https://t.co/Bsttfob2y7

2022-09-04 18:08:46 @ScottishWaddell 100%

2022-09-04 18:07:18 @ScottishWaddell absolutely! i used the exact wording of a previous poll to test the hypothesis that the outcome of the poll is going to depend strongly on who engages with you on twitter. https://t.co/lnQkCBwAXj

2022-09-04 17:57:45 @anne_churchland @patchurchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @patchurchland I thought philosophers actually do wake up in the morning saying, "Gosh, I wanna solve decision-making (or consciousness or morality) today!" What much smaller subproblems occupy you?

2022-09-04 01:19:25 @Labrigger @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @grimalkina In an ideal world, society would support curiosity driven science out of curiosity. But in practice we compete for finite federal dollars, and it's hard to justify spending $62B on the NIH vs eg $200M for the arts (NEA). (Physics &

2022-09-03 22:32:55 @Labrigger @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau @grimalkina Whose main goal? I’m pretty sure that the NIH’s main goal is human health. But of course there can be multiple goals---your goal could differ

2022-09-03 20:25:59 @jbimaknee Much/most of the basic advances that contributed to human health arose from curiosity-driven science not directly related to health. Thus even if my own personal motivation is not health, i still believe what i (and others) do contributes. eg https://t.co/TUWR5ATw5i

2022-09-03 19:52:21 @grimalkina @Labrigger @AdrianoAguzzi @tyrell_turing @koerding @anne_churchland @joshdubnau good point! I just launched the exact same poll, identically worded. So we can compare whether the people who follow me on twitter feel the same way as @Labrigger https://t.co/v0XJUkp2EZ

2022-09-03 19:50:08 What do you want out of your own neuroscience experiments? Insights into the human brain and/or health? Insights into how animal brains work? Or insights into principles of brain function that can lead to better computers/AI? (or something else?)

2022-09-03 18:52:06 @anne_churchland @AdrianoAguzzi @tyrell_turing @koerding @joshdubnau I think the only justification for society supporting Neurosci more than any field, like philosophy or art history, is the potential for human health. And i think what we learn about animals is relevant to humans. But if i'm honest, human health is not what drives me personally

2022-09-03 18:03:41 interesting...though this is primarily a statement about who follows you on twitter, right? eg i would expect very different results from each of eg @AdrianoAguzzi @tyrell_turing @koerding @anne_churchland @joshdubnau https://t.co/7dmmnRCZH8

2022-09-03 00:49:44 The Shifting Baseline: how the natural world has changed over the last few generations (and beyond) https://t.co/pn1YorM0Rm

2022-09-02 11:37:01 Sounds amazing! https://t.co/qafbcNuC1Y

2022-09-02 11:31:54 Sounds amazing! https://t.co/qafbcNv9Rw

2022-09-01 13:31:08 LLMs are highly controversial. Some say they verge on sentience and deserve to be treated as people. Others call them glorified lookup tables. @sejnowski argues that they are like the Mirror of Erised, reflecting your expectations of them. Fun read! https://t.co/7pyh7oeNyz https://t.co/FdJQAjdTUB

2022-09-01 13:06:44 @WiringTheBrain here's a complicated story about how killing off wolves (apex predator) in Yellowstone in the 1930s led to a disruption of the ecosystem and decline of many species. Reintroducing wolves in 1995 fixed the problem https://t.co/mJw764ihw8

2022-08-31 19:01:46 @WiringTheBrain Many unintended consequences in public policy, especially tax code. A fave: to evade limits on CEO pay by capping what is tax deductible, companies compensated in options, which ended up leading to even larger pay packages and CEO focus on stock price https://t.co/HOuNzPIcGw

2022-08-31 12:56:37 Are you trained in AI, and interested in doing original research at intersection of Neuroscience and AI? Come to CSHL and join a vibrant community for 1-3 years as a NeuroAI Scholar! (plz retweet) https://t.co/wbbJMB6DoM

2022-08-28 17:11:15 @ent3c it's one thing to know in principle a trait is heritable, another entirely to actually see a signature. In principle the link could involve nonlinear interactions of 100s of genes. Eg ident twin concordance in schizo is 50% but we are not very good at predicting it..

2022-08-28 16:36:51 RT @cshperspectives: I think I'm just gonna keep shouting "Plan U" for the next six months https://t.co/Ag3lzsl3gB

2022-08-27 14:00:51 "We conclude that: 1) scientific societies and the individual scientists they represent do not always have identical interests, especially in regards to scientific e-publishing

2022-08-26 21:27:33 brilliant defense of the current academic publishing business model...explains why the recent government-mandated requirement that all publications be immediately available is very unfair to publishers https://t.co/7INdJNnAlW

2022-08-26 03:28:29 @kohn_gregory @GaryMarcus @joe6783 Thanks for the pointer! looking forward to reading it.

2022-08-24 03:35:14 @GaryMarcus @ylecun LLMs are absolutely amazing, but i agree that without grounding in the physical world they are unlikely to take us across the finish line..

2022-08-24 03:12:54 @GaryMarcus @ylecun are you concerned that there is lot of verbal knowledge that is never written and hence is invisible to LLMs (at least until they start mining all the data from Alexa and Siri)? Interesting point. My intuition is that not that much is missing, but who knows? We'll find out soon

2022-08-24 02:37:44 @GaryMarcus @ylecun Assuming that much of what we "know" is stored in the connection matrix among our 1e11 neurons, i could (in principle) quantify how well we could predict that matrix from (1) our genes

2022-08-24 01:15:04 @GaryMarcus @ylecun right...so assuming we include in "knowledge" stuff like how to pick up an object, or how to walk without falling over, i agree 100%.

2022-08-24 01:09:51 @GaryMarcus @ylecun At the risk of putting words into @ylecun's mouth, I dont think he was arguing that language wasn't indispensable to "modern humans as we know them". What makes us "uniquely human"--language--is real, but just a small frac of the total...we just arent that different from animals

2022-08-24 00:54:16 @GaryMarcus @ylecun Humans would not have outcompeted animals as effectively if not for language, which allows knowledge accumulation over generations. But without the foundation (which we share with animals) provided by 500Myrs of evolution we would fail like LLMs. Moravec said it best https://t.co/MkoDfaGNf7

2022-08-23 19:09:13 @anne_churchland I have recently switched to using latex for grants and love it. I find that I get at least as much control over figures as under Word. But Overleaf does suck for tracking changes

2022-08-23 04:58:56 @anne_churchland I love Latex because, as the great astronomer Chandrasekhar famously said, a document that beautiful must be true(whereas I am suspicious of even F=ma rendered in Word)

2022-08-22 14:21:33 @crllonghi I very much enjoyed “Other Minds" by Peter Godfrey-Smith

2022-08-20 22:28:54 The real surprise here for me is not so much that Brazil is so big but rather that the Earth is round https://t.co/C7AzsKkknh

2022-08-18 20:34:32 @Elnaz_AK @KordingLab i absolutely suck as an artist, but i'm pretty good with words. I would love to be able to convert words in my head into pictures I could share with others.

2022-08-18 20:22:58 @nicholdav @KordingLab @tyrell_turing only sort of. a scientific review typically is more than just a list of facts with pointers to the papers. A good review presents a worldview which is abstracted from the field. (Current LLMs might not be able to provide that, but that's a different--hopefully resolvable--issue)

2022-08-18 20:16:46 @KordingLab Upon reflection, I am increasingly convinced this is largely a non-issue. Who is the injured party? For whatever reason, art collectors pay top $$ mainly for the original work of art. Perfect replicas can be nearly worthless

2022-08-18 20:12:57 @bradpwyble @tyrell_turing @KordingLab @nicholdav i dont care if they care. Luckily, if Elsevier is like the hypothetical 800 lb gorilla in the world of academic science, the developers of the LLMs are more like King Kong. They will not notice Elsevier's complaints.

2022-08-18 20:10:02 @KordingLab @tyrell_turing @nicholdav I think the answer is obviously "no, we do not want to limit LLM-generated review articles to work not under copyright" though i'm sure Elsevier would like that

2022-08-18 19:18:06 @KordingLab @tyrell_turing @nicholdav I am looking forward to a time when I can ask an LLM to review a corner of the scientific literature. Should that review be limited only to work not under copyright?

2022-08-18 19:16:01 @KordingLab @tyrell_turing @nicholdav It is currently legal I believe for an artist to make a living by generating Picasso or Rembrandt knock offs, as long as they are not trying to pass them off as originals. Images are protected, not styles. Why should the standard for machine-generated images be any different?

2022-08-16 16:54:19 @NoahShachtman @ariehkovler Jack Nicholson in "Five easy pieces" https://t.co/ZDFPsqfdaq

2022-08-16 02:07:34 @StevenBratman @quotebread @SteveStuWill seems interesting! what's the take home message?

2022-08-15 20:00:00 @prokraustinator @quotebread @SteveStuWill But I think you could make a strong argument for some actual "progress". Like amniotes "figuring out" an alternative to laying eggs in the water freed them to explore a lot of terrestrial environments.

2022-08-15 19:58:00 @prokraustinator @quotebread @SteveStuWill yeah, that’s why I suggested 1 million year adaptation. And, sure, some modern organisms probably require some things that just weren't around back then.

2022-08-15 19:45:12 @quotebread @SteveStuWill Eg once amniotes evolved, this presumably opened up a bunch of terrestrial niches. So maybe at least some advances are real, more like Gore-Tex than mere fashion differences?

2022-08-15 19:23:54 @quotebread @SteveStuWill Is that really true on long time scales? If you transported a bunch of successful modern plants &

2022-08-15 16:26:28 Great discussion of "effective altruism" and utilitarianism more generally https://t.co/Vkw8vJFqSr

2022-08-15 15:27:32 Congratulations Justus! https://t.co/bGopJipdGh

2022-08-15 15:12:34 @bradpwyble @KordingLab i agree that creative work merits protection. i'm just pointing out that even though the current system might seem to be protecting artists, it mostly isnt. Art dealers, record labels, spotify, disney, etc are reaping most of the profits bcs of a system they created

2022-08-15 14:33:49 @KordingLab Isn't the core issue that we maybe shouldnt be monetizing creative output the way we do? Copyright laws are not protecting "content creators," who see very little of the profit. They are designed to protect big players, eg Disney etc. https://t.co/r91RNU6Ghd

2022-08-14 16:36:05 @ideal_politik @stevesi There is a virtuous cycle. New discoveries allow us to make new tools, which enable new observations, which enable new discoveries, etc.

2022-08-13 20:06:48 @urfagundem and ideas were motivated by observations, which were enabled by tools...and so the virtuous cycle of science continues

2022-08-13 19:02:29 @TrackingActions @kevin_nejad i agree except i would change the verb tense from "have had" to "are having". this is probably the leading candidate for a first big impact of ML in neuroscience, and likely to payoff big in the future! But imo perhaps not quite up there with 2P or optogenetics (yet)

2022-08-13 18:49:48 @aniketvartak Yes, there is a virtuous cycle. New discoveries allow us to make new tools, which enable new observations, which enable new discoveries, etc.

2022-08-13 18:48:45 @aniketvartak IMO this quote is important bcs it highlights a key component of sci progress (techniques) that many scientists undervalue. Relativity might be the (rare) exception...i just dont know the history well enough. But most discoveries were catalyzed by new techniques

2022-08-13 18:31:33 @joshdubnau I very much doubt he would make a units error like that. Presumably he said it takes 1000 nanobiologists to equal one microbiologist.

2022-08-13 18:28:14 @aniketvartak indeed, that was likely closer to the original quote. But Brenner was happy with the (IMO pithier) version he is often credited with, so i quoted that one https://t.co/2kLY6EYl0K https://t.co/nVZKiune3W

2022-08-13 14:07:02 @kevin_nejad ANNs may very well one day have a big impact on neuroscience, but so far I’m not sure they have

2022-08-13 14:05:37 @kevin_nejad The last 20 years have seen an explosion of new technologies which have enabled new discoveries: Optogenetics, massively parallel optical and electro recording, targeted delivery to specific cell types, circuit tracing etc. These have fundamentally changed the questions we ask

2022-08-13 02:00:36 "Progress in science depends on new techniques, new discoveries and new ideas, probably in that order" --- Sydney Brenner (~1980) https://t.co/pMcgUaRDcH

2022-08-12 17:29:53 @matias_kaplan Wow. What paper is that from?

2022-08-12 13:32:13 @WiringTheBrain Maybe I missed the memo but isn’t this all the result of overloading words like “explain” and “predict”? You don’t “need” emergent thermodynamic explanations since detailed particle motions in principle can be predicted without it, but we find it a useful kind of explanation

2022-08-11 22:04:01 @TheAngelo2258 The near-minimum wage job I worked 10-20 hrs/week at was actually a significant fraction of total expenses back then. Not really today. It did not build character. Just made me tired and probably cut into my grades a bit.

2022-08-11 21:36:51 @joshdubnau If Mensch as a gender-neutral term for "human being" was good enough for the great Yiddish scholar Martin Luther, it's good enough for me. https://t.co/A3GirPocVI

2022-08-10 15:55:27 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano OK, I’m gonna add to my to do list to go to the local primatologist bar and pick a fight But in the meantime the crux of my argument doesn’t depend on relative ranking of primate intel; it merely observes that primates have not been particularly successful for most of our evo hist

2022-08-10 15:42:27 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano the reason i bring this up is in response to @GaryMarcus's suggestion that evolving human intelligence is hard. My counter is that it's not hard

2022-08-10 15:39:48 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano But stepping back--my argument is: to the extent primates in general, and hominids in particular, were smarter than other species, the payoff didnt become obvious until recently. Humans &

2022-08-10 15:31:25 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano I'm not convinced group size alone quantifies social intel. Intuitively, seems like social intel is related to the complexity of the model you have of each individual you interact with. Armies and other hierarchies scale well by limiting the complexity of the requisite models

2022-08-10 13:09:16 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano Although as a rule of thumb group size might be a reasonable proxy for social intelligence, I’m not sure that it can be applied effectively across species. By that measure ants, bees, naked mole rats, pelican flocks might all be expected to be of superior social intelligence

2022-08-10 03:54:46 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano but i'm intrigued by the claim that baboons have linguistic abilities comparable to great apes. Koko famously had a vocabulary of >

2022-08-10 03:49:34 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano i'm not really sure how to define intelligence much less social intel rigorously, but most people put humans at the top of the intel scale so i figured it was safe to put our closest relative (chimps) higher than macaques.

2022-08-09 14:05:06 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano not all selection benefits the species. Eg sexual selection of colorful plumage in birds. I'm not suggesting that intelligence is merely like peacock feathers--its role is much more complex--but selection need not always be a net plus for the species

2022-08-09 13:45:18 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano I buy the theory that the main driver of intelligence in the primate lineage (and eg elephants) has been social competition and cooperation. so if you buy that chimps are smarter than baboons (albeit measuring intel is ill-defined), then prob yes.

2022-08-09 13:42:47 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano and if you believe (as many do) that Neanderthal had language, then prob so did our common ancestor almost >

2022-08-09 13:40:16 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano Modern humans arose >

2022-08-09 13:36:23 @martingoodson @GaryMarcus @ylecun @tdietterich @miguelisolano sorry, i should have been clear. Of course social intelligence is absolutely central to human dominance. But i was arguing that the added intelligence btw eg monkey and chimp was driven by social competition, and yet chimps dont seem to be "more successful"

2022-08-09 03:40:09 @kendmil @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae as someone who grew up on the learning/LTP/hippocampal slice side, i'm curious why you (pl) thought devel was a good model of learning? I do understand being interested in devel for its own sake, but isn't learning a good model of learning, and also easier to study?

2022-08-09 03:02:57 @kendmil @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae "a model system for activity-dependent synaptic plasticity and hence of learning." This whole thread was about whether there was a distinction (and if so, what) btw development and learning, so i question the use of "hence" here...

2022-08-09 01:12:44 @neuralreckoning @tfburns Go for it! Looking forward to reading it!

2022-08-09 01:09:59 @tyrell_turing @TimoWitten @SaraASolla @KordingLab @recursus @ShahabBakht @NXBhattasali @joshdubnau @WiringTheBrain @L_andreae In the 80s ML hadn’t clearly differentiated from CN. The same people. Eg what was hopfield? Kohonen? My go to meeting as a grad student was nips. I started NIC (which evolved into cosyne) in 96 bcs by then nips was pure ML. But in the early 90s it still embraced CN

2022-08-09 01:01:39 @neuralreckoning @tfburns Potentially you could negotiate OA for all digital versions of your masterpiece with a publisher. At least some academic editors are just people who want to help disseminate knowledge and publish physical books

2022-08-09 00:38:39 @neuralreckoning @tfburns What format would be better ?

2022-08-08 22:40:44 @GaryMarcus @martingoodson @ylecun @tdietterich @miguelisolano Sapiens, Neanderthal, denisovan, Hobbit, and an archaic population inferred from genomic analysis

2022-08-08 22:38:46 @GaryMarcus @martingoodson @ylecun @tdietterich @miguelisolano “Who We Are and How We Got Here” by David Reich https://t.co/Ckwo3oP4wg

2022-07-24 18:46:35 RT @KarlHerrup: Indulge me in a long thread with thoughts on the Piller bombshell in Science about the fraud surrounding the Lesné Aß*56 da…

2022-07-15 14:00:15 @NicoleCRust @CarlosEAlvare17 @WiringTheBrain @statsepi I think you'd find widespread agreement that Big Question advances might accelerate progress on circuit/psych dz like schiz or depression. Perhaps less agreement on brain tumor, or eg degen dz like Alz, or even stroke. So maybe continuum?

2022-07-15 13:26:34 @NicoleCRust @CarlosEAlvare17 @WiringTheBrain @statsepi Are you more concerned with slow progress in treating neuropsychiatric disease or "understanding" how the brain works (the Big questions)? Or both? Or you think they're correlated ?

2022-07-12 20:11:25 @joshdubnau Ability to count to 5 optional?

2022-07-12 17:19:10 @TechRonic9876 impressive! but i think this osprey catching a fish still wins https://t.co/dKJJ5dZaiD

2022-07-12 15:47:54 @djintwt Right, many previous advances in AI were inspired by neuro. And we know many things in Neuro that could be useful in AI, but we don’t yet know how to port them. As soon as we do they are no longer Neuro, they are AI

2022-07-12 15:24:53 @djintwt CNNs?

2022-07-12 12:53:13 @HulsmanZacchary yes, the trick is to understand how the brain computes sufficiently well so we can abstract the key principles. Not easy! Simply copying brain circuits won't work

2022-07-12 12:28:49 @tibbydudeza Hmm. Neuro inspiration for AI has worked out pretty well so far:-->

2022-07-12 12:25:18 @HulsmanZacchary indeed! planes are much better than birds at taking heavy loads long distances very fast. Just as computers are much better than brains at performing many operations very fast with huge datasets. But if we wanted planes to do this we'd study birds. And if we want computers to do AI...

2022-07-11 16:38:47 @kw_cooper @GaneshNatesh so cool!

2022-07-11 14:48:36 @GaneshNatesh i think we would find it really difficult to match hummingbirds and eagles in their aerial agility

2022-07-11 14:46:25 @IntuitMachine @GaneshNatesh Although translation was C3P0's specialty, i think he was a general purpose android who could work on the farm and presumably do the dishes(Unlike a lot of famous humanoid AIs (like Terminator), he was not built for combat, which is why i chose him as an example)

2022-07-11 14:34:05 @GaneshNatesh i think what many people ultimately want is C3P0--an AI that can do anything a human can do (ideally better)

2022-07-11 13:15:00 @giorgiogilestro well, no, I actually think a good path to human-level intelligence would be to first match that of "simpler" animals like mice. But that's a separate discussion and I didnt want to try to pack too much into a single tweet.

2022-07-11 12:44:47 @Dr_Cuspy von Neumann *thought* he was taking inspiration from the brain when he laid out the architecture for the modern computer I guess it's a judgement call whether he retained the right aspects. But overall the whole computer thing has been working out pretty well so far. https://t.co/kUdxrdJgpY

2022-07-11 11:40:32 Also are these really eagles? or ospreys ? Or something else ?

2022-07-11 10:59:28 A common critique of neuroAI is "sure, birds inspired planes. But modern engineers don't design planes based on birds. So why study brains to achieve human-level intelligence?" But if our goal were to achieve "bird-level flight," mightn't we want to study how these eagles fly? https://t.co/FsynAlSN3W

2022-07-10 16:21:16 @nomad421 @tylerneylon @arjunrajlab Well the basic "code" is pretty universal across life / eukaryotes. It is almost true that one could put human DNA into yeast and try to coax it into producing an embryo

2022-07-10 15:47:18 @nomad421 @tylerneylon @arjunrajlab I guess i dont understand what an "inert" parameter would be. In the case of an ANN, there is a small amount of code that is needed to interpret the weights and generate input/outputs. In a cell, there is machinery (encoded in the DNA) like polymerases etc to boot up the cell.

2022-07-10 15:37:18 most of my biology classes failed to present the questions to which what we learned was the answer. https://t.co/zeH7Zqbs0V https://t.co/MCNLZhGzoF

2022-07-10 15:26:26 @nomad421 @tylerneylon @arjunrajlab conceptually i'm not sure I see the difference btw a 700MB program and 700MB of params. Eg I could write a C program and embed 700MB of params

2022-07-10 00:56:44 Minsky famously bashed neural networks in his 1969 Perceptrons. I did not realize he was already hating on them in 1961. (this paper also contains one of the earliest uses of "reinforcement learning," which according to Google n-grams first appears in 1959) https://t.co/wpSVg0Ywx5

2022-07-10 00:41:45 @santoroAI @tyrell_turing @kaznatcheev @andpru Comparing size of genome vs ANN:
Size (in bits) of genome = N * 2 bits/bp, N = # of bp in genome (bp = basepair)
Size (in bits) of ANN ≈ N * 32 bits/weight, N = # of weights in ANN
===
Genome encodes its own decoder. ANN "decoded" by a (small) program so add that on
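
The bit-counting in the tweet above can be sketched in a few lines. The specific genome and network sizes below are illustrative assumptions, not figures from the thread:

```python
def genome_bits(n_basepairs: int) -> int:
    # 4 nucleotides -> 2 bits per basepair (per the tweet's formula)
    return n_basepairs * 2

def ann_bits(n_weights: int, bits_per_weight: int = 32) -> int:
    # fp32 weights -> 32 bits per weight (per the tweet's formula)
    return n_weights * bits_per_weight

# Assumed, illustrative figures:
human_genome = genome_bits(3_100_000_000)   # ~3.1 Gbp human genome
big_ann = ann_bits(175_000_000_000)         # a 175B-weight network

print(f"genome: {human_genome / 8e9:.2f} GB")
print(f"ANN:    {big_ann / 8e9:.0f} GB")
```

With these assumed figures the genome comes out under 1 GB while a 175B-weight fp32 network is around 700 GB, which is the asymmetry the thread is pointing at (before adding, in each case, the decoder: cellular machinery for the genome, a small program for the ANN).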

2022-07-09 11:47:24 @santoroAI @tyrell_turing @kaznatcheev @andpru The genome size of melon (12 chromosome pairs) is estimated to be 454 Mb, and cucumber (7 chromosome pairs) has a genome size of 367 Mbp. (Each bp is 2 bits, bcs there are 4 nucleotides.) https://t.co/VaiRKzAnbD

2022-07-09 11:44:24 @santoroAI @tyrell_turing @kaznatcheev @andpru Which part do you consider vague? Do you not think it is possible to compare the size (in bits) of eg Bert and gpt3? Or the size (in bits) of a worm genome like c elegans to a human genome? Or that you can't compare genome size to stuff on a computer for some reason?

2022-07-08 20:11:35 excited to share @anqi_z PhD work, now on biorxiv! https://t.co/lEYr5luZMD

2022-07-08 16:20:42 @IntuitMachine @nomad421 @tylerneylon @arjunrajlab Right but the fact that most animals function pretty well at birth indicates that the genome can specify a lot of structure.

2022-07-08 16:02:18 @nomad421 @arjunrajlab @tylerneylon I think Moravec's paradox is still relevant https://t.co/S8WAc7xzya https://t.co/zGdzDCKBK9

2022-07-08 16:00:03 @nomad421 @arjunrajlab @tylerneylon yes, but part of my argument is that whatever is special about humans (mostly language-related) is a small step from our pre-verbal ancestors: 1 M yrs ago. So if we can achieve mouse-level "intelligence", we're almost there.

2022-07-08 15:55:33 @arjunrajlab @nomad421 @tylerneylon if you mean that they are "externally specified" by learning--i argue that the vast majority of what most animals can do is specified innately, as demonstrated by the fact that most animals function pretty well at birth. Humans may be a bit of an outlier. https://t.co/9i0Nnpnrs6

2022-07-08 15:35:21 @nomad421 @tylerneylon @arjunrajlab i'm not really sure how a 700mb program differs from 700mb params. I view these are artificial implementational distinctions eg what if my program is "treat the 700mb param vector as a program"?

2022-07-08 14:49:47 @nomad421 @tylerneylon @arjunrajlab Right, the comparison isn’t perfect But the size of the genome, which specifies the developmental program for building a brain, places an upper bound on the complexity of the specification of the brain’s wiring diagram

2022-07-08 14:41:41 @AdrianoAguzzi @tylerneylon Except that the parts self-assemble. Almost every difference between our brain and that of c elegans is contained in our genome, which specifies a developmental program that causes the brain to wire up properly. https://t.co/9i0Nnpnrs6

2022-07-07 18:19:19 @PresNCM Paperpile. After using EndNote since starting as faculty I finally got fed up. The transition to Paperpile took about 20 minutes.

2022-07-07 02:41:17 Discussing with co-author: "neuroscience" or "neurobiology"? To me the meanings are the same, but NS seemed more modern than NB. And indeed, it looks like NB was dominant until 1990, at which point NS crushes. Bonus points to whoever finds the first use in 1869! @NXBhattasali https://t.co/FTILm8YCDy

2022-07-06 12:24:35 @L_andreae Looking forward to reading it!

2022-07-04 22:11:55 @MillerLabMIT Shouldn't it be "Spontaneous Spiking Is *Correlated with* Broadband Fluctuations" not "Governed by"??

2022-07-03 13:37:04 @GCREllisDavies Physiology papers from my lab still sometimes have only 2 (or 3) authors... But mobio papers are more likely a team effort, often involving techs and multiple students/postdocs. https://t.co/doCNxrMcCX https://t.co/zdfIpCGnEG https://t.co/ql0WhGjfCB

2022-07-03 02:59:09 Eve Marder's ruminations and reflections on changes in how we do science. https://t.co/omvB0I2nWr

2022-06-19 19:20:59 @knutson_brain @Antonino__Greco @PessoaBrain I just finished this fascinating biography of Cajal. Cajal may have been brilliant, but he does not come off as a nice guy at all. Golgi on the other hand mostly quite humble https://t.co/mZIzOVCAWl

2022-06-11 17:51:22 @MarthaBagnall No public transport here and roads are not bike friendly. That’s why this is a bit of a surprise. Growing up in Berkeley, bikes and public transport gave me autonomy by age 10

2022-06-11 16:55:37 My rating is pretty low (I sometimes try to engage in unwanted conversation), but their passenger ratings are pretty low too so we seem to be stuck with each other for now.

2022-06-11 16:55:36 As the dad of suburban teenagers, I did not anticipate the extent to which my job description would overlap so heavily with those of an Uber driver

2022-06-09 20:45:08 @daeyeol_lee I did my first neuro grad rotation with him, which set me on the road to becoming a computational neuroscientist. Sad to hear of his passing.

2022-06-09 20:25:58 @cdk007 I’m not an economist but it seems to me that if demand exceeds supply then out of stock rate should approach 100% (at least in a perfect market): the moment new stock becomes available it should be purchased so 0% left (100% out of stock). Or price goes up reducing demand. No?

2022-06-09 17:59:25 @zga_aaa @GunnarBlohm i think interactive figures would be cool, but to me 98% of writing a paper is figuring out how to distill and communicate a small number of simple ideas. As a reader, i'm typically more interested in hearing about what someone learned than what they did

2022-06-09 15:46:36 @GunnarBlohm how is the traditional paper format outdated? leaving aside issues about barriers to dissemination due to refereeing and profit motive--what is wrong with the paper format itself? What would be better?

2022-06-08 20:40:38 @hein_prizes @KiaNobre Congratulations Kia!

2022-06-06 02:51:58 @DavidBahry @GaryMarcus Kind of. If you are searching an N-dim space and take steps in random directions, keeping steps that improve fitness, the net effect can look like a gradient across the population. But cost is O(N) evals/step so a lot less efficient than if each step followed the gradient
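
The random-walk-vs-gradient point in the tweet above can be sketched as a toy hill climber. The fitness function, dimensionality, step size, and iteration count are all illustrative assumptions:

```python
# Hill climbing by random steps needs no gradient access, but most
# trial evaluations are wasted, vs. one evaluation per step if each
# step could follow the gradient directly.
import random

N = 20  # dimensionality of the search space (assumed)

def fitness(x):
    # simple concave fitness, maximized at the origin
    return -sum(v * v for v in x)

def random_step(x, step=0.1):
    """Perturb in a random direction; keep the trial only if it improves."""
    trial = [v + random.gauss(0, step) for v in x]
    return trial if fitness(trial) > fitness(x) else x

random.seed(0)
x = [1.0] * N
for _ in range(2000):  # 2000 fitness evaluations, many rejected
    x = random_step(x)

# Across the kept steps the point drifts toward the optimum as if it
# had followed a gradient, at the cost of many evaluations per unit
# of progress compared with a true gradient step.
```

A usage note: with gradient access, a single step of `x -= lr * grad(x)` makes comparable progress per evaluation, which is the O(N) efficiency gap the tweet describes.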

2022-06-05 22:56:36 @autometalogole2 @GaryMarcus Back of the envelope: 10^30-10^40 individual animals have lived since the dawn of multicellular life 500 Myrs ago. But I have no idea how many flops it would take to simulate 1 yr of life adequately.

2022-06-05 22:53:32 @sjogren_rickard @GaryMarcus It’s true that a random sequence of amino acids is unlikely to generate a stable 3ry structure. But a small perturbation of an existing protein has a pretty good chance of being a reasonable protein. So “brute force” = “random walk” in this context

2022-06-05 19:15:27 @GaryMarcus Evolution uses brute force. It doesn't even have access to the gradient. But it benefits from the >

2022-06-03 17:36:58 @tyrell_turing sure, it's neurons (or transistors) all the way down. So linguistic data are ultimately like other sensory data, except that indeed some stuff might be missing. Arguably though there might be something fundamental about interacting with the world in a way that changes its state

2022-06-03 15:42:42 @IntuitMachine @tyrell_turing absolutely agree that it's a gap of unknown size, but that recent progress suggests that it's a lot smaller than many (including me!) would have predicted. LLMs are amazing.

2022-06-03 15:37:17 @tyrell_turing Similarly i think it's hard to infer properties of the physical world--eg which objects move or are squishy--from even very large sets of static images. Eg labeling a horse on a beach as a camel might be less likely if you really "understand" that horses move relative to background

2022-06-03 15:27:47 @tyrell_turing "Gaps" btw avail info from language vs an embodied agent could be pretty hard to fill Like trying to infer the structure of data in some very high dim space (the real world) by looking only at a much lower, but still large, dim projection (the world accessible by lang).

2022-06-03 15:11:48 @KordingLab @tyrell_turing i completely agree. In fact, the fact that most of the information in large sparsely connected brains is in the binary connection patterns is in part why i think structural connectomes (even w/o weights) are useful. https://t.co/MM5iJs3RYc

2022-06-02 20:43:28 @rita_strack there was a great podcast on ML for "nowcasting" (short-term weather predictions)https://t.co/VIiNsShZ2H

2022-06-02 20:25:02 @GaryMarcus @IntuitMachine @mraginsky but more importantly, Chomsky's impact on linguistics derives primarily from work he published before i was an undergrad. AFAIK, his published work on linguistics since the 80s has not been terribly influential. (Correct me if i'm wrong on that)

2022-06-02 20:23:30 @GaryMarcus @IntuitMachine @mraginsky Sure! I stand by all my published work, which admittedly only goes back 32 years. https://t.co/G7fx7GWBhe

2022-06-02 20:20:49 @GaryMarcus @IntuitMachine @mraginsky no i really dont think i am strawmanning the linguistics taught to me in the 80s by former Chomsky acolytes who dominated the UC Berkeley Linguistics dept. That sterile view is literally why i quit linguistics and became a neuroscientist

2022-06-02 20:18:39 @GaryMarcus @IntuitMachine @mraginsky so although i am sympathetic to the argument that LLMs dont provide insight into how humans generate/process/use language, i'm not sure why Chomsky's picture is there. That just doesnt seem a Chomsky-ian objection, at least not c. 1957 or 1965 or even 1984 when i was an ugrad

2022-06-02 20:15:32 @GaryMarcus @IntuitMachine @mraginsky At least in the dark ages when i studied linguistics, the idea that linguists should be studying the brain (much less the mind) was anathema. In fact, the prohibition against thinking about the mind was why i quit linguistics and moved to neuroscience. https://t.co/hbcIGCaNSO

2022-06-02 20:12:15 @GaryMarcus @IntuitMachine @mraginsky I am confused. Chomsky defined language as the set of all grammatical sentences. That was the research program. He proposed a particular set of rules--generative grammar. Turns out a different set (transformers, LLM) apparently do a pretty good job. So he should be happy

2022-06-02 13:57:58 @GaryMarcus So in that sense i think the incredible success of LLMs can be viewed as a (surprising to me!) near vindication of Chomsky's focus on syntax, and even more so of the late great Tali Tishby's 1999 talk "Can Shannon learn semantics?" Apparently yes.

2022-06-02 13:54:39 @GaryMarcus Lang as "mapping from syntax to semantics" was certainly not core to Chomsky's views. AFAIR, his 1957 formulation had no role for semantics; his 1965 had some notion of "deep structure" but i dont think that required the kind of "understanding" we feel is missing in LLMs

2022-05-31 22:48:40 ok i was never a believer in the Skynet scenario but if DALLE-2 has invented a secret language in which eg "Apoploe vesrreaitais" means birds then who knows. So let's not hand over the nuclear codes to DALLE or we may all be crushed like Contarra ccetnxniams luryca tanniounons https://t.co/gwReh18idg

2022-05-29 20:05:06 @JMGrohNeuro @anne_churchland Lga is slightly closer but until recently was notoriously the worst major airport in the US. Supposedly getting better but haven’t experienced it recently. So I always try to go through JFK if possible

2022-05-28 17:16:11 @neuralreckoning @ylecun I think putting it in a format so it looks like a journal TOC would make it easier for people to ease into this new mech. Some REs might include commentaries, N&

2022-05-28 17:14:19 @neuralreckoning @ylecun exactly--multiple "reviewing entities" (REs) will link to the same articles. So you can launch "Dan &

2022-05-28 16:36:52 @neuralreckoning I still believe that we need a mech for tagging interesting papers, ie postpub review not just for truth but also for interest. I just think it should be decoupled from gatekeeping. I am a fan of @ylecun's "reviewing entities"--like postpub journals. https://t.co/Hzo93EXb35

2022-05-28 14:52:57 @neuralreckoning Given that your objection is to reviewing criteria based on anything other than correctness, why not submit all your papers to Plos One? And review for them? Or when you say you have to publish in "legacy journals" do you mean "high prestige journals"? https://t.co/tHN66DUJA2

2022-05-27 23:52:45 @neuralreckoning @tyrell_turing Nice analogy. Well played !

2022-05-27 22:10:17 @tyrell_turing @neuralreckoning Even though i'm not Canadian, I agree that gradual is usually better. I think world history shows that revolutions* are rarely successful and almost always painful. *except revolutions that involve throwing out foreign invaders.

2022-05-27 00:25:28 @tyrell_turing @ylecun which podcast?

2022-05-24 21:35:52 @StevenBratman @WiringTheBrain @tyrell_turing @ylecun yes, not just competition but also cooperationOur success as a species is probably largely due to language, which presumably evolved to enable more sophisticated forms of cooperation. ("You chase the antelope toward me, i'll be waiting here in the tree")

2022-05-24 21:17:41 @tyrell_turing @ylecun Acquisition of social knowledge clearly requires a long training period (to get to know others around you) and a lot of behavioral flexibility. If your social strategy is too inflexible, it is easily defeated. You have to be able to adapt to those around you

2022-05-24 21:13:43 @tyrell_turing @ylecun Yes, social intelligence has been the key driver over the last 5-10Myrs (or more) of primate cog evolution. We have been in an arms race with conspecifics to do better at modeling others' behavior. Who will attack? Who will cooperate? Whom can I trust?

2022-05-24 18:21:23 @CSHL @JBorniger @ArkarupBanerjee Congratulations!

2022-05-24 15:57:14 @JulioMTNeuro In many invertebrate circuits intrinsic firing patterns are highly regulated, modulated and absolutely central to function. Famously the lobster stomatogastric ganglion. https://t.co/rpdm7Fcpi9

2022-05-24 14:55:47 @Jobamey @ylecun Social structure.

2022-05-24 13:22:16 @ylecun yes, adaptability--many things change too fast to encode in DNA. Eg place cells are innate, but their content (what is where) varies. Language is innate, but you need to learn the words of your language. And you need to learn social structure (who is who in your troop)

2022-05-24 11:56:36 @SunnyBe4r @anne_lauscher @HenkPoley @_florianmai @seb_ruder yep that's pretty much what we do here https://t.co/Vx624XC6IR

2022-05-23 21:45:39 @FelixHill84 @ylecun I'm not sure human success arises from being particularly "clever" but rather mostly from language, esp our ability to accumulate knowledge over generations. I think language is a specialized skill unique to humans, in the same way echolocation is specialized in bats. Not so hard

2022-05-23 20:31:47 @ylecun True, but humans are outliers in the amount of data they require. Most mammals require much less. Eg if you are satisfied with feline-level perception, it's more like 3 months * ... * 10 fps = 38.9 million. Fish and bugs are even faster (~ 0) https://t.co/xGRUFiZZyG

2022-05-23 20:30:59 @ylecun https://t.co/xGRUFiZZyG

2022-05-23 20:18:40 @ntraft @RobFlynnHere @jjsakon @KishavanBhola @ThomasMiconi @ylecun the DNA in the genome encodes the proteins. More generally, it encodes the cellular expression of these proteins that enable the brain to wire up properly. The code for reading out proteins is the same for all life on earth

2022-05-22 19:49:12 @KordingLab @NeuroPolarbear In my experience, if a paper posted to biorxiv gets X units of engagement/interest, that same paper gets a 5-10X bump in interest if it appears in a high profile journal 1-2 yrs later. I wish that weren't the case but it still appears to be

2022-05-20 16:09:27 It's worth adding that Chuck really deserved to share in that Nobel prize, not bcs of patch clamp recording, but bcs his much more elegant (though ultimately less useful) "noise analysis" deduced single channel conductances years earlier. But he never had any regrets.

2022-05-20 16:05:32 @kaznatcheev i agree it shouldnt be controversial not to put your name on papers you didnt contribute to. But somehow "paying the bills" is thought to qualify as a contribution.

2022-05-20 15:54:18 My pd PI Chuck Stevens declined to be an author on the Nobel Prize winning first paper by Neher and Sakmann--work done in his lab at Yale--bcs he "didnt contribute much". (He also didnt put his name on some work i did independently in his lab. No Nobel for that work though.) https://t.co/MdDk8sqykv https://t.co/vHichlew60

2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…

2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius

2022-10-28 12:43:25 @WiringTheBrain @bdanubius

2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3

2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly. “The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R

2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &

2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it people would compete for providing *valuable/useful* reviews. And allowing any interested reader to self-select as a (post pub) reviewer

2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous

2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.

2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.

2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)

2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)

2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?

2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs... https://t.co/LG0CHdRAsH

2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9

2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution and that's what motivated them to do these expts and so they're sharing their actual motivation? You may question whether it IS relevant but: https://t.co/vFHS5k2OAh

2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j

2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone -->

2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…

2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s

2022-10-24 19:16:18 I just contributed to @actblue But i am reluctant to contribute againWhy you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!!*** Plz provide opt out option AT SOURCE if you want continued engagement ***

2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"

2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both

2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?

2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!

2022-10-24 02:42:32 @ChurchillMic luckily we have just the analogy for you in the white paper. Briefly: The Wright brothers werent trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fish. AGI is a misnomer. What people want is AHI. ("general" -->

2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper which was: "Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers."

2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit. Just my opinion

2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result: "Hey Feynman, did you ever read a paper in Acta Mathematica that directly changed the way you did something??" If no, then no need for physicists to learn any math, eh?

2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI"@gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj

2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuro. Big ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so good. So q is if all future big advances are endogenous or if neuro still has more to offer

2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &

2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problem. https://t.co/zpkVADGSlK https://t.co/0rGLQim1hm

2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. what we really mean is A-Human-Intelligence. Bees are incredible but if we want to mimic HUMAN intel mice are closer https://t.co/A61XAQC4z5

2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design

2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structures. Finches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fast https://t.co/AgCLTUuHJ3

2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algo. I agree. That's not how it works. Rather, we peek into neuro for inspiration &

2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. un-nuanced provocation is a good way to build engagement. I should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens

2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <

2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen to believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds

2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab Lots humans can do that animals can't (and vice versa). But most of the interesting ones are IMO coupled to language, which likely evolved 100K-1M yrs ago--a blink. Thus a few tweaks enabled a large change in ability. Like qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2

2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap in to all the things we've learned in neuroscience in the last 3 decades

2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people. However, when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible pool. We will need a system for assessing reviewer quality

2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &

2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor. His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY

2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI

2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer! It turns out that ideas take years or decades to percolate from basic science to the clinic. So understanding the influences will always seem like archeology

2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields. (BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/minimal influence from neuro)

2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?

2022-10-22 14:40:50 @garrface hmm. If memory serves, S&

2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while

2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet...if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interesting. But he also raises an interesting question about future neuro-->

2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless

2022-10-22 14:11:12 @KordingLab No. Neuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need", it's a great way to find hints as to what path to follow https://t.co/Q8QczhC3zu

2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering

2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yF FIN+1

2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4

2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro

2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG

2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too

2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computer. My belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key

2022-10-22 00:35:30 @nidhi_s91 love to hear about it. To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting...neural networks and comp neuro used to be very tightly integrated

2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields

2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls"https://t.co/INYhrLWmDD

2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute

2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another

2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview of neuroscience, no? https://t.co/AZCQZ38NRF

2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).

2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"

2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode it is a separate question. If you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform

2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine

2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best. Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin

2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropy. There are other common uses of that word, and this might be at the root of some of the confusion here
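The Shannon sense of "information" invoked in this thread can be made concrete with a few lines of Python. This is a minimal sketch of my own (the function name and example distributions are illustrative, not from the tweets): entropy measures the average surprise of a distribution, in bits.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution; 0*log(0) is treated as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit of information per flip; a heavily biased
# coin carries less, because its outcome surprises us less.
fair = shannon_entropy([0.5, 0.5])    # 1.0 bit
biased = shannon_entropy([0.9, 0.1])  # about 0.47 bits
```

In this sense a signal "contains" a quantifiable amount of information regardless of whether a given receiver knows how to decode it, which is the distinction the thread is drawing.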

2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?

2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words describe how it happens. There was a recent discussion about whether it's correct to call cells "machines", which imo was also just semantics.

2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)

2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?

2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedal. Would you not say that his genes "determined" his (quadrupedal) phenotype?

2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration

2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent

2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?

2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB

2022-10-20 14:21:01 White paper — Rallying cry for NeuroAI to work toward an Embodied Turing Test! Let’s overcome Moravec’s paradox: Tasks “uniquely” human like chess and even language/reasoning are much easier for machines than “easy” interaction with the world, which all animals perform. https://t.co/ehKRWl7rgJ

2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher-level units like areas etc, I agree, is a bit less clear.

2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...

2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C

2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.

2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....

2022-10-29 14:22:11 @IntuitMachine for better or worse AFAIK there is no technical use for that word so we can abuse it at will

2022-10-29 14:18:39 @IntuitMachine perhaps a worthwhile campaign to have started in 1950 to nip possible misunderstandings in the bud, but i think that ship has sailed #mixedmetaphors. at this point i think it's best to just define words clearly and move on

2022-10-29 14:11:49 @robwilliamsiii @MillerLabMIT https://t.co/SlhsxSrP53

2022-10-29 14:10:30 @IntuitMachine That said, the word "information" appears prominently on page 2. And within just a few years everyone was calling it "information theory," including eg EEs like Robert Fano (1950)https://t.co/qpp07m1Su9 https://t.co/BHpLY9TTKs

2022-10-29 14:05:38 @IntuitMachine I only use information in the formal Shannon sense. A useful concept, but can be misused. Always a risk when a popular word acquires a technical meaning, like "significance" in stats. Or even "temperature"...40F skiing in Utah *feels* a lot less cold than on a foggy day in SF!

2022-10-29 13:03:26 Conclusions from latest MAPseq paper https://t.co/vsX9T7m4K6

2022-10-29 13:02:32 RT @dinanthos: This organization enables parallel computations and further cross-referencing, since olfactory information reaches a given t…

2022-10-29 13:02:29 RT @dinanthos: We propose that olfactory information leaving the bulb is relayed into parallel processing streams (perception, valence and…

2022-10-29 12:38:48 @tdietterich @ylecun I imagine it is largely preprogrammed. Just as human empathy is largely preprogrammed

2022-10-29 12:24:29 RT @kevincollier: This is as good as everybody says, really feels like the single most essential reading on today's big news.https://t.co/

2022-10-28 19:04:15 RT @kohn_gregory: There's been a lot of attention surrounding this study, which shows that zebrafish lacking action potentials still develo…

2022-10-28 12:43:40 @kendmil @WiringTheBrain @bdanubius

2022-10-28 12:43:25 @WiringTheBrain @bdanubius

2022-10-28 04:12:08 Exciting application of MAPseq in olfactory cortex with Xiaoyin Chen, @joe6783 and @dinanthos https://t.co/TFYGOSnQc3

2022-10-26 22:30:31 @LKayChicago @MillerLabMIT @vferrera @PessoaBrain @NicoleCRust Exactly. “The Wave” is generated by a simple local rule. Nothing magical. https://t.co/HKeLLhHt1R
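The "simple local rule" point can be sketched in a few lines of Python. This is a toy model of my own, not from the tweet: each spectator stands for one tick when their left-hand neighbor stood on the previous tick, and a stadium-wide wave emerges with no central coordinator.

```python
def step(state):
    # Stand (1) for one tick iff your left neighbor stood last tick; else sit (0).
    return [1 if i > 0 and state[i - 1] == 1 else 0 for i in range(len(state))]

state = [1, 0, 0, 0, 0]  # one section starts the wave
history = [state]
for _ in range(3):
    state = step(state)
    history.append(state)
# The single "standing" pulse travels rightward one seat per tick.
```

The global pattern (a traveling wave) is nowhere in the rule itself; it is purely an emergent consequence of local interactions.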

2022-10-26 20:38:21 @PessoaBrain Indeed this is a great example of how simple local rules--stand up &

2022-10-26 19:19:28 @furthlab This rewards people for doing the public service of reviewing. To gamify it people would compete for providing *valuable/useful* reviews. And allowing any interested reader to self-select as a (post pub) reviewer

2022-10-26 19:11:51 @furthlab @_dim_ma_ There is currently no system for saying that across journals your reviews are considered to be among the top 1% most valuable of all reviewers by readers. Especially in a way that allows a reviewer to remain anonymous

2022-10-26 18:12:21 @furthlab I don’t think the problem is too many papers per scientist.

2022-10-26 16:30:14 @SteinmetzNeuro @OdedRechavi My hope would be to defund publishing as much as possible, though i agree that if there is money to be spent it should go to editors first and then reviewers.

2022-10-26 16:18:19 @cshperspectives @wjnawrocki i guess for widespread uptake by the community there would have to be a very user-friendly front end. (I have no idea how to interact with ORCID)

2022-10-26 16:08:31 @cshperspectives @wjnawrocki having a centralized repository for these reviews, along with a mechanism so that even anonymous reviews could remain linked to the reviewer, would be a great step forward. (also a way to up- and down-vote reviewers)

2022-10-26 15:03:48 @cshperspectives @wjnawrocki really? how would it work? if i were to write a 4 paragraph review of a published paper (or preprint), where would i post it and how would i get a DOI? Is there a "biorxiv-reviews"?

2022-10-26 14:55:57 @cshperspectives @wjnawrocki make reviews citable with their own DOIs...https://t.co/LG0CHdRAsH

2022-10-26 14:54:41 this would go a long way to solving the "how do we get enough reviewers?" problem! https://t.co/zrICWrvYD9

2022-10-26 14:50:32 @behrenstimb or maybe one (@bdanubius) of the authors has been thinking about the relationship btw AI, learning and evolution, and that's what motivated them to do these expts, and so they're sharing their actual motivation? you may question whether it IS relevant, but: https://t.co/vFHS5k2OAh

2022-10-26 14:37:14 @cshperspectives @wjnawrocki https://t.co/RfvokFe96j

2022-10-26 14:36:54 @OdedRechavi how about rewarding the reviewers w/o paying them? Set up a system so top reviewers could be acknowledged for service to the community--something they can put on their resumes. And open up reviewing to everyone-->

2022-10-26 14:27:05 RT @joshdubnau: Do you think it is sound career advice to encourage a postdoc looking for a TT job or assistant professor hoping for tenure…

2022-10-24 19:39:27 @davidchalmers42 just something to think about https://t.co/ImGVmdx5td https://t.co/RaJkolrj4s

2022-10-24 19:16:18 I just contributed to @actblue. But i am reluctant to contribute again. Why, you ask? Since contributing i have been inundated with texts and emails. Literally more than TWO DOZEN since last night!! *** Plz provide opt out option AT SOURCE if you want continued engagement ***

2022-10-24 04:00:48 @mezaoptimizer @pfau @ylecun @KordingLab yeah no analogy is perfect but going with this one i'd say it's as though modern physicists argued "we can do all the physics we need to by just reading Feynman...no need to learn any math beyond what we absorb from that"

2022-10-24 03:48:28 @mezaoptimizer @pfau @ylecun @KordingLab i dont really know what "researching neuroAI" would mean. We can research neuro, and apply what we learn to AI (and vice versa). To do either requires deep knowledge of both

2022-10-24 03:22:40 @pfau @ylecun @KordingLab and yet that's kinda the point. Feynman benefitted from the deep understanding of math learned during his training so didnt need Theorem 6a from Acta Math. yet the fact that he didnt need to keep up with the latest doesnt mean that later physicists could ignore math right?

2022-10-24 03:09:53 @pfau @ylecun @KordingLab Touché!

2022-10-24 02:42:32 @ChurchillMic luckily we have a just the analogy for you in the white paperbriefly: The Wright brothers werent trying to achieve "bird-level flight," ie birds that can swoop into the water to catch a fishAGI is a misnomer. What people want is AHI. ("general" -->

2022-10-24 01:50:13 @memotv @pfau also different from a major point of the white paper which was:"Historically many people who made major contributions to AI attribute their ideas to neuro. Nowadays fewer young AI researchers know/care about neuro. It'd be nice if there were more bilingual researchers

2022-10-24 01:20:38 I think there would be a lot less animosity in Twitter debates if they let you write “I think” without it counting toward your character limit.Just my opinion

2022-10-24 00:22:32 @pfau @ylecun @KordingLab I would say this is like asking a physicist what recent paper in math they read that enabled some result:"Hey Feynman, Did you ever read a paper in Acta Mathematica that directly changed the way you did something??"If no, then no need for physicists to learn any math, eh?

2022-10-24 00:17:39 @criticalneuro @tyrell_turing i think @pfau denies that "historically neuro contributed to AI"@gershbrain is also kind of a contribution-denier, though willing to concede the possibility of "soft intellectual contributions" https://t.co/ByyUFfunjj

2022-10-24 00:06:52 @criticalneuro @neuroetho @NicoleCRust IMO depends on what you mean by "advances". Agree that 99.9% of papers at NeurIPS do not require neuroBig ideas from neurosci might take 100 NeurIPS units to become useful bcs SOTA is so goodSo q is if all future big advances are endogenous or if neuro still has more to offer

2022-10-23 14:22:56 @neuroetho @criticalneuro @NicoleCRust yes I think many are arguing against hoping some specific Fig. 6a of some paper will directly lead to a x2 increase in some algoI agree. That's not how it worksRather, we peek into neuro for inspiration &

2022-10-23 14:20:54 In prepping for this upcoming discussion on LFPs @NicoleCRust reminds us of this 1999 special issue of Neuron all about oscillations and the binding problemhttps://t.co/zpkVADGSlK https://t.co/0rGLQim1hm

2022-10-23 14:17:12 @davidchalmers42 i dont think there is a single linear metric by which we can rank cognitive capacities, which is why the "general" in AGI is misleading. what we really mean is A-Human-IntelligenceBees are incredible but if we want to mimic HUMAN intel mice are closerhttps://t.co/A61XAQC4z5

2022-10-23 14:09:09 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab so i'm not sure that i disagree with what is written. I think they are talking about what i would call phenotypic behavioral discontinuities, whereas if one is building a system what matters is how much you need to tweak the parts and overall design

2022-10-23 14:06:07 @jeffrey_bowers @aniketvartak @PaulTopping @KordingLab I think the point is that you can have a discontinuity in abilities with only a few tweaks to the underlying structuresFinches going from hunting soft bugs to cracking hard seeds is a huge behavioral discontinuity but happened v. fasthttps://t.co/AgCLTUuHJ3

2022-10-23 13:10:35 @pfau @martingoodson i think you are arguing against hoping some specific Fig. 6a of some neuro paper will directly lead to a x2 increase in some algoI agree. That's not how it worksRather, we peek into neuro for inspiration &

2022-10-23 12:29:42 @neuropoetic @NeuroChooser @KordingLab Yes. un-nuanced provocation is a good way to build engagementI should try posting "Neuroscience is all you need. AI is off the rails and needs a reset. Scale is useless" and see what happens

2022-10-23 12:23:14 @Isinlor @gershbrain the amazing abilities of a bee, with <

2022-10-23 04:04:22 @jeffrey_bowers @aniketvartak @KordingLab Do you imagine the discontinuity occurred before or after we diverged from chimps (4 Myrs ago)? Although i happen believe a lot of what happened since then is due to language, my fundamental point (that our divergence is but an evolutionary tweak, like finch beaks) still holds

2022-10-23 01:31:13 @aniketvartak @jeffrey_bowers @KordingLab Lots humans can do animals can't (and vice versa)But most of the interesting ones are IMO coupled to language which likely evolved 100K-1M yrs ago--a blink Thus a few tweaks enabled a large change in abilityLike qualitative differences in Galapagos finch beak abilities https://t.co/S5B2t1dBB2

2022-10-22 20:00:49 @skeptencephalon I agree. One of the goals of rekindling interest in NeuroAI is to tap in to all the things we've learned in neuroscience in the last 3 decades

2022-10-22 19:55:13 @MatteoPerino_ @aemonten @mbeisen Right now, editors only tap established people However when it comes to establishing technical validity, a good postdoc or even senior grad student could do the job, greatly expanding the possible poolWe will need a system for assessing reviewer quality

2022-10-22 19:47:55 RT @ylecun: @pfau @KordingLab You are wrong.Neuroscience greatly influenced me (there is a direct line from Hubel &

2022-10-22 19:01:03 A sad day. Chuck was one of the greats. He was an inspiration as a scientist and a mentor.His contributions over more than half a century of neuroscience were broad and deep. He will be missed. https://t.co/LWXF9rUDYY

2022-10-22 16:07:09 @neuropoetic @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau @ChenSun92 biological plausibility is important for the application of AI to neuro, but doesnt really come up for the application of neuro to AI

2022-10-22 15:50:55 @criticalneuro @summerfieldlab @KordingLab @gershbrain @NicoleCRust @pfau This question comes up in funding biology. Why bother funding basic stuff--let's just solve cancer!It turns out that ideas take years or decades to percolate from basic science to the clinic. So the understanding the influences will always seem like archeology

2022-10-22 15:23:13 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau Indeed, AI is SO intertwined with Neuro that it doesnt make any sense to try to disentangle them historically. The whole point is we need people trained in both fields(BTW, that's only true of modern AI/ML/ANNs. GOFAI "advanced" w/minimal influence form neuro)

2022-10-22 14:53:12 @summerfieldlab @KordingLab @gershbrain @criticalneuro @NicoleCRust @pfau but transformers solve a problem posed by RNNs, which were definitely neuro-inspired. And given links btw (neuro-inspired) Hopfield nets and transformers, perhaps the connection to neuro is stronger than usually appreciated?

2022-10-22 14:40:50 @garrface hmm. If memory serves, S&

2022-10-22 14:35:31 @criticalneuro @gershbrain my view is that the NeuroAI history involves big ideas slowly percolating from neuro to AI. Sometimes it takes years or decades for them to be engineered into something useful. But unless you think "scale is all you need" we are gonna keep needing new ideas for a while

2022-10-22 14:33:32 @criticalneuro @gershbrain @gershbrain can weigh in about whether i misunderstood his tweet...if so, then i wasted 30 minutes summarizing my view of the history of NeuroAI, which hopefully some people might find interestingbut he also raises an interesting question about future neuro-->

2022-10-22 14:17:23 @NeuroChooser @KordingLab i would agree except i dont think it's "just" engineering. Engineering is an essential and equal partner to the underlying inspiration...without proper implementation and development, those ideas are useless

2022-10-22 14:11:12 @KordingLab NoNeuro has historically been essential for many/most of the major advances. Unless you think "scale is all you need" then it's a great way to find hints as to what path to followhttps://t.co/Q8QczhC3zu

2022-10-22 14:08:01 @gershbrain @josephdviviano i agree with that (much weaker) formulation...neuro is not about delivering "widgets" to AI. Neuro can inspire big ideas. It can hint about what the right path is. But to make these ideas work requires engineering

2022-10-22 13:56:00 i should have cited this very nice summary of the history https://t.co/PVwiZBa2yFFIN+1

2022-10-22 13:53:39 @gershbrain yes i do think that... https://t.co/wLvNYYHiH4

2022-10-22 13:52:47 But stepping back: I think it's not coincidental that the early, major, advances in ANNs were made by people with feet in both communities. When NeurIPS was founded, the ANN community was indistinguishable from comp neuro

2022-10-22 02:02:19 @benj_cardiff @KordingLab He is not the first to say that! Luckily, we addressed that by arguing that we would be well advised to study ornithology if our goal were to endow a machine with "bird-level flight", eg "the ability to fly through dense forest foliage and alight gently on a branch" https://t.co/yo7JnGGVSG

2022-10-22 01:18:13 @neurograce @VenkRamaswamy @nidhi_s91 Cosyne is attracting more AI these days too

2022-10-22 00:37:53 @nidhi_s91 here, specifically we are talking about the energy efficiency of neural processing. A brain can do eg object recognition with a lot less power than a computerMy belief, shared with many, is that spiking (along with perhaps stochasticity, eg of synaptic transmission) is key

2022-10-22 00:35:30 @nidhi_s91 love to hear about it.To some extent, this is a call for AI to return to an earlier time when neuro and AI were much tighter. As a grad student in comp neuro, NeurIPS was my go-to meeting...neural networks and comp neuro used to be very tightly integrated

2022-10-22 00:06:27 @nidhi_s91 agree. all important and interesting fields

2022-10-22 00:05:50 @nidhi_s91 studying real animal bodies and how they interact with the environment is key to building robots. Inspired by "How to walk on water and climb up walls"https://t.co/INYhrLWmDD

2022-10-22 00:01:45 @nidhi_s91 that said, i am greatly inspired by ethology and agree it has a great deal to contribute

2022-10-21 23:59:34 @nidhi_s91 the overall goal of the paper is to galvanize excitement about NeuroAI. Historically neuro drove many key advances in AI, but one might ask what remains? Algos/circuits that address Moravec's Paradox (via embodiment) is one possible deliverable. Energy efficiency is another

2022-10-21 23:47:10 @nidhi_s91 the energy efficiency of neural circuits has indeed been studied for decades, eg this great paper by Laughlin. But studying energy efficiency of neural circuits does seem to fall squarely within the purview neuroscience, no? https://t.co/AZCQZ38NRF

2022-10-21 23:38:26 @criticalneuro @Abel_TorresM @summerfieldlab The primary target for funding would be govt not industry (though it'd be great if industry ponied up as well).

2022-10-21 23:34:20 @summerfieldlab AFAIK, there was little attempt in the Human Brain Project to "abstract the underlying principles"

2022-10-21 19:53:01 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical well in the Shannon information sense the information is there. How to decode is a separate questionIf you listened to the raw signal received by your cellphone it wouldnt mean anything to you. Luckily your phone knows how to decode it into an acoustic waveform

2022-10-21 19:22:56 @sanewman1 @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical i guess this reflects a very different understanding of how biology works from mine

2022-10-21 19:21:37 @SimsYStuart @PaulTopping @sanewman1 @ehud @kohn_gregory @GaryMarcus @SpeciesTypical I think the evidence for transgenerational epigenetic inheritance (Lamarckian evolution) playing an important role in humans (or most other animals) is very limited at best.Although Lamarck is a better algorithm, nature mostly seems to content itself with Darwin

2022-10-21 17:59:21 @kohn_gregory @GaryMarcus @sanewman1 @PaulTopping @ehud @SpeciesTypical i am using "information" in the technical (Shannon) sense, closely related to entropythere are other common uses of that word, and this might be at the root of some of the confusion here

2022-10-21 17:56:48 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical I am not clear how the fact that ink patterns might as well be stains is relevant here...can you clarify?

2022-10-21 17:52:37 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical it is semantics in that we know a lot about how these things work, so we're discussing what words to describe how it happens.There was a recent discussion about whether it's correct to call cells "machines" which imo was also just semantics.

2022-10-21 17:47:04 @kohn_gregory @sanewman1 @PaulTopping @ehud @GaryMarcus @SpeciesTypical If i hand you a long set of instructions in Hungarian, i expect they will be challenging for you to follow (assuming you dont speak Hungarian). Nonetheless, i would say that the information is still there in the instructions. (not a perfect analogy but perhaps useful?)

2022-10-21 17:42:38 @sanewman1 @SimsYStuart @PaulTopping @ehud @kohn_gregory @GaryMarcus @SpeciesTypical is there some other word that captures your understanding of the relationship btw geno/phenotype better? As a fellow biologist i assume we mostly agree on what that relationship is, so i guess we are just discussing word choice/semantics?

2022-10-21 17:37:35 @sanewman1 @GaryMarcus @PaulTopping @ehud @kohn_gregory @SpeciesTypical well, my phenotype includes being primarily bipedal, whereas my dog is mostly quadrupedal. Would you not say that his genes "determined" his (quadrupedal) phenotype?

2022-10-20 20:48:08 @MelMitchell1 @mpshanahan @LucyCheke yes good point! We should add that to the next iteration

2022-10-20 20:13:50 @DavidJonesBrain @jeffrey_bowers @KordingLab I would include neurology as part of neuroscience. #bigtent

2022-10-20 18:39:15 @patrickmineault @KordingLab @seanescola ?

2022-10-20 18:37:41 @jeffrey_bowers @KordingLab my view is that much of what is needed is already present in animals (Moravec's paradox), which is not the primary focus of most psychology work today https://t.co/nTWXd3JGuB

2022-10-20 14:21:01 White paper: a rallying cry for NeuroAI to work toward the Embodied Turing Test! Let's overcome Moravec's paradox: tasks "uniquely" human, like chess and even language/reasoning, are much easier for machines than the "easy" interactions with the world which all animals perform. https://t.co/ehKRWl7rgJ

2022-10-19 21:40:40 @PessoaBrain @MillerLabMIT @LKayChicago @NicoleCRust By parts, I meant, synapses, channels, neurons. We know an awful lot about molecular and cellular neuroscience. How they are organized into higher level units like areas etc I agree is a bit less clear.

2022-10-19 20:47:25 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust sure it's all about figuring out how computation emerges from those parts...but IMO, it's worth keeping all that we learned about those parts (and how they are organized into circuits, etc) in mind as constraints...

2022-10-19 20:35:52 @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust i think we know an awful lot about the parts that make up brains. Just not how they compute.... https://t.co/P2FGaui07C

2022-10-19 19:57:40 @jonasobleser @MillerLabMIT @PessoaBrain @LKayChicago @NicoleCRust flattered to be compared with the GOAT but i'm not sure that most people who know me would characterize my discussion style as #ropeadope.

2022-10-19 19:55:26 @LKayChicago @MillerLabMIT @PessoaBrain @NicoleCRust hopefully we will all walk away with a shared understanding of what words like "organizing effects", "cause" and "epiphenomenon" mean in this context....

2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se

2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?

2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood

2022-11-17 19:23:57 @pchiusano yeah, I guess the question is what is the proposed use case? As I believe @ylecun tweeted, you don't take your hands off the steering wheel, but it's potentially helpful for letting you drive.

2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression --far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed book test
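The lossy-vs-lossless distinction drawn in the tweet above can be illustrated with a toy sketch (mine, not from the thread): lossless compression reconstructs every fact exactly, while a lossy "gist" is smaller still but the dropped details are gone for good.

```python
import zlib

# Toy stand-in for "the facts of the web": highly redundant text.
facts = b"water is wet. grass is green. " * 100

# Lossless: the compressed form round-trips to the original exactly.
lossless = zlib.compress(facts)
assert zlib.decompress(lossless) == facts
assert len(lossless) < len(facts)

# Lossy: keep only the gist; no decompressor can recover the rest.
gist = facts[:30]
assert facts.startswith(gist) and gist != facts
```

The analogy: an LLM stores something like `gist`, not `lossless`, so queried "closed book" it reconstructs plausible but imperfect facts.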

2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing a pretty good job BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)

2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7

2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseq. Projections of >

2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figs: "Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons", J. J. Hopfield, PNAS 1984

2022-11-15 02:27:02 @philippschmitt "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", J. J. Hopfield, PNAS 1982

2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time...would more grant $$ cause us to have more ideas? Like: Child: "There is Mother's and Father's Day, but why not Children's Day?" Parent: "Every day is Children's Day"

2022-11-14 20:32:34 @philippschmitt Cool! But i'm surprised not to see a figure from Hopfield 1982...

2022-11-14 18:15:54 @NicoleCRust @PessoaBrain "Progress depends on the interplay of techniques, discoveries, and ideas, probably in that order of decreasing importance" -- Sydney Brenner. Clearly new techniques--shiny or not--aren't sufficient. But IMO they are necessary

2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? While I enjoy playing with LLMs claiming I was born in the UK, it would be nice if they could report confidence https://t.co/rLI2Sv7kck

2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!

2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no" to email cuz of spamming!

2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong

2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply. I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)

2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing the risk/benefit profile of new drugs. Is there a comparable well-defined standard for rolling out tech like LLMs? How do we compare risks and benefits rigorously?

2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9

2022-11-22 02:50:13 @kohn_gregory @WiringTheBrain i guess this is converging to a semantic discussion about the precise meaning of the word "causal" in this context. AFAIK most traits (eye color, bipedality, etc) are determined by DNA not oocytic factors no?

2022-11-22 01:23:33 @kohn_gregory @WiringTheBrain well, we can modify @WiringTheBrain's question a bit and control for issues arising from incompatible oocytes. So if we put a chihuahua nucleus in a St Bernard oocyte we get basically a chihuahua, right? And certainly the next generation will be a perfect chihuahua, no? thoughts? https://t.co/WDliPyZQAD

2022-11-22 00:15:39 RT @HopfieldJohn: Francis 'Frank' Schmitt already had an amazing view in 1962 of where neuroscience needed to go if you were serious about…

2022-11-21 21:39:10 @KanakaRajanPhD @IcahnMountSinai @SinaiBrain great news--congratulations!

2022-11-21 15:59:15 @kohn_gregory @NaturalSkeptik @WiringTheBrain i'm still lost. i guess sometimes twitter isnt the ideal medium for exchanging scientific ideas

2022-11-21 15:31:14 @kohn_gregory @NaturalSkeptik @WiringTheBrain i dont understand what you are saying. Using your example, given m&

2022-11-21 15:28:15 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Sadly, i see no evidence that the effectiveness of (mis)information rests on it appearing to be from a reputable scientific source. The viral stuff is usually a trustworthy-looking talking head spouting nonsense. or a headline

2022-11-21 15:10:27 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Again: how do we assess possible impacts? Was email a mistake? the internet? And would your assessments have been the same in 1990? 2000? and 2010? (I'm old enough to remember when we thought the internet was democratizing. See eg "Spring, Arab")

2022-11-21 15:07:12 @kohn_gregory @NaturalSkeptik @WiringTheBrain similarly if i specify the genome but not whether the conserved factors (CF) are species matched, your uncertainty about the outcome will be smaller than if i specify the CF but not the genome. So the genome contains a lot more information about the final outcome
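The uncertainty-reduction argument in this tweet is exactly mutual information, I(X;Y) = H(Y) - H(Y|X). A toy model of my own devising (all numbers assumed, not from the thread): let the phenotype bit Y equal the genome bit G unless the conserved factors F are species-mismatched, which happens rarely.

```python
from math import log2

def h2(p):
    """Binary entropy in bits of a coin with heads-probability p."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

P_MISMATCH = 0.05        # assumed: conserved factors rarely differ across species

H_Y = 1.0                # phenotype bit is uniform by symmetry
H_Y_given_G = h2(P_MISMATCH)  # residual uncertainty once the genome is known
H_Y_given_F = 1.0             # factors alone say nothing about which bit it is

print(H_Y - H_Y_given_G)  # I(G;Y) ~ 0.71 bits
print(H_Y - H_Y_given_F)  # I(F;Y) = 0.0 bits
```

Under these assumptions the genome carries nearly all the information about the outcome, while the conserved factors carry essentially none on their own, which is the quantitative version of the claim above.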

2022-11-21 15:03:00 @kohn_gregory @NaturalSkeptik @WiringTheBrain One can formulate the question of the relative importance of m &

2022-11-21 14:42:37 @kohn_gregory @NaturalSkeptik @WiringTheBrain in what sense do these conserved factors outside the genome contribute "as much"? Because my intuition is that if we could quantify their contribution it would be relatively tiny, but i confess that i'm not exactly sure how to quantify properly (though i have some ideas)

2022-11-21 14:36:35 @CriticalAI @AwokeKnowing @ASteckley @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian From a public policy POV, do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing risk/benefit profile of new drugs Is there a comparable well-defined standard rolling out tech like LLMs? How do we compare risks and benefits rigorously?

2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9

2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no"to email cuz of spamming!

2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong

2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)

2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? While I enjoy playing with LLMs claiming I was born in UK, but it would be nice if they could report confidence https://t.co/rLI2Sv7kck

2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!

2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se

2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?

2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood

2022-11-17 19:23:57 @pchiusano yeah, I guess the question is what is the proposed use case? As I believe @ylecun tweeted you don’t let your hands off the steering wheel but it’s potentially helpful for letting you drive.

2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression --far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed book test

2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing pretty good a job BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)

2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7

2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseqProjections of >

2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figsNeurons with Graded Response Have Collective Computational Properties like Those of Two-State NeuronsJ. J. HopfieldPNAS 1984

2022-11-15 02:27:02 @philippschmitt Neural Networks and Physical Systems with Emergent Collective Computational AbilitiesJ. J. HopfieldPNAS 1982

2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time...would more grant $$ cause us to have more ideas?like: Child: "There is Mother's and Father's Day, but why not Children's Day?"Parent: "Every day is Children's Day"

2022-11-14 20:32:34 @philippschmitt Cool! But i'm surprised not to see a figure from Hopfield 1982...

2022-11-14 18:15:54 @NicoleCRust @PessoaBrain "Progress depends on the interplay of techniques, discoveries, and ideas, probably in that order of decreasing importance" -- Sydney BrennerClearly new techniques--shiny or not--aren't sufficient. But IMO they are necessary

2022-11-22 02:50:13 @kohn_gregory @WiringTheBrain i guess this is converging to a semantic discussion about the precise meaning of the word "causal" in this context. AFAIK most traits (eye color, bipedality, etc) are determined by DNA not oocytic factors no?

2022-11-22 01:23:33 @kohn_gregory @WiringTheBrain well, we can modify @WiringTheBrain's question a bit and control for issues arising from incompatible oocytes So if we put a chihuahua nucleus in a St Bernard oocyte we get a basically a chihuahua, right? And certainly the next generation will perfect chihuahua, no? thoughts? https://t.co/WDliPyZQAD

2022-11-22 00:15:39 RT @HopfieldJohn: Francis 'Frank' Schmitt already had an amazing view in 1962 of where neuroscience needed to go if you were serious about…

2022-11-21 21:39:10 @KanakaRajanPhD @IcahnMountSinai @SinaiBrain great news--congratulations!

2022-11-21 15:59:15 @kohn_gregory @NaturalSkeptik @WiringTheBrain i'm still lost. i guess sometimes twitter isnt the ideal medium for exchanging scientific ideas

2022-11-21 15:31:14 @kohn_gregory @NaturalSkeptik @WiringTheBrain i dont understand what you are saying. Using your example, given m&

2022-11-21 15:28:15 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Sadly, i see no evidence that the effectiveness of (mis)information rests on it appearing to be from a reputable scientific source. The viral stuff is usually a trustworthy-looking talking head spouting nonsense. or a headline

2022-11-21 15:10:27 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Again: how we assess possible impacts? Was email a mistake? the internet? And would your assessments have been the same in 1990? 2000? and 2010? (I'm old enough to remember when we thought the internet was democratizing. See eg "Spring, Arab")

2022-11-21 15:07:12 @kohn_gregory @NaturalSkeptik @WiringTheBrain similarly if i specify the genome but not whether the conserved factors (CF) are species matched, your uncertainty about the outcome will be smaller than if i specify the CF but not the genome So the genome contains a lot more information about the final outcome

2022-11-21 15:03:00 @kohn_gregory @NaturalSkeptik @WiringTheBrain One can formulate the question of the relative importance of m &

2022-11-21 14:42:37 @kohn_gregory @NaturalSkeptik @WiringTheBrain in what sense do these conserved factors outside the genome contribute "as much"? Because my intuition is that if we could quantify their contribution it would be relatively tiny, but i confess that i'm not exactly sure how to quantify properly (though i have some ideas)

2022-11-21 14:36:35 @CriticalAI @AwokeKnowing @ASteckley @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian From a public policy POV, do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing risk/benefit profile of new drugs Is there a comparable well-defined standard rolling out tech like LLMs? How do we compare risks and benefits rigorously?

2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9

2022-11-20 03:38:56 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Fair enough! I guess in hindsight we should have said a big "no"to email cuz of spamming!

2022-11-19 23:35:08 @noUpside @GaryMarcus @ylecun @mrgreene1977 @katestarbird @sinanaral @ProfNoahGian i had always assumed that the cost of running a troll farm (or DIY organic artisanal trolling) was pretty low, but maybe i'm wrong

2022-11-19 23:28:41 @GaryMarcus @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian I guess this assumes a supply-side model of disinformation, ie what limits the amount of disinformation in the world is supply I'm more of a Keynesian--i think the limiting factor is mostly demand with perhaps additional constraints imposed by supply chains (ie social networks)

2022-11-18 17:27:44 @ylecun How hard would it be to modify LLMs so they can retain an accurate internal estimate of the veracity of their factual claims? While I enjoy playing with LLMs claiming I was born in UK, but it would be nice if they could report confidence https://t.co/rLI2Sv7kck

2022-11-18 01:38:26 @GaryMarcus Yeah I miss the good old days when the mark of education was the ability to recite the Iliad from memory--true test of brilliance. Dang new fangled written scrolls done ruined all that!

2022-11-17 23:38:28 Great opportunity! https://t.co/BgQ58750Se

2022-11-17 23:25:33 @NotionHQ Any comments on how it differs from @lexdotpage which I have been enjoying?

2022-11-17 23:03:33 @joshdubnau The causes for this increase remain poorly understood

2022-11-17 19:23:57 @pchiusano yeah, I guess the question is what is the proposed use case? As I believe @ylecun tweeted you don’t let your hands off the steering wheel but it’s potentially helpful for letting you drive.

2022-11-17 16:06:10 @GaryMarcus @MetaAI no, it has a lossily compressed version of the facts of the web in its system. If it had every fact perfectly it would be lossless compression --far less. Just like you form a compressed version of what you learn in class, which is assessed during a closed book test

2022-11-17 15:46:36 @GaryMarcus @MetaAI you are asking it to take a closed-book test, but perform as well as it would if the test were open book. I dunno, it seems to me it's doing pretty good a job BS-ing. (Reminds me of tests i took in high school when i didnt study. Answers mostly reasonable but wrong)

2022-11-16 15:09:22 it was great fun talking to @Embodied_AI ! Far ranging and thoughtful discussion. https://t.co/gqIcsjt4d7

2022-11-15 18:24:32 RT @TonyZador: Check out Li Yuan's poster on axonal BARseq. Projections of >

2022-11-15 02:28:18 @philippschmitt Hopfield 84 has somewhat better figs: Neurons with Graded Response Have Collective Computational Properties like Those of Two-State Neurons, J. J. Hopfield, PNAS 1984

2022-11-15 02:27:02 @philippschmitt Neural Networks and Physical Systems with Emergent Collective Computational Abilities, J. J. Hopfield, PNAS 1982

2022-11-14 22:33:54 @PessoaBrain @NicoleCRust one could argue that we are all having ideas all the time...would more grant $$ cause us to have more ideas? Like: Child: "There is Mother's and Father's Day, but why not Children's Day?" Parent: "Every day is Children's Day"

2022-11-14 20:32:34 @philippschmitt Cool! But i'm surprised not to see a figure from Hopfield 1982...

2022-11-14 18:15:54 @NicoleCRust @PessoaBrain "Progress depends on the interplay of techniques, discoveries, and ideas, probably in that order of decreasing importance" -- Sydney Brenner. Clearly new techniques--shiny or not--aren't sufficient. But IMO they are necessary

2022-11-22 02:50:13 @kohn_gregory @WiringTheBrain i guess this is converging to a semantic discussion about the precise meaning of the word "causal" in this context. AFAIK most traits (eye color, bipedality, etc) are determined by DNA not oocytic factors no?

2022-11-22 01:23:33 @kohn_gregory @WiringTheBrain well, we can modify @WiringTheBrain's question a bit and control for issues arising from incompatible oocytes So if we put a chihuahua nucleus in a St Bernard oocyte we get a basically a chihuahua, right? And certainly the next generation will perfect chihuahua, no? thoughts? https://t.co/WDliPyZQAD

2022-11-22 00:15:39 RT @HopfieldJohn: Francis 'Frank' Schmitt already had an amazing view in 1962 of where neuroscience needed to go if you were serious about…

2022-11-21 21:39:10 @KanakaRajanPhD @IcahnMountSinai @SinaiBrain great news--congratulations!

2022-11-21 15:59:15 @kohn_gregory @NaturalSkeptik @WiringTheBrain i'm still lost. i guess sometimes twitter isnt the ideal medium for exchanging scientific ideas

2022-11-21 15:31:14 @kohn_gregory @NaturalSkeptik @WiringTheBrain i dont understand what you are saying. Using your example, given m&

2022-11-21 15:28:15 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Sadly, i see no evidence that the effectiveness of (mis)information rests on it appearing to be from a reputable scientific source. The viral stuff is usually a trustworthy-looking talking head spouting nonsense. or a headline

2022-11-21 15:10:27 @GaryMarcus @CriticalAI @AwokeKnowing @ASteckley @ylecun @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Again: how we assess possible impacts? Was email a mistake? the internet? And would your assessments have been the same in 1990? 2000? and 2010? (I'm old enough to remember when we thought the internet was democratizing. See eg "Spring, Arab")

2022-11-21 15:07:12 @kohn_gregory @NaturalSkeptik @WiringTheBrain similarly if i specify the genome but not whether the conserved factors (CF) are species matched, your uncertainty about the outcome will be smaller than if i specify the CF but not the genome So the genome contains a lot more information about the final outcome

2022-11-21 15:03:00 @kohn_gregory @NaturalSkeptik @WiringTheBrain One can formulate the question of the relative importance of m &

2022-11-21 14:42:37 @kohn_gregory @NaturalSkeptik @WiringTheBrain in what sense do these conserved factors outside the genome contribute "as much"? Because my intuition is that if we could quantify their contribution it would be relatively tiny, but i confess that i'm not exactly sure how to quantify properly (though i have some ideas)

2022-11-21 14:36:35 @CriticalAI @AwokeKnowing @ASteckley @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian From a public policy POV, do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-21 03:03:30 @p_christodoulou @GaryMarcus @ylecun @ASteckley @CriticalAI @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian so do the benefits of email outweigh the harms? of the internet? for whom? How do we assess this? and would your answer have been the same in 1990? 2000? 2010? predictions are hard, especially about the future. and historical judgments even harder

2022-11-20 15:40:21 @CriticalAI @ylecun @GaryMarcus @mrgreene1977 @noUpside @katestarbird @sinanaral @ProfNoahGian Great analogy. We have a well-defined gold standard (RDBPC) for comparing risk/benefit profile of new drugs Is there a comparable well-defined standard rolling out tech like LLMs? How do we compare risks and benefits rigorously?

2022-11-20 14:45:34 @ronfleix @sciliz plants definitely do get tumors https://t.co/xIkIRNvEw9

2022-11-28 23:50:30 RT @JustusKebschull: Postdoc opportunity: In collaboration with @yavinshaham and with NIH funding, we are recruiting a postdoc with backgro…

2022-11-28 21:42:27 @joshdubnau @Hoosierflyman yes, having 3 specific games is definitely more fun!

2022-11-28 21:25:01 @isEdgarGalvan @bleepbeepbzzz thx!

2022-11-28 16:40:47 @captgouda24 58% of adults would choose not to buy a particular model bcs it is less safe than other models. And this in a world where all cars are pretty safe (x5 safer than 60 years ago) https://t.co/5LOEaWrpF1 https://t.co/bW6Wa53jPo

2022-11-28 16:36:22 @captgouda24 Consumers have clearly indicated that they are willing to pay for safety. Contrary to what Iacocca claimed, consumers DO care about safety, which is why safety is often prominently displayed in ads Seatbelts/airbags/ABS/drunk driving laws are very popular

2022-11-28 16:32:51 @captgouda24 Not sure details in this case, but regulators are often in bed with the industries they regulate (promise of high paying jobs in private sector), so the regulations do not necessarily reflect the "public will" (hard to determine) but rather what is good for the industry

2022-11-28 16:27:18 @captgouda24 i agree it is sad that the American regulators are so beholden to the industries they regulate that they fail to protect consumer interests, meaning that fear of litigation becomes a major driving force.

2022-11-28 16:24:55 @captgouda24 what was the reputational damage to Ford? American auto makers had a reputation for making unreliable and unsafe cars, which is part of why Japanese makers surpassed them in the 1980s. Turns out drivers don't want to die in car crashes https://t.co/jvHVXjHB0o

2022-11-28 16:18:29 @captgouda24 of course not *all* safeguarding risk is worthwhile. "all" is a strawman. But i think Ford learned the hard way that society wanted a different tradeoff from the one they chose.

2022-11-28 16:17:09 @bleepbeepbzzz indeed! we are tweaking the compression algorithm to encourage formation of modules etc. Stay tuned!

2022-11-28 16:15:53 @captgouda24 from a financial POV Ford clearly made the wrong decision (massive punitive damages) In the 70s there was major pushback from auto manufacturers against seatbelts and airbags bcs they were too expensive. Turns out they were wrong

2022-11-28 15:02:10 @captgouda24 i think from a legal/ethical POV what mattered was Ford's "state of mind". The cars were legal but they thought they were dangerous and decided they weren't worth fixing. The massive $$ damages against them serves as a warning to future companies that that is a bad decision

2022-11-28 14:27:40 @captgouda24 "Ford had a decision to make. Its car was in compliance with industry standards of the time, so it was not breaking any laws. But its own research had proved the car was unsafe, and even deadly." https://t.co/f4QLNoLg12 https://t.co/zlF5Uof8yY

2022-11-28 14:14:49 @bleepbeepbzzz We have used this idea to develop a "genomic bottleneck algorithm" in which the compression of the circuitry into a "genome" acts as a regularizer. https://t.co/Vx624XCEyp https://t.co/7HguPojgSt

2022-11-28 14:12:00 @bleepbeepbzzz Indeed, we have argued that: In animals, there are two nested optimization processes: an outer “evolution” loop acting on a generational timescale, and an inner “learning” loop, which acts on the lifetime of a single individual. https://t.co/9i0NnpnZhE

2022-11-25 15:37:19 RT @MorePerfectUS: Elon Musk has spent decades building something big: himself. And it’s worked. The myth of Elon Musk as the “good billio…

2022-12-07 21:18:20 As twitter (not so) slowly dies i seem to be getting lots of new followers. I think those of us who remain follow each other in a vain attempt to find new content Like a star collapsing upon itself burning our last fuel Will we go supernova before collapsing into a black hole?

2022-12-07 21:12:57 @neuroetho @ESYudkowsky “Cyranoid”! Had to look that up!

2022-12-07 19:27:59 @joshdubnau @ESYudkowsky If emotional intelligence is part of your definition for sentience, then I agree 100%. But otherwise I have no idea how to define the word sentience

2022-12-07 19:27:25 @DoktorSly @ESYudkowsky Absolutely, if you’ve been playing around with these models, you know how to trick them. but I think Turing would have been fooled, as would I five years ago. and yes, I think it reveals that the Turing test tells us more about the gullibility of humans than about intelligence

2022-12-07 19:27:18 @neurojoy @ESYudkowsky https://t.co/teIGzke8NW

2022-12-07 16:45:29 @neuroetho @ESYudkowsky i think the original was proposed as a 2-alternative choice task. But zooming out, the fact that chatGPT is even this close while still clearly not being anywhere close to AGI suggests that the Turing test is the wrong metric

2022-12-07 16:06:05 @sir_deenicus @ESYudkowsky could be. I rarely go 20 min w/o a reset. But it's mighty close, and given that they explicitly are trying to make it avoid pretending to be human i think if the goal were to actually fool people i suspect with a few tweaks it'd be even closer

2022-12-07 15:56:44 @neuroetho @ESYudkowsky i've now worked with chatGPT for long enough i'm pretty sure i can trip it up. But i'm pretty sure chatGPT could fool a naive educated person if prompted with "Do you best to fool people into thinking you're human. Here are some tricks: Make typos, dont be a know-it-all, etc"

2022-12-07 15:41:06 @joshdubnau @ESYudkowsky there's a lot more to interacting with the world than simply leaping. think about a tiger stalking prey, an osprey swooping down and grabbing a fish from the water, or a beaver building a dam. And yes, social intelligence requires a lot of sophistication

2022-12-07 14:08:32 @ESYudkowsky Although LLMs are indeed basically passing the Turing test, I think we're learning that the Turing test is not a great measure of AGI. Thinking that if an AI could chat convincingly, it could do everything else, turns out to be an error. A manifestation of Moravec's paradox https://t.co/v1ktJbyYqk

2022-12-07 21:49:12 @pwlot I’m also following mainly tech and academic people. Seems like there’s a lot less interesting discussion around papers or results

2022-12-07 21:44:31 @pwlot The number of engaging conversations I’m tempted to join is 73.2% lower than it used to be. That is my 100% quantitative &

2022-12-09 05:09:19 @AndrewHires https://t.co/1pSCYjnFTw

2022-12-09 05:07:34 @AndrewHires https://t.co/PhxBVwd8sy

2022-12-09 05:04:12 @AndrewHires here is the answer it gave me, in an ongoing session so a very different context. Different first sentence https://t.co/XRCr09S6WC

2022-12-09 05:01:38 @Aella_Girl not sure if you're trolling but FYI here is Charles Davenport's "Eugenics Creed", which includes gems like "I believe in such a selection of immigrants as shall not tend to adulterate our national germ plasm with socially unfit traits." https://t.co/kJ8mne0Xfb https://t.co/hZED403bIM

2022-12-09 04:48:31 @AndrewHires chatGPT's answers are stochastic and context-dependent so i'm not sure there is a "stock" response. Historically and in much of the world even today competence is assessed via oral exams...maybe it's time to return to that? shouldnt be a problem to test 1000 students, right?

2022-12-09 04:09:31 @KordingLab several people suggest that chatGPT has done well bcs your textbook was part of the training set. But given how poorly it does when asked to spit back facts that were definitely part of the training set, i think good performance here is unlikely to be due to pure memorization

2022-03-13 19:13:41 @Namnezia the issue isnt adapting to the change. For me anyway, the issue is that sunset is at 4:30 pm in the winter (here in NY). I would much rather have sunset at 5:30, even if it meant that sunrise was at 8:15 am rather than 7:15 am.

2022-03-13 01:49:31 @chrisXrodgers one step at a time!

2022-03-12 23:52:45 Let's make Daylight Savings Time permanent! https://t.co/GxqKunw5KW

2022-01-21 00:03:13 @tyrell_turing @doristsao

2022-01-20 21:56:47 Interested in NeuroAI? Friday Jan 21 is the deadline to submit to NAISys! Abstracts are short (~2900 characters max) We have a great line up of speakers, and still hope to meet in person (April 5-9), omicron willing. ***Please retweet*** https://t.co/arh3vPyqy2 https://t.co/TfG7jdpzj4

2022-01-19 22:11:54 Building more complex embodied AI inspired by systems neuroscience! https://t.co/xnmDZF9tZA

2022-01-18 22:23:41 RT @JustusKebschull: We are looking for a postdoc to develop the next generation of barcode-based brain mapping tools to enable single-cell…

2022-01-18 22:17:59 @AndrBarrosCarv1 Thanks, this is very cool. But i think even the authors do not think it explains the incredible precision with which structures stop growing at the right size...they posit "other mechanisms" https://t.co/DOUy1J4NDp

2022-01-03 23:43:20 @sheacshl not really sure...but if it's an important talk, be sure to keep checking your emails... https://t.co/cudC0oGdNh

2022-01-03 00:59:47 RT @PLOSGenetics: Cytoplasmic aggregation of TDP-43 occurs in #ALS, #Dementia, and #Alzheimer disease Azpurua et al @joshdubnau screen for…

2021-12-28 18:38:10 Neural Circuit Architectural Priors for swimming https://t.co/hJd8vWMovO

2021-12-28 01:48:54 Looks like saliva may be more sensitive than nasal for omicron https://t.co/TOur9diwJs

2021-12-20 19:03:24 useful trick https://t.co/DO9SYlYUVD

2021-12-20 16:35:34 @GaryMarcus @WiringTheBrain @ylecun So if you agree in principle it's possible, then the issue is just that we dont yet have the right architectures etc, right? In the same way that we dont even have the right architectures to match the performance of a spider or a worm...

2021-12-20 16:30:59 @GaryMarcus @WiringTheBrain @ylecun I read the Algebraic Mind and still dont understand why you believe that in principle there cannot exist an ANN that can manipulate symbols, even though brains do it. Or are you just saying that we havent yet figured out how to make an ANN that manipulates symbols?

2021-12-20 16:24:09 @GaryMarcus @ylecun I agree that innate structure, which greatly constrains the search space over weight matrices, is important. But I dont see how that explains why you think ANNs cannot manipulate symbols.

2021-12-20 15:57:06 @GaryMarcus @ylecun what i have never understood about this argument is that our brains somehow manipulate symbols on an ANN architecture that is kinda similar to what brains have. So are you saying there is some crucial difference (eg recurrent connections?) btw ANNs and brains, and if so what?

2021-12-20 15:45:44 @AdrianoAguzzi there is a machine that provides "PCR-quality" results for $75. But each new sample is $49. https://t.co/uR4R3R6Gdo

2021-12-20 14:10:24 @morungos I think few animals are capable of rational thought. Spiders seem to do fine without it. And computers play chess without a model of the physical world. But in humans they are entangled. As Moravec puts it, reason is "supported" by the much older/more powerful systems.

2021-12-19 22:15:55 @ylecun @AndreTI A great discussion of this and a lot more in "Genius in the Shadows: A Biography of Leo Szilard, the Man Behind the Bomb" by William Lanouette

2021-12-19 22:13:35 @ylecun @AndreTI "in 1934, [Szilárd] patented the idea of a nuclear chain reaction via neutrons. The patent also introduced the term critical mass ... In a very real sense, Szilárd was the father of the atomic bomb academically." https://t.co/EMLjLNvk0G

2021-12-19 18:34:59 @MarkRober

2021-12-19 18:32:24 Dogs stacking tires is cool bcs it seems to require some kind of reasoning. But i think even more impressive is the squirrel-apault. Watch the squirrel get launched, and nail the landing onto a pole. Jump to 17:50 if you're impatient. https://t.co/rKpgyjuCqE

2021-12-19 18:24:51 @ylecun @AndreTI Not to quibble, but I always thought Szilard gets credit for the chain reaction: "In October 1933, Szilard first conceived a nuclear chain reaction on the streetside corner of London." https://t.co/rWZ6fVSscR

2021-12-19 17:02:39 Indeed! Time to remember Moravec's Paradox (1988) again. Reason, logic--the stuff we humans are so proud of--is the easy part. The hard part is what we share with animals, which took almost 1B yrs to evolve. https://t.co/FPXZOA5JPH https://t.co/LGKep7C30P

2021-12-18 21:10:04 For the first time in my life, I am now motivated to start sending out Christmas cards https://t.co/6R4h8ZRS5o

2021-12-17 23:55:00 @andpru @Partitio_nBlues @drmichaellevin How?

2021-12-15 20:02:33 Good news/bad news https://t.co/iO3Ctxi8C2

2021-12-15 19:06:44 @bbaserdem that is what i was thinking.

2021-12-15 15:20:49 @joe6783 There were still many susceptible (covid-naive) hosts at the end of each of the first few waves. If that weren't true, then most cases would be in vaccinated or previously infected people, which was not the case (at least until Omicron)

2021-12-15 12:48:37 @aromgar I think with flu the answer is that there are immunologically distinct variants every year. But we've seen waves of covid even before Omicron

2021-12-15 03:55:57 @pait if people change their behavior in proportion to the prevalence then it's easy to see how waves occur. But i think in many places people do not change behavior, and yet waves occur

2021-12-15 02:52:50 @cdk007 https://t.co/xMKbeNyfMJ

2021-12-15 02:51:46 @clhurtt @cdk007 Exactly

2021-12-15 02:26:53 Covid wave particle duality https://t.co/d27S1MKPXH

2021-12-15 02:01:23 @cdk007 Waves end even in eg red States where I'm pretty sure most people don't change their behavior

2021-12-15 01:45:28 Basic question: Why do covid infections come in waves? Ie what terminates a wave? Naively one would think either 1) run out of susceptible hosts (but no)

2021-12-15 01:39:42 Great thread on antibody responses to wt, Delta and omicron strains, as a function of vaccination status. Bottom line: get boosted. https://t.co/Kg3mw8jjcU

2021-12-15 01:29:08 @viktor_thoth @neurojoy Still pretty cool!

2021-12-14 12:35:27 @jgvfwstone Yeah the problem is it's hard to specify computational work. Cuz what I'd really want is something like "passages parsed"/joule or something like that

2021-12-14 03:10:19 @AndrewHires Hmm. I guess you're right. I was fooled by the video and the section entitled "Train to Shoot" but didnt read the conclusions carefully https://t.co/ZzBM7g9WMW

2021-12-14 02:59:58 @AndrewHires i dont think the Tank lab trained their rats to kill zombies but maybe i missed that part?

2021-12-14 02:32:25 how did i miss this? This guy trained rats to play Doom! https://t.co/x36I9L3OvB https://t.co/Lb0t63MJvm

2021-12-12 17:22:56 @KordingLab of course we dont know yet, but my working model is that Omicron is about as transmissible in naive populations as Delta, but is outcompeting it bcs partly evading immunity and the apparent drop in severity is a statistical illusion, as explained here https://t.co/U4BF7NWa46

2021-12-12 15:55:01 RT @svoboda314: Are you interested in neural computation and neural dynamics in the context of defined neural circuits? The Allen Institute…

2021-12-12 14:45:05 @djintwt Both synaptic release and spikes are energetically costly, but only synapses are intrinsically noisy. When needed, cortical spikes can be very precise too. See this thread https://t.co/r9fvAvb8Sb

2021-12-12 14:37:38 @JrKibs @aoargunsah @alex_ander It's hard to know exactly how language priors are encoded in the genome, for the same reason that it's hard to know how *any* behavior (like mating rituals) is encoded. Or, for that matter, how any complex phenotype is encoded

2021-12-12 03:58:10 @TimoWitten @Timothy0Leary well, then i guess you should count the cost of designing the GPUs and mining the silicon.

2021-12-11 20:52:35 @Timothy0Leary I agree there are several apple-to-orange comparisons, including training vs performance, in the intro. what i was highlighting in the subsequent tweets were the new results regarding the (to me) unexpectedly high energy cost of vesicular acidification https://t.co/25on4vwS2d

2021-12-11 19:53:44 @aoargunsah @alex_ander Yes i think the main reason that humans can acquire language with orders of magnitude less data is that we have priors, encoded in our genome, that predispose us to acquire language efficiently.

2021-12-11 19:32:03 @aoargunsah @alex_ander I'm surprised! that is a lot of talking! But still, at that rate it would take 10,000 yrs before a child is exposed to as much training data as GPT3

2021-12-11 19:13:07 @aoargunsah @alex_ander there are about 3e7 seconds/yr. So if they heard 1 word/second nonstop for the first year, they'd still be 4 orders of magnitude short.

2021-12-11 18:31:24 @hb_cell @ylecun @AllenInstitute I think neuropil is pretty densely packed, no? https://t.co/oHxuKD4V6L https://t.co/dicSSMYriE

2021-12-11 18:06:28 @StephaneDeny @ylecun @SarahLizHarvey Cool! Hadn't seen that. Thx

2021-12-11 17:37:36 @alex_ander Luckily children don't need to read 500B words to learn a language

2021-12-11 17:31:20 @ylecun All true. But another key difference is that digital electronic circuits expend a lot of energy to ensure that 0s and 1s are distinct, with extremely low error. Synapses are extremely noisy, yet BNNs compute effectively. We don't yet know how to compute in that regime

2021-12-11 17:01:02 @mkturkcan I don't understand what you are suggesting

2021-12-11 15:21:36 Wolfgang Maass, among others, has some interesting thoughts about how to compute with noisy synapses. But there is still a lot of work to be done! https://t.co/zj2X4waDcj 5/n

2021-12-11 15:01:02 However, it turns out that another major energy burden is synaptic release. This cool new paper identifies sources of energy use in resting brains. Turns out a lot of energy is used keeping synaptic vesicles acidified. https://t.co/SGNusBle86 3/n

2021-12-11 15:01:01 1/n

2021-12-11 15:01:00 Biological neural networks (BNNs) are much more energy efficient than artificial NNs. The human brain uses about 15-20W, whereas eg training a big ANN causes the lights to dim in Boston for a day or 2. Why?

2021-12-10 23:53:52 @NeuroPolarbear @andpru i dont understand what you are saying

2021-12-10 22:31:44 @aeminorhan Those "few insignificant nutjobs" are working hard to lay the foundations for a coup in 2024. https://t.co/8ag1jXkaJW

2021-12-10 22:27:05 @StolfiAlberto that makes it all the easier to execute this strategy...no change in party reg needed. but there would still need to be a grassroots campaign to coordinate voters to do this.
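The back-of-envelope in the thread above can be checked directly. The figures (~3e7 seconds/yr, 1 word/second, a ~500B-word GPT-3 training corpus) are the rough numbers quoted in the tweets, not precise values:

```python
import math

# Rough figures quoted in the thread, not exact corpus or exposure numbers.
SECONDS_PER_YEAR = 3e7        # ~3.15e7, rounded as in the tweet
WORDS_PER_SECOND = 1          # nonstop listening, as posited
GPT3_TRAINING_WORDS = 500e9   # ~500B words, per the thread

# Words a child would hear per year at that rate, and years to match GPT-3.
words_per_year = SECONDS_PER_YEAR * WORDS_PER_SECOND
years_needed = GPT3_TRAINING_WORDS / words_per_year
orders_short = math.log10(GPT3_TRAINING_WORDS / words_per_year)

print(f"{years_needed:,.0f} years")                      # 16,667 years
print(f"~{orders_short:.1f} orders of magnitude short")  # ~4.2 orders of magnitude short
```

This reproduces both claims in the thread: roughly 10,000+ years of nonstop listening, i.e. about 4 orders of magnitude more exposure per year than a child actually gets.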
2021-12-10 22:25:38 @NeuroPolarbear @andpru A 51% district is not safe...The ideal gerrymandering is to give a margin of error, eg 70% R, and then pack the Ds into a few 95% districts.

2021-12-10 18:05:23 @furthlab yes i guess this is basically tactical voting.

2021-12-10 18:04:46 @NeuroPolarbear My proposal will not prevent conservative Rs from winning solid red districts. But it might mean the difference btw someone who merely supports tax cuts for the wealthy, and someone who riffs about space lasers.

2021-12-10 18:02:10 @NeuroPolarbear @andpru I did not think it was controversial that gerrymandering increases the chances that an extreme candidate wins bcs their districts are safe. I guess whether this represents a "problem" could be debated.

2021-12-10 17:58:41 @MatteoCarandini Say what you will about his politics, there's no denying that Trotsky was a smart fella

2021-12-10 17:56:18 @NeuroPolarbear A generation ago that might have worked, and in swing districts that is still worth pursuing. But nowadays there are many safe Red districts where chances of even a conservative Dem winning in the general election is approximately 0

2021-12-10 16:28:48 @GaneshNatesh Yes I think execution would be a serious challenge, especially as it would likely require going counter to the D party establishment. That said--and I'm no organizer--but I hear that a lot of people can be reached by social media these days

2021-12-10 16:10:14 @GaneshNatesh It's not clear how they could stop this. There is nothing to prevent people from changing party registration as many times as they like. There are AFAIK no loyalty oaths required to register.

2021-12-10 16:07:57 @GaneshNatesh maybe. That seems like it would be a 3rd order effect but who knows?

2021-12-10 16:06:54 @cdk007 that's a good point. But personally, if i lived in an extreme district, I would be willing to do my part.

2021-12-10 16:05:23 @jgoldschrafe I think this would complement open primaries. What is missing though is a campaign to suggest to Ds in solid R districts to converge on a less extreme R in the primary. I doubt it would occur to most people to do that.

2021-12-10 15:51:49 @GaneshNatesh More extreme than we already have? I think there is a ceiling effect to crazy but maybe that's just wishful thinking.

2021-12-10 15:50:22 @GaneshNatesh Why would it be hard to convince D party faithful to change registration, given the plan? Except for the fact that the D party itself would not likely endorse? And ranked choice voting would require changes in the law that are least likely to occur in states with a lot of safe districts

2021-12-10 15:34:42 However I think there is a simple solution: Dems in safe Rep districts should register as R, and vote for the less extreme candidate. Dem voters can still vote D in the general election. And Rs will still win the district. But the winning R will be a bit less extreme. 3/n

2021-12-10 15:34:41 I don't usually tweet politics, but I have an idea I would like to share. We (in the US) have a serious problem: Due to gerrymandering, many congressional elections are "safe": Such districts are decided in the Republican primary rather than the general election. 1/n

2021-12-09 21:45:22 @MatthewLennig @ybarrap or, we could just start a movement to convince Dems in solidly R districts to register R and vote in primaries. Rs would still win, but the winning Rs might be less crazy. This has the advantage that it could be done today, without any hard-to-pass legislation.

2021-12-06 23:36:22 @KordingLab Here is my monthly PSA that not all neurons have Poisson variance, and not even all cortical neurons. (Yes, Virginia, there are brain regions other than cortex) In auditory cortex, neurons often have count variability as low as mathematically possible https://t.co/tSseDSWR49 https://t.co/PcU4DKwCAb

2021-12-05 14:00:58 @Kit_Yates_Maths I think this assumes that results from PCR and LFD are independent. This would be true of PCR false negs arising from eg sample mishandling. But since PCR is more sensitive &

2021-12-04 23:57:23 @cdk007 We agree that 100% vaccination would be ideal. But sadly that does not seem likely

2021-12-04 22:46:12 @cdk007 Vaccine hesitancy is causing vaccination rates to plateau so in practice R0 will drop below 1 only once enough people are infected. Given that, it would be better that they get infected with a less deadly strain, no? (Not yet clear if Omicron is indeed less deadly)

2021-12-04 22:40:28 @KordingLab I don't know about fitbit, but I have compared my applewatch to a chest monitor and they are in close agreement during a range of activities

2021-12-04 20:49:38 Last chance https://t.co/S52oZb6uYC

2021-12-02 14:59:32 @apoorva_nyc @priscillagilman @rushidesai But if Dr. Moore is an expert i am happy to defer to him...I would be really interested in a follow up explanation from him on the putative mechanism whereby a booster could interfere with response to an omicron-specific followup (other than original antigenic sin). thx

2021-12-02 14:55:53 @apoorva_nyc @priscillagilman @rushidesai My interpretation is that if the recommendation is to eg wait 6 months btw 1st & Not clear how adding a 3rd or 4th dose could reduce response.

2021-12-02 14:46:58 @apoorva_nyc @priscillagilman @rushidesai I'm not clear what "it" here refers to...the general risks of repeated boosting, or the specific mechanism of "original antigenic sin" (sorry--I skimmed the rather long PDF and wasnt clear where to look for the warning.)
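The statistic behind the "Poisson variance" PSA above is the Fano factor: spike-count variance divided by mean, across repeated trials. A Poisson process has FF = 1; a perfectly repeatable integer count has FF = 0, the "as low as mathematically possible" regime. A minimal illustrative simulation (toy numbers, not data from the linked papers):

```python
import random
from statistics import mean, variance

def fano_factor(counts):
    """Trial-to-trial variance of spike counts divided by the mean count."""
    return variance(counts) / mean(counts)

random.seed(0)

# Poisson-like neuron: on each of 2000 trials, 1000 tiny independent chances
# to spike (a binomial, nearly Poisson for small p), giving ~10 spikes/trial.
poisson_counts = [sum(random.random() < 0.01 for _ in range(1000))
                  for _ in range(2000)]

# Highly reliable neuron: exactly 10 spikes on every trial, so FF = 0.
reliable_counts = [10] * 2000

print(fano_factor(poisson_counts))   # close to 1 (binomial is slightly sub-Poisson)
print(fano_factor(reliable_counts))  # 0
```

The contrast is the point of the thread: "Poisson-like" is a statement about variance scaling with the mean, and a neuron that fires the same count on every trial violates it maximally.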
2021-12-02 14:41:40 @priscillagilman @rushidesai @apoorva_nyc The only way i can interpret the WaPo claim that "“training” the immune system repeatedly on the original variant — as the current boosters do — may prove to be counterproductive." is invoking "original antigenic sin". (Seems very theoretical to me) https://t.co/Rc4mkvhkBM

2021-12-01 16:51:28 I'm usually a covid optimist, but this really seems like a disaster in the making. https://t.co/A1XwsRPw6Y

2021-11-30 22:30:55 @curiouswavefn Horace Barlow

2021-11-29 22:59:56 @meganinlisbon Completely agree. Synaptic facilitation and depression operating on time scales of tens or hundreds of millisecs can cause synaptic strength to vary by an order of magnitude or more.

2021-11-29 21:58:39 Closing borders to try to slow down the spread of omicron is just Covid theater https://t.co/NLlTZk43AO

2021-11-29 19:24:52 @whatishealth21 wow!

2021-11-27 16:58:38 @KremkowL @NeuralEnsemble @BorisBarbour @WiringTheBrain @zmainen wow. cool paper!

2021-11-27 15:50:57 here is the original https://t.co/mPAH5SauUJ

2021-11-27 15:47:48 "Have you ever noticed that anybody who wears masks where you wouldn't is an idiot, and anyone who doesn't wear them where you would is a maniac?" George Carlin (1937-2008) American Comedian

2021-11-27 02:56:16 Sage advice https://t.co/HV1VnJETAj

2021-11-26 18:50:23 @kendmil @Timothy0Leary @WiringTheBrain @zmainen In practice, when you block synaptic transmission in a slice, the remaining Vm fluctuations are tiny. Even in a quiet slice in vitro, most of the fluctuations arise from mini EPSCs. Synaptic noise is orders of magnitude larger by any reasonable measure.

2021-11-26 18:45:37 @kendmil @Timothy0Leary @WiringTheBrain @zmainen in this paper, the synaptic currents were generated directly from simulating either pure E, or balanced E/I, currents whose sizes were measured directly in the same prep. Looks very much like Zach's data. https://t.co/ogIr4xIHEi https://t.co/pPpmuxXit7

2021-11-26 18:40:22 @kendmil @Timothy0Leary @WiringTheBrain @zmainen You also see large voltage fluctuations in S1. here is a beautiful paper by Okun & https://t.co/xvohBaOgNP https://t.co/R27Dk9qUmn

2021-11-26 17:32:35 @kendmil @Timothy0Leary @WiringTheBrain @zmainen the argument is that the dominant source of noise is synaptic variability so that whether spikes are or are not reliable trial to trial depends on the correlational structure of the inputs

2021-11-26 17:18:49 @kendmil @Timothy0Leary @WiringTheBrain @zmainen What in vivo counterexamples do you have in mind that do not show large current fluctuations, whether sparse or not ?

2021-11-26 17:08:05 @kendmil @Timothy0Leary @WiringTheBrain @zmainen also, i'm not quite sure about "25 mV". The input currents are specified in pA. Reliability is already high for var=50 pA^2, which is pretty small given cortical EPSCs of 5-10 pA

2021-11-26 17:02:14 @cnl_mbu_iisc @WiringTheBrain @zmainen Yes indeed, input synchrony is key in cortex! "Input synchrony and the irregular firing of cortical neurons" https://t.co/MBbhcQGl2j

2021-11-26 16:59:00 @kendmil @Timothy0Leary @WiringTheBrain @zmainen so what Schneidmann could show is that Na channels are the dominant source of noise in the spike generator, but not that this noise is limiting. Even digital circuits have noise, but operate in a regime where you can ignore it. In cortex synapses are the dominant source of noise

2021-11-26 16:54:55 @kendmil @Timothy0Leary @WiringTheBrain @zmainen the fidelity of a communication channel can only be assessed relative to an input ensemble. What is the relevant ensemble for neurons? In vivo cortical recordings indicate that currents do indeed have large fluctuations. https://t.co/J5qLifBpvs https://t.co/gByfHWp0wy

2021-11-26 16:28:08 @cnl_mbu_iisc @WiringTheBrain @zmainen I think you are using "noise" in 2 different ways. But here is a more precise formulation: The generator is low noise (ie produces low-jitter repeatable spike trains) when driven by (biologically relevant) fluctuating currents arising from summed synaptic currents

2021-11-26 14:53:07 @WiringTheBrain @zmainen For historical completeness, also note this earlier figure in Bryant-Segundo 1976 showing the reliability of the spike generator. https://t.co/hluTPBsy56 https://t.co/6P0wsDCFsJ

2021-11-25 20:15:53 @BorisBarbour @NeuralEnsemble @WiringTheBrain @zmainen The effect of Ca/Mg on induction of LTP is interesting but not I think directly relevant to the main thread here about sources of noise in neural circuits and how they might serve as substrate for God's hand or free will

2021-11-25 19:49:05 @BorisBarbour @NeuralEnsemble @WiringTheBrain @zmainen I am unclear what you disagree about. Are you saying that synaptic physiologists do not understand the effect of calcium and magnesium on synaptic release probability? Or that neuromodulators do not represent an important difference in vivo and in vitro?

2021-11-25 19:39:32 @NeuralEnsemble @BorisBarbour @WiringTheBrain @zmainen i am not sure this is a valid generalization. In vitro synaptic physiologists routinely vary Ca/Mg depending on what they are studying. The effects are well-understood. What i worry about when comparing in vivo to in vitro is modulators like adenosine, which alter release prob.

2021-11-25 18:10:09 @WiringTheBrain Here are some traces showing unreliable synaptic responses to 8 successive paired pulse stimuli in slice. All 8 show no release after first stim, indicating p< https://t.co/GuDXFV4Lug https://t.co/d9y64RusQd

2021-11-25 18:04:18 @NeuralEnsemble @BorisBarbour @WiringTheBrain @zmainen Slice physiologists manipulate divalents (Ca/Mg) over a range that has a large effect on release probability, as described by the Dodge-Rahamimoff eq https://t.co/3zfXXfHHnc

2021-11-25 18:02:08 @NeuralEnsemble @BorisBarbour @WiringTheBrain @zmainen Skimming the Borst paper, it looks like he is focusing on the Calyx of Held, which is tuned for highly reliable transmission. In the CNS in slice, quantal analysis reveals p < https://t.co/GuDXFV4Lug

2021-11-25 17:57:03 @NeuralEnsemble @BorisBarbour @WiringTheBrain @zmainen yes, synaptic noise is clearly dominant, as first shown by Calvin & However, i'm not sure why you say in vitro expts underestimate synaptic failures? release p can be < https://t.co/ujVjfQt5vO

2021-11-25 17:53:54 @itamarlandau @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab I'm not exactly sure what you are saying, but I think I couldnt possibly fail to disagree less

2021-11-25 17:52:53 @itamarlandau @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab yes such highly reliable low fano factor responses are also found in the awake cortex, along with a wide range of other responses as well---a veritable cortical zoo https://t.co/lb20yihIe4

2021-11-25 17:14:28 @adamimos @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab Bair & https://t.co/TLnLoRd4HW Also Shadlen-Newsome 1994 https://t.co/acrDaAN9Xn

2021-11-25 15:47:22 @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab We replicated the time-locked, high Fano Factor regime in monkey MT. But monkey visual cortex seems to be an outlier in terms of FF. Under the right conditions, many other cortical areas (eg barrel cortex) can show time-locked responses with 0 FF https://t.co/Cl0Tl5JsiQ https://t.co/gzGju1P3dY

2021-11-25 15:43:28 @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab In the Newsome random dot data, the responses became time-locked, but the trial-to-trial variability (Fano factor) remained Poisson, consistent with a time-varying Poisson process. The auditory responses above have 0 FF hence inconsistent with Poisson.

2021-11-25 15:40:23 @superkash @WiringTheBrain @zmainen Yes, the spike generating mechanism is low noise in most neurons. Calvin& https://t.co/ujVjfQt5vO

2021-11-25 15:24:29 @BorisBarbour @WiringTheBrain @zmainen i agree that in many circuits, the dynamics seem to find a way to overcome this inherently noisy synaptic substrate.

2021-11-25 15:23:00 @WiringTheBrain @zmainen Eccles believed that God intervenes in human thought by determining the outcome of each random release of synaptic vesicles between neurons. This has the advantage of being fundamentally untestable

2021-11-25 15:16:36 @WiringTheBrain Biophysically we know that the neuronal spike generating mechanism is very low noise. See famous figure from @zmainen OTOH, synapses are inherently stochastic, leading Nobel Laureate Eccles (a dualist) to posit them as the seat of the soul https://t.co/Tgn5jH71mr https://t.co/27ib0gXscK

2021-11-25 15:11:12 @MatteoCarandini @mattsmear @skepticalDre @ampanmdagaba @WiringTheBrain @ZeroNoiseLab And sometimes neurons can be low noise, even in the presence of such other inputs. Eg tone-evoked responses in auditory cortex often have no trial-to-trial variability. Here is an example https://t.co/tSseDSWR49 https://t.co/WvoLtsZD91

2021-11-24 13:58:52 And this is why i study rodent brains (even though I would love to understand human brains): I think we fundamentally don't understand how rodent brains work, but that once we understand rodent brains the step to human brains will not be that big 6/6

2021-11-24 13:58:51 And we have some specializations related to dexterity (which enables tool use) and bipedality. And our theory of mind is really sophisticated. We primates are really good at predicting others' actions 2/n

2021-11-24 13:58:50 This is a really fun question. I think there are clearly plenty of behaviors that are so far developed in humans as to be effectively unique. I am particularly impressed by language, which enables social cooperation and the accumulation of knowledge through generations. 1/n https://t.co/6R6tkDIjsM

2021-11-24 13:09:14 @zmainen @behrenstimb @KordingLab @gershbrain But ultimately I find this formulation confusing. Every species has unique specializations. Eg bats are really good at ultrasound. So are you asking whether there is anything that humans can do that is more off the beaten path of generic mammalian capacities than eg bats?

2021-11-24 13:00:07 @zmainen @behrenstimb @KordingLab @gershbrain (We dont "understand" language in the same sense in which we mostly dont understand how anything is implemented in pretty much any animal...i dont think it's uniquely mysterious)

2021-11-24 12:58:43 @zmainen @behrenstimb @KordingLab @gershbrain IMO, what is really special about humans is language, which enables flexible social cooperation and the accumulation of knowledge across generations. We clearly have specialized circuits for language, which have antecedents in other primates and which we mostly dont understand

2021-11-22 16:37:33 @KennethHayworth @skepticalDre @EricMTrautmann we know that properly made back up copies work. I think a better analogy would be to propose to take an electron micrograph of the surface of your failing hard disk, or perhaps use some related new technology, and hope we can figure out how to reconstruct the contents

2021-11-22 14:39:36 @hb_cell @KennethHayworth As i noted elsewhere: Personally, i would be willing to take the risk that quantum states are not necessary to preserve my consciousness. Moreover, dilithium crystals powering a Heisenberg compensator can resolve quantum uncertainty...

2021-11-21 20:41:55 @KennethHayworth @RWerpachowski @prokraustinator Do you have a cost estimate of preserving a few billion brains for a few centuries?

2021-11-21 20:23:09 @KennethHayworth @RWerpachowski @prokraustinator Given that the current medical system (at least in the US) does not guarantee access even to proven life saving medical treatments, I'm not sure how one could argue that a completely speculative procedure could be a "right"

2021-11-21 20:12:02 @KennethHayworth @RWerpachowski @prokraustinator I am unclear exactly what you are proposing. AFAIK it's not possible to upload the memories of any mammal even under the most controlled conditions. What would it look like to address this idea "seriously"?

2021-11-21 18:22:52 @EricMTrautmann @KennethHayworth actually, i think a Theseus ship strategy is our best route to ensuring continuity of consciousness.

2021-11-21 18:21:05 @RWerpachowski @KennethHayworth Personally, i would be willing to take the risk that quantum states are not necessary to preserve my consciousness. Moreover, dilithium crystals powering a Heisenberg compensator can resolve quantum uncertainty...

2021-11-20 21:55:45 @KennethHayworth I think the key here is "in principle". I definitely believe in uploading, if you could copy a brain, atom-by-atom, and implant it in a copied body. But short of that i would say we (the field) cannot be sure exactly what granularity you would need to copy to succeed.
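The Dodge-Rahamimoff relation invoked earlier in this thread can be written out explicitly. This is a sketch of the standard 1967 formulation from memory, with $K_{\mathrm{Ca}}$ and $K_{\mathrm{Mg}}$ used here as placeholder names for the two dissociation constants:

```latex
% Dodge-Rahamimoff (1967): endplate potential (EPP) amplitude as a function
% of extracellular divalent concentrations. Release depends cooperatively
% (fourth power) on Ca2+, and Mg2+ acts as a competitive inhibitor -- which
% is why slice physiologists can tune release probability over a wide range
% simply by varying the Ca/Mg ratio.
\[
  \mathrm{EPP} \;\propto\;
  \left(
    \frac{[\mathrm{Ca}^{2+}]/K_{\mathrm{Ca}}}
         {1 + [\mathrm{Ca}^{2+}]/K_{\mathrm{Ca}} + [\mathrm{Mg}^{2+}]/K_{\mathrm{Mg}}}
  \right)^{4}
\]
```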
2021-11-20 01:25:28 Retweeting this for a friend who joined twitter just to post this. They have a really cool idea and would like to build a team. Contact them if you're interested and plz retweet! https://t.co/hWM0P4sIFZ

2021-11-17 05:02:53 RT @Labrigger: Neuro? Love GLMs? Card-carrying Bayesian? Have strong opinions about ML/AI? Want to start your own lab at the beach? Know so…

2021-11-17 02:42:57 @cdk007 @drugmonkeyblog I guess it depends on the experiment. But if stimulation causes different effects depending on the exact population of neurons targeted, even when +/- stim trials not differentially rewarded, then probably not merely "sensing" the perturbation. see eg https://t.co/ktceYYaL5J

2021-11-16 21:58:35 RT @CosyneMeeting: Last week to work on your abstract submissions!

2021-11-16 20:46:40 @drugmonkeyblog This has long been recognized as a potential confound requiring controls such as animals injected with GFP instead of opsin

2021-11-15 17:57:43 @neuralreckoning I think I agree. But of course I have no idea how one can define "best"

2021-11-15 14:06:22 Scientists would take more risks if grants didn't force them to be conservative https://t.co/xtQMoViEIq https://t.co/v3pqGCvzsT

2021-11-14 22:10:14 RT @elandhuis: Tools &

2021-11-14 22:09:48 RT @HongkuiZeng: Thanks Esther for the wonderful article highlighting the latest molecular tools for probing neural circuits! It was fun ta…

2021-11-14 14:56:25 Portugal passed a law forbidding employers from contacting employees after work hours? wow... @Neuro_CF @zmainen @NeuroChooser https://t.co/s0pyNLP3q9

2021-11-14 13:50:03 RT @I_CRY2: The work of @TonyZador and his group at @CSHL is both beautiful and stunning. An expanding molecular toolbox untangles neural c…

2021-11-13 15:52:08 RT @ipeikon: .@CajalNeuro is recruiting for target validation scientists to help us dig deep on some of the new biology uncovered by our pl…

2021-11-12 22:53:01 RT @petrzzz: The CSHL PhD program was an amazing environment to learn to be a scientist. I would highly recommend it to anyone passionate a…

2021-11-12 15:39:38 stunning video depicting how mRNA Covid vaccines work https://t.co/FONqAMbeyE

2021-11-12 13:52:54 @joadelas According to Fred (not sure he's on twitter), a place called Bagel opened just around the corner from SWC. "The bagels are fantastic, really outstanding" He denies that he walked out after his first purchase with tears of nostalgia streaming down his face

2021-11-11 20:20:22 @rikeijames It is not clear that anyone else would have come up with the Hodgkin-Huxley equations for propagation of the action potential if they hadnt. Those papers were so beautiful, and so far ahead of their time. A very well-deserved Nobel Prize. https://t.co/VDVAiBOZQf

2021-11-11 17:47:51 apologies to @cdk007 and all the other alumni and faculty i didnt tag.

2021-11-11 17:32:41 CSHL is a great place to do a PhD! Apps due Dec 1. Spread the word! If you teach undergrads, please let them know. (and alumni--feel free to pipe in about how great it was! #lovefest ) @tollkuhn @sheacshl @ipeikon @jbkinney @JustusKebschull @petrzzz https://t.co/gaM34lcMWi

2021-11-11 15:03:09 @hippopedoid yes it does...looks like it's time to update wiki! https://t.co/TMoooK2B7k

2021-11-11 14:27:33 TIL that the "vinculum" (Latin "chain") is a "horizontal line drawn over a group of terms in a mathematical expression to indicate they are to be operated on as a single entity." I learned this while helping my son with Algebra. I wonder what else I missed the first go round? https://t.co/DkphpEMClI

2021-11-10 13:34:17 Great application of MAPseq to cortical development! https://t.co/684tGndgBV

2021-11-10 02:21:59 RT @USBrainAlliance: New @Nature feature about using molecular tools to untangle neural circuits highlights research from numerous BRAIN-fu…

2021-11-10 01:23:25 Nature has a nice little piece about BARseq & https://t.co/kL1aZOcMaw

2021-11-07 22:39:31 MAPseq of habenula from Kenny lab! https://t.co/27HUdTHTML

2021-08-22 17:05:17 @IrisVanRooij I really enjoyed Les Valiant's talk on evolution as an algorithm. That was a great recommendation. thanks!

2021-08-18 11:35:11 Simpson's Paradox: After accounting for vaccination rates and stratifying by age, Israeli data show vaccines retain high efficacy (85-95%) vs. severe disease.... For preventing severe disease, Pfizer is still doing very well vs. Delta, even in Israel https://t.co/JatYBagYap

2021-08-17 23:29:20 @JasonSynaptic @melcregor But if recall/expression is via synapses then in principle it should be possible to read it out (and collect the $100k prize)

2021-08-17 22:29:17 @NunezKant AI-->

2021-08-17 22:15:10 What's funny is that in the 80s there was at least as much excitement about Hopfield nets as about backprop. Snowbird & But not a peep about them these days. #Dustbinofhistory https://t.co/dnd8OHsfXM

2021-08-17 18:59:43 @Datta_Lab @JasonSynaptic @dchughes62 But all of these are potential mechs for establishing and maintaining synaptic connections and strength no? Hence consistent with the synaptic hypothesis. No "other" so far.

2021-08-17 18:13:16 @kaznatcheev @KordingLab @neuroecology @neurograce As social history I think the "myth" is largely correct bcs the various communities weren't talking back then.
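The Simpson's paradox point in the vaccine tweet above can be made concrete with a toy calculation. All numbers below are hypothetical and chosen only to illustrate the effect (vaccination concentrated in the higher-risk older group); they are not the Israeli data:

```python
# Toy Simpson's paradox: within each age stratum the vaccine is 90%
# effective, yet the pooled estimate looks far lower, because
# vaccination is concentrated in the high-risk (old) stratum.

strata = {
    #        (n_vax, rate_vax, n_unvax, rate_unvax)  -- hypothetical
    "young": (100, 0.001, 900, 0.01),
    "old":   (900, 0.010, 100, 0.10),
}

def efficacy(rate_vax, rate_unvax):
    # standard 1 - relative risk
    return 1 - rate_vax / rate_unvax

# Stratified: 1 - 0.001/0.01 = 1 - 0.01/0.1 = 0.90 in both strata
per_stratum = {k: efficacy(v[1], v[3]) for k, v in strata.items()}

# Aggregated over the whole population, ignoring age
cases_vax = sum(n * r for n, r, _, _ in strata.values())
n_vax = sum(n for n, _, _, _ in strata.values())
cases_unvax = sum(n * r for _, _, n, r in strata.values())
n_unvax = sum(n for _, _, n, _ in strata.values())
aggregate = efficacy(cases_vax / n_vax, cases_unvax / n_unvax)
# per-stratum efficacy is 0.90; the pooled estimate is only ~0.52
```

Stratifying by age, as the tweet notes, recovers the true per-group efficacy that the pooled number hides.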
2021-08-17 18:11:33 @KordingLab @kaznatcheev @neuroecology @neurograce Backprop (ie the chain rule) had been applied in the 60s in control theory and by Werbos in 1974 for ANNs But AFAIK the fact that backprop solves XOR was popularized in the Hinton, Rumelhart & Williams chapter in the PDP book https://t.co/Fzbgf70NSd

2021-08-17 13:48:52 @jackrlovell Hopefully soon...watch this space....

2021-08-17 13:48:17 @KordingLab well, i guess there is no reason to think that just bcs lightning struck a bunch of times in the same place (ANNs, CNNs, RL, etc) that it will keep striking there...

2021-08-17 13:12:32 @KordingLab if what is possible?

2021-08-17 13:04:02 CSHL is recruiting NeuroAI Scholars. PhD-level independent researchers trained in AI who spend 2 years at CSHL working with neuroscientists, applying insights from neuro to AI (not AI to neuro) Conversion to tenure track possible Spread the word! https://t.co/DWORFIbhJp

2021-08-17 12:41:55 @ryrobyrne @neuralreckoning retina. Only the ganglion cells spike (with a few exceptions).

2021-08-17 04:46:02 @JasonSynaptic @json_dirs @tollkuhn @AndrewHires @gershbrain Possibly. Although we know that (1) LTP-like mechanisms are involved in the storage of memories in several paradigms, including fear conditioning, hipp place learning, and aud operant conditioning So why posit a stochastic somatic mechanism?

2021-08-17 04:36:10 @JasonSynaptic @json_dirs @tollkuhn @AndrewHires @gershbrain The synaptic hypothesis is merely that memory of your teacher is available in the strengths of your synapses. I think you are arguing that these strengths might be somehow maintained in the nucleus, rather than locally at the synapses, right?

2021-08-17 04:33:13 @JasonSynaptic @json_dirs @tollkuhn @AndrewHires @gershbrain if you ask me to name my second grade teacher, and i answer 500 msec later, I think we can agree that the mechanism did not involve eg rapidly decoding the epigenetic state of chromatin, right?
2021-08-17 03:56:37 @JasonSynaptic @HistedLab @AndrewHires @gershbrain @NUro_science The developmental rules set up everything. But in a large circuit--eg a mammalian brain with 10^11 neurons--the tricky part is to select among the (10^11)^2 = 10^22 possible connectivity patterns.

2021-08-17 03:49:29 @JasonSynaptic @HistedLab @AndrewHires @gershbrain @NUro_science i believe that the genome encodes stochastic developmental rules that (along with activity) lead to circuits (synaptic strengths, including W=0) that mediate innate behaviors.

2021-08-17 03:45:46 @JasonSynaptic @AndrewHires @gershbrain i'm not sure what you mean by "only requirement". Eg i dont think i would say that mitochondria are a necessary requirement, but things break down pretty fast if you poison mitochondria.

2021-08-17 03:43:09 @JasonSynaptic @HistedLab @AndrewHires @gershbrain @NUro_science I dont think it's dogma. I think it's (1) there is a lot of evidence in favor of the idea

2021-08-17 03:36:48 @JasonSynaptic @AndrewHires @gershbrain If you are saying that not all changes in synaptic strength/connectivity are due to LTP/LTD, yes. But that is different from arguing that memories are due to something other than synaptic changes---changes whose underlying molec mechs are diverse.

2021-08-17 03:07:37 @JasonSynaptic @AndrewHires @gershbrain there have been gain and loss of function experiments in several systems. obv it is not possible to establish that it is true in ALL systems, but the evidence is strong in several systems including hippocampus, amygdala fear conditioning and auditory operant conditioning.

2021-08-17 02:52:45 In this classic 1967 letter to Science, they describe pounding an oscilloscope with a Sears ball peen hammer (Cat. No. 28B4652), passing the pieces through a 007-mesh nylon stocking, and sprinkling the pieces over another oscilloscope to transfer the persistence pattern https://t.co/zsTjlA1rLa

2021-08-17 02:19:42 @JamesMHyman @aidanhorner @gershbrain The experiment is: 2p in vivo to figure out single neuron receptive fields. Technically challenging but not outlandish.

2021-08-17 02:14:40 @gershbrain I don't know of any serious alternative to the synaptic theory of memory. People used to think RNA could encode memory (feed one planaria to another). This led to a brilliant letter in Science about "persistence transfer" btw oscilloscopes https://t.co/kjH0yzSnB2

2021-08-17 01:49:43 @sidgwicked @gershbrain Well in at least one case -- reversal of a learned auditory association--different synapses are involved https://t.co/zdfIpCGnEG

2021-08-17 01:46:22 @aidanhorner @JamesMHyman @gershbrain Topographic sensory maps provide one way to know which neurons/synapses responded to which stimuli. Another way is in vivo 2p imaging. In the context of behavior

2021-08-16 17:50:19 really cool animation of coronavirus entering a cell. https://t.co/XEFfWRpDlU

2021-08-16 17:36:49 @neurobongo interesting...any idea whether "neck antibodies" would be neutralizing?

2021-08-15 23:42:13 @KennethHayworth @DavidBeniaguev @KordingLab Yes there are acute brain slices. (The paper also has complementary in vivo experiments)

2021-08-15 21:38:17 @KennethHayworth @DavidBeniaguev @KordingLab we could use synaptic strength to distinguish whether high vs low sounds were paired with L vs R rewards with 100% accuracy. How about generalizing that to more complex sounds? https://t.co/GgxX1jtPdl

2021-08-15 19:13:10 @KennethHayworth @mameister4 @blsabatini @QiaojieXiong "Natural" is a tricky criterion to define...i guess whisker clipping is out, but would repeated stimulation of single whiskers (or combinations of single whiskers) be allowed?
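The combinatorial point in the thread above (10^11 neurons, hence (10^11)^2 potential connections) can be checked against the information capacity of the genome. The genome-size and bits-per-base figures below are standard ballpark values, not from the tweets:

```python
# Back-of-envelope: why the genome cannot specify connectivity
# explicitly, motivating stochastic developmental rules.
neurons = 1e11                        # ~10^11 neurons in a mammalian brain
potential_connections = neurons ** 2  # (10^11)^2 = 10^22 ordered pairs

genome_bases = 3.2e9                  # ~3.2 billion bp (ballpark)
genome_bits = 2 * genome_bases        # 2 bits per base -> ~6.4e9 bits

# Even one bit per potential connection overshoots the genome's
# capacity by roughly twelve orders of magnitude.
shortfall = potential_connections / genome_bits
```

This gap is the "genomic bottleneck" referred to later in the feed: the wiring diagram must be compressed into rules, not enumerated.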
2021-08-15 18:07:52 @mameister4 @blsabatini @KennethHayworth @QiaojieXiong It's tricky to restrict stim to single glomeruli with natural odors, right? But each nat odor elicits a unique pattern of glom. Do you think you have the resolution to distinguish more than 64 odors by post mortem glom activation signature?

2021-08-15 18:03:59 @blsabatini @KennethHayworth @QiaojieXiong @mameister4 Yeah in another subthread we discussed whisker clipping. There are about 2^5 whiskers per side so you could probably decode at least 6 bits per animal. Probably just vigorous stim would be enough.

2021-08-15 14:37:34 @DavidBeniaguev @KennethHayworth Yeah i think those gross differences should be easier to decode than subtle differences like "did the animal associate a high or low sound with the R reward?" But i feel like these subtler differences might bring us closer to what we mean by "decoding a memory"

2021-08-15 14:10:38 @DavidBeniaguev @KennethHayworth yes that's a good point. One could probably decode quite a few bits in a whisker trimming experiment...assuming a rat has about 2^5 whiskers per side, one could probably decode 5 bits pretty easily

2021-08-15 13:49:07 @DavidBeniaguev @KennethHayworth you have to structure the animal's experiences carefully...teach it either X or Y, and then look at synapses to figure out which it learned.

2021-08-15 13:42:21 @KennethHayworth @AspirationNeuro @petrzz @QiaojieXiong So maybe the criterion should be: How many bits per brain can you decode about a memory? This is well-defined, and rewards decoding complex memories.

2021-08-15 13:40:26 @KennethHayworth @AspirationNeuro @petrzz and @QiaojieXiong achieved 100% accuracy decoding whether animal learned to associate high or low sounds with R vs L reward, using strength of cortico-striatal synapses in acute brain slices So 1 bit of info about memory per brain. https://t.co/U15IReb7HK

2021-08-15 13:31:17 How well can we decode memories formed during a lifetime from the postmortem brain? @KennethHayworth proposes a $100K prize to find out! https://t.co/DnpA8YxcUN

2021-08-14 08:51:21 The true test of self driving cars: Italy. (Not Arizona) This sad (human) driver got his new Ferrari stuck in an alley. Good thing it had a sunroof or he'd still be there. https://t.co/WpjJ0CyPQa

2021-08-12 12:54:11 RT @xkcd: Average Familiarity https://t.co/KNPJMsq3vL https://t.co/TZC2bPgjp4

2021-08-03 13:53:14 @florian_krammer @apoorva_nyc also if you take fig 1 at face value, then you would conclude that there are considerable numbers of patients who are very infectious (red dots above Ct=30 and even 20) beyond day 20 or 30. So this would imply post-infection quarantine should last for >

2021-08-03 13:50:04 @florian_krammer @apoorva_nyc I thought that PCR is not a reliable marker for live virus or infectivity...as we discovered last spring, when there were confusing claims of "reinfections" after 1 month that were really just viral RNA getting cleared

2021-08-01 21:30:31 @DicksonWuML Wonderful write up!

2021-07-29 13:40:37 RT @USBrainAlliance: A new method called BARseq2, developed by the lab of @TonyZador in part with funds from the BRAIN Initiative, can map…

2021-07-27 13:51:09 @kendmil @KordingLab @tyrell_turing @recursus @ntraft @kaznatcheev @DavidBeniaguev @itamarlandau @PessoaBrain We are using "success" differently. You are bundling in some notion of value or long-term progeny. I was just pointing out that there are suddenly (last 10kyrs) a lot more humans than over most of our 100kyr history, even though we didnt get any smarter. So something changed.

2021-07-27 13:47:59 @kendmil @KordingLab @tyrell_turing @recursus @ntraft @kaznatcheev @DavidBeniaguev @itamarlandau @PessoaBrain Evolution is maximizing for something like progeny. it doesnt plan so it can't optimize for long-range future progeny, but it can still be said to be maximizing something. (But it is not maximizing by gradient descent).
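The "bits of memory per brain" criterion discussed in the thread above can be made precise: if an experiment lets the animal store one of N equally likely experiences and the decoder always identifies which one, it recovers log2(N) bits. A minimal sketch, using the thread's round numbers (the whisker count is a stated approximation, not a measurement):

```python
import math

# One high-vs-low sound association decoded with 100% accuracy:
# two equally likely alternatives -> log2(2) = 1 bit per brain.
bits_sound_task = math.log2(2)

# Whisker trimming: identifying WHICH single whisker (of ~2^5 per
# side) was trimmed recovers log2(32) = 5 bits.
whiskers_per_side = 32
bits_single_trim = math.log2(whiskers_per_side)

# Trimming an arbitrary subset: each whisker is independently
# trimmed or not, so a perfect decoder could recover up to 32 bits.
bits_subset_trim = math.log2(2 ** whiskers_per_side)
```

The appeal of the criterion is exactly this additivity: richer stored experiences translate directly into more decodable bits.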
2021-07-27 13:23:16 @kendmil @KordingLab @tyrell_turing @recursus @ntraft @kaznatcheev @DavidBeniaguev @itamarlandau @PessoaBrain Not sure how to compare evolutionary success across disparate species. But by any measure humans are by far the most successful ape in the ~20 Myr ape lineage. Yet the human pop explosion only began ~10-20 kyrs ago, after at least 100kyrs of barely surviving. So it wasnt just our smarts.

2021-07-27 02:42:52 @tyrell_turing @recursus @ntraft @kaznatcheev @KordingLab @DavidBeniaguev @itamarlandau @PessoaBrain 2) plenty of other animals like snow monkeys have cultural transmission. But the knowledge can't accumulate much without language

2021-07-27 02:41:22 @tyrell_turing @recursus @ntraft @kaznatcheev @KordingLab @DavidBeniaguev @itamarlandau @PessoaBrain 1) New knowledge can be acquired more quickly by sophisticated learning or slowly by trial and error. I don't know how much Stone age tech was by trial and error but I guess a lot. It took a long time for the tech to accumulate enough to be useful

2021-07-26 21:26:38 @tyrell_turing @recursus @ntraft @kaznatcheev @KordingLab @DavidBeniaguev @itamarlandau @PessoaBrain ...was that at some point our facility for language allowed us to start accumulating lots of useful knowledge across generations, thereby breaking the genomic bottleneck.

2021-07-26 21:25:48 @tyrell_turing @recursus @ntraft @kaznatcheev @KordingLab @DavidBeniaguev @itamarlandau @PessoaBrain i'm just not convinced that sophisticated learning offers such an evolutionary advantage. Eg until humans came along, primates were not really doing all that well. And even H. sapiens were struggling until about 20K years ago. What really tipped the balance for us was...

2021-07-26 21:22:48 @JohnKubie the 3rd tweet in the thread is a paper that estimates all biomass on earth... https://t.co/ekkPcnD5M1

2021-07-26 21:09:24 @tyrell_turing @recursus @ntraft @kaznatcheev @KordingLab @DavidBeniaguev @itamarlandau @PessoaBrain i think Baldwin pushes for efficient learning---learning as much as possible from fewer examples. Whether that is "sophisticated" or not is a separate question. Maybe it's just putting in the right priors, so that eg we can learn faces or languages faster...

2021-07-26 13:49:31 @KordingLab @DavidBeniaguev @ntraft @itamarlandau @tyrell_turing @PessoaBrain Stress could trigger increases in mutation rate. But that mostly affects something like the variance associated with the random walk through genomic space, not the direction.

2021-07-26 12:43:57 @DavidBeniaguev @ntraft @KordingLab @itamarlandau @tyrell_turing @PessoaBrain I think it would be tricky to selectively increase the mutation rate only in beneficial genes. How would an organism know in advance which ones? But it is possible to increase the overall mutation rate

2021-07-23 02:17:09 RT @doristsao: Interested in helping run a lab to understand visual perception in primates? The Tsao lab @UCBerkeley is looking for a new l…

2021-07-20 20:26:01 @rob_gulli @KordingLab agree! most perturbations either do not influence fitness, or do not do so enough to undergo selection.

2021-07-20 19:50:25 @SotoCCNLab @KordingLab i think what Gould-Lewontin are saying is that not every trait is optimal. But i dont think they are arguing against the idea that organisms are optimizing for viability

2021-07-20 16:31:33 @KordingLab @itamarlandau @tyrell_turing @PessoaBrain I think there is a confusion between the goal of optimizing a function, and the algorithms by which one does so. Gradient descent is a great (class of) algorithms, but I don't see how evolution has access to the gradient.

2021-07-20 16:28:33 @AthenaAkrami @KordingLab @drkjjeffery Is this just saying that evolution (tautologically) maximizes fitness, which we can call a function "f"?
2021-07-19 23:46:16 @AVMiceliBarone @LucaAmb @KordingLab I think the analog for an ANN would be to perturb every weight in the network by epsilon and accept the changes to W, all or none, depending on whether error declined. For large weight vectors this is pretty inefficient

2021-07-19 23:32:12 @AVMiceliBarone @LucaAmb @KordingLab For evolution to follow a gradient, the genes would somehow have to "know" to mutate collectively so as to maximize size or whatever. I don't think gene additivity creates a gradient. It just makes it easier to maximize size without one.

2021-07-19 23:18:12 @AVMiceliBarone @LucaAmb @KordingLab I don't understand why the multigenic nature of phenotypes would imply a gradient?

2021-07-19 21:15:24 @KordingLab @jeffrey_bowers @tyrell_turing @PessoaBrain Right. So each time we need to start with "many lifetimes of data." And we still end up with an artificial idiot savant, brilliant only at the task we trained it to perform. Wouldnt it make more sense to have all networks start with the basics?

2021-07-19 21:09:31 Another great quote from Moravec https://t.co/cYjijzg9ph

2021-07-19 21:09:08 RT @MaCroPhilosophy: @TonyZador Yes that. And moving a bit away from your original thread I've been inspired by this from a few pages later…

2021-07-19 21:08:23 @KordingLab @jeffrey_bowers @tyrell_turing @PessoaBrain We could try to put in a lot more prior knowledge...but for some reason, so far we have mostly chosen the very odd strategy of starting tabula rasa (or nearly so) every time we train a network. (I think bcs early CS people were heavily influenced by psych, where tab rasa is king)

2021-07-19 21:06:10 @itamarlandau @tyrell_turing @PessoaBrain @KordingLab i agree. "optimizes" means that there is a function to be minimized. "Gradient descent" is just one way to minimize a function.

2021-07-19 21:04:31 @jeffrey_bowers @tyrell_turing @PessoaBrain @KordingLab Can you clarify what "can account for" means in this context?
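The all-or-none perturbation scheme described in the thread above (jiggle every weight at once, keep the whole vector only if error declined) is easy to simulate. The quadratic loss, step size, and dimensionality below are arbitrary choices for illustration:

```python
import random

def loss(w):
    # stand-in objective: a simple quadratic bowl with minimum at 0
    return sum(x * x for x in w)

def perturb_and_select(w, eps=0.01, trials=2000, seed=0):
    # the scheme from the thread: perturb EVERY weight simultaneously
    # and accept the candidate all-or-none, only if the loss strictly
    # decreased; no gradient information is ever used
    rng = random.Random(seed)
    for _ in range(trials):
        cand = [x + rng.uniform(-eps, eps) for x in w]
        if loss(cand) < loss(w):
            w = cand
    return w

w0 = [1.0] * 50
w1 = perturb_and_select(w0)
# the loss does fall, but each accepted step is tiny and many
# candidate steps are wasted
```

In low dimensions this works tolerably; as the dimension grows, random joint perturbations align ever more poorly with the downhill direction, which is the inefficiency the tweet points out relative to following a gradient.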
2021-07-19 20:59:20 @Stefan_Mihalas evolution can be surprisingly fast. Whales evolved from a terrestrial quadruped (common ancestor of hippo) into something that looked pretty whale-like in about 10M years. Who'd've thought this quasi-tapir would end up competing with sharks and tunas? https://t.co/qitsSq9nBy

2021-07-19 20:44:10 @albertcardona @KordingLab yes, the pop approximately follows the gradient, by taking small random steps in 1B directions away from its current location. which is why we need 10000 moles of individuals. If each individual could follow the gradient, it'd be lots more efficient. But alas, no.

2021-07-19 20:39:33 @gchrupala @KordingLab bcs eg some kinds of mutations are more common than others. Eg there are mutational "hotspots," and indeed there are many kinds of mutations. So i guess it would be more accurate to say that it's random, but that the distribution of steps might be somewhat complex to specify.

2021-07-19 20:37:19 @KordingLab That might be the mean...but in a 1e9 dimensional space, finding the mean by a random walk is a lot less efficient than using the gradient.

2021-07-19 20:34:04 @albertcardona @KordingLab if you take random steps in a high D space (10^9 D space defined by your genome), and then only keep those organisms that survive, i dont think that's following a gradient. To follow the gradient, you'd need to preferentially take steps in specific directions, no?

2021-07-19 20:30:39 @MaCroPhilosophy I'm not sure what exactly you have in mind from Moravec, but i love this quote by him: https://t.co/x2B8VCSb6X

2021-07-19 20:25:12 @Stefan_Mihalas Every animal that ever existed had the potential to generate descendants who might have outcompeted us. Even the ones who died without progeny sampled some point in genomic space, so their failure to have descendants is informative...
2021-07-19 20:09:04 @tyrell_turing @PessoaBrain @KordingLab I think it is almost tautological to say that evolution stochastically selects (optimizes) for fitness. Tautological bcs fitness (in the context of some niche) is defined by what evolution selects for. Is this really up for debate? (A very dangerous question to ask on twitter).

2021-07-19 18:23:16 @kaznatcheev @KordingLab I guess intuitively if the landscape is really craggy, the gradient doesnt help much. so you are arguing that evolutionary landscapes are craggy?

2021-07-19 17:59:14 @KordingLab It surprises me if evolution can approximate a gradient. I always thought of it as a biased random walk, in which you take a random-ish step and then check if this was useful. Interested to learn more!

2021-07-19 16:22:31 @wholebrainsuite @petrzzz So the pnas paper says that 1e14 g of nematodes, so indeed at 1 ug/nematode this is 1e20. So that's consistent. And a lot of nematodes!

2021-07-19 15:43:57 @petrzzz how much does a nematode weigh?

2021-07-19 12:12:59 RT @KordingLab: Love this kind of calculation

2021-07-19 02:03:40 @joe6783 I was particularly interested here in the evolution of nervous system-based intelligence, so I started with the origin of multicellular animals. Also, I don't know how to estimate the number of plants. Plant biomass is apparently concentrated in trees, which are really big

2021-07-18 21:57:14 And of course i'm assuming that the animal biomass today is what it was 100M or 500M years ago. Who knows? 8/n

2021-07-18 21:57:13 Different animals have a different ratio of C to other stuff, but let's take humans as typical. We are about 20% carbon. So 2Gt of C --> 4/n

2021-07-18 21:57:12 How many animals have ever lived on this earth? Ie since the first multicellular organisms arose about 600M yrs ago, how many individuals have there been? My (very ballpark) estimate: A lot. Like maybe 10^25 or more individuals. Why is this an interesting question? 1/n

2021-07-14 16:54:21 My 15 minutes of fame https://t.co/4qmwF97kq7

2021-07-04 01:35:02 @DrYohanJohn "Biophysics of Computation: Information Processing in Single Neurons" by Christof Koch is my favorite. https://t.co/zMdKCBsWCx

2021-05-22 02:17:44 RT @MatteoCarandini: The first paper authored by @IntlBrainLab et al. is out today in @eLife ! 1 task, 7 labs, 140 mice, 5 million choices.…

2021-05-17 15:48:43 Looks like a fun workshop, June 17, on evolving neural networks. With Pamela Lyon, Luis Puelles, Paul Cisek, @GuangyuRobert, Linda Wilbrecht, @moccalin, and @criticalneuro Registration: https://t.co/55rQzhQ7UL https://t.co/EW9kLJjCyC

2021-05-15 17:05:09 @DylanRMuir I agree. One can never prove "randomness" (the absence of structure). In this case (olfaction), there were claims of "randomness," and it was only by increasing throughput with MAPseq/BARseq that we had the statistical power to detect the structure.

2021-05-15 13:12:45 team effort with @joe6783 @dinanthos @JustusKebschull and others not on twitter.

2021-05-15 13:11:03 @dinanthos @joe6783 @JustusKebschull

2021-05-15 00:53:26 Single neuron mapping of >

2021-05-12 23:22:13 @neurovium @NatureNeuro And the first proof-of-principle was a mere 4 years later. https://t.co/8ETP2J6TQs https://t.co/zl6zesbvrl

2021-05-12 23:19:45 @neurovium @NatureNeuro well, not quite 10 yrs...we first published the idea here. https://t.co/ZvprkBqTqr

2021-05-12 15:14:43 We had the idea of using DNA barcodes to determine neural connectivity > Also thanks to brave early students: @dysruption, @ipeikon &

2021-05-11 13:43:40 @WiringTheBrain @gottapatchemall Great idea. We'll get right on it!

2021-05-10 18:52:18 RT @NatureNeuro: Development of #BARseq2, a technique that simultaneously maps projections and detects multiplexed gene expression by in si…

2021-05-10 16:34:11 BARseq2 now published and hot off the presses! Great work by the team, including @JesseAGillis and @starfishlu, and the other authors who i believe are not on twitter. https://t.co/xt5quO00rp https://t.co/5iUu0kJihW
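The back-of-envelope thread above (animal biomass today, scaled over 600 Myr, giving "maybe 10^25 or more" individuals) can be written out with explicit assumptions. The standing-crop figure comes from the tweets; the generation time is my own rough guess, inserted only to complete the arithmetic:

```python
# Order-of-magnitude estimate of how many animals have ever lived,
# in the spirit of the thread. Numbers are deliberately crude.
nematode_biomass_g = 1e14   # ~1e14 g of nematodes alive today (per the thread)
nematode_mass_g = 1e-6      # ~1 microgram each (per the thread)
standing_crop = nematode_biomass_g / nematode_mass_g  # ~1e20 alive now

years = 6e8                 # ~600 Myr since early multicellular animals
generation_yr = 0.1         # assumed ~1 month generation time (a guess)
ever_lived = standing_crop * years / generation_yr
# comfortably exceeds the thread's ballpark of 10^25 individuals
```

Nematodes dominate the count because individual number is set by the smallest abundant animals, so the estimate is insensitive to how the remaining biomass is divided among larger species.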
2021-02-04 03:37:18 @neuro_data @WiringTheBrain what do you mean by "ok"? 2021-02-04 02:53:48 @WiringTheBrain We can certainly measure the entropy of a genome. And by any sensible measure a mouse is more complex than E. coli But there seems to be some implicit value judgment favoring complexity. Why? Is complexity intrinsically better ? My laundry list has more entropy than “E=mc2” 2021-02-02 23:27:23 RT @svoboda314: 'What should we be measuring if only we had a way to'? Provide your suggestions with the Allen Frontier's Group to help dir… 2021-02-02 18:58:09 @neurobongo @IEEEorg Fighting the good fight! 2021-02-02 14:41:06 @Datta_Lab @KordingLab @bradpwyble @blake_camp_1 @tyrell_turing @neuro_data I think the null hypothesis is that most protein trafficking etc are governed by "homeostatic" mechs not specific to brains, and not "learning" but without rigorous definitions it's tricky to claim it's not "learning". Some would claim that lungs "learn" to breath. 2021-02-02 14:08:51 Almost exactly one hundred years ago, a play premiered that introduced a significant new word to the world: "robot" Karel Čapek's R.U.R. opened on January 25, 1921, at the National Theater in what is now the Czech Republic https://t.co/hKFgyI2PXN 2021-02-02 14:06:08 @tyrell_turing @skepticalDre @bradpwyble @blake_camp_1 @KordingLab @neuro_data would love to see that! 2021-02-02 03:51:56 @KordingLab @tyrell_turing @skepticalDre @bradpwyble @blake_camp_1 @neuro_data maybe...waiting for your next paper to demonstrate this 2021-02-02 00:44:50 @tyrell_turing @skepticalDre @bradpwyble @blake_camp_1 @KordingLab @neuro_data But overparamerization is not magic. It doesn’t work with arbitrary function classes. Eg try generalizing with a 1000 degree polynomial (not spline). ANNs are special (though not unique) in the high param regime. So does adding channels behave the same? 
Don’t see why it would 2021-02-01 16:58:04 @cian_neuro @KordingLab @bradpwyble @blake_camp_1 @tyrell_turing @neuro_data There is plenty of turnover at the protein level, and yet these structures are highly regulated and often stable for a long time. Eg individual glutamate receptors turn over on a scale of hours to days, but synapses can retain their strength for days, weeks or longer... 2021-02-01 16:45:05 @KordingLab @bradpwyble @blake_camp_1 @tyrell_turing @neuro_data "Learned"? Or "are plastic"? If "learned" then what are the data driving this learning? We already know that even for just the synaptic weights there is a paucity of data from the outside world... 2021-02-01 05:39:07 @neurobongo My guess is a bit of both, but mostly it's due to heterogeneous populations. Prob(infection) is dose dependent. But I think it turns out that the cases are more likely to happen in certain populations, like old people. So the prob curve is shifted very left for vulnerable pops 2021-01-31 22:00:33 @CT_Bergstrom @evokerr I guess it depends on how 90% efficacy is achieved. I think this assumes a vaccinated person has a 10% chance of getting infected per encounter...but could also result if 10% of vaccinated people remain vulnerable, in which case efficacy remains independent of prevalence 2021-01-31 18:26:32 @trailrunner402 @PaulFMcKay i'm guessing that the infected cells die pretty quickly, due to overexpression. 2021-01-30 04:01:51 @Kel_Nem that is a great slide! 2021-01-30 01:41:38 @MollyJongFast It's not a fair comparison. Single shot J& J& 2021-01-29 15:19:21 @HelenBranswell @matthewherper single dose J& = apples and oranges. There is a study running for double dose J& 2021-01-29 15:00:38 RT @KeisukeYonehara: DANDRITE @dandrite at @AarhusUni is looking for two Nordic EMBL Group Leaders (generous 1.6 M EUR startup, 5+4 years)… 2021-01-29 14:54:29 RT @HelenBranswell: 2. 
The J& 2021-01-29 14:45:56 Immigrants founded the majority of startups, and routinely account for the majority of US Nobel Prizes in STEM. https://t.co/cgZOaco4TH https://t.co/s4ceuefkYx 2021-01-27 19:29:27 The CSHL MAPseq facility is looking for a technician. Previous experience working in mobio a must. Also: responsibility, flexibility, excellent organizational skills, the ability to work independently, and a strong work ethic. Please spread the word! https://t.co/DDJ44OE3PE 2021-01-27 05:11:27 @WokeFreeScience @cheraghchi @KenjiEricLee Most scientists i know didnt go into science for big bucks. And the immigrants who stayed did so bcs American labs historically offered more resources, allowing them to do better science. But admittedly the pay structure of computer scientists differs from neuroscientists 2021-01-27 05:06:55 We can also thank immigrants for the success of CSHL Neuroscience. https://t.co/Q8WquOYkpK 2021-01-27 05:03:25 @WokeFreeScience @cheraghchi @KenjiEricLee the US population was well educated but (as detailed in the article) Americans were not really contributing much https://t.co/LZm6q04SiJ https://t.co/rnWBuZPxDz 2021-01-27 04:59:40 @WokeFreeScience @cheraghchi @KenjiEricLee In discussions of the importance of immigrants to US STEM, people sometimes lump in children of immagrants eg Richard Feynman & 2021-01-27 04:53:50 @WokeFreeScience @cheraghchi @KenjiEricLee The original post was about why, starting about 100 yrs ago, immigrants enabled the US to go from being a backwater to a world leader in STEM So it seems we'd need to compare the compensation of Jewish scientists in 1930s Germany vs America. I dont have those data handy 2021-01-27 04:48:00 @eflegara @KenjiEricLee The original post was about the crucial contribution of immigrants and their children to STEM in the US, not the fraction (or absolute number) of immigrants who make contributions. 
So we'd want eg fraction of Nobels to immigrants, or fraction of silicon valley founders 2021-01-27 04:44:40 @WokeFreeScience @cheraghchi @KenjiEricLee also, the super top-heavy salary structure in the US is recent--3-4 decades. CEOs used to be paid salaries like doctors, not x100 doctor salaries 2021-01-27 04:43:16 @WokeFreeScience @cheraghchi @KenjiEricLee historically, most of the successful immigrant scientists were not the "elite" but refugees (eg of Nazi germany) or the children of immigrants from eg Eastern Europe. 2021-01-27 04:17:30 RT @calebwatney: But we should go beyond these reactive measures and take a more proactive approach to talent immigration. Today we grudg… 2021-01-26 22:11:14 @albertcardona @IlanaWitten @anne_churchland I'm not actually sure how the "1 hr seminar" (or 30 min conference talk) became the standard. I actually find reading a paper to be more efficient. OTOH i find discussions/debates to be really enlightening. And interviews like @pgmid 2021-01-26 22:02:31 @albertcardona @IlanaWitten @anne_churchland yes, except that i think it should awaken one's inner movie director/producer... seminar:theater :: zoominar:movie 2021-01-26 21:55:12 Apparently, Desi & We still treat zoom seminars as live seminars with a camera...maybe we need to reinvent the scientific zoominar? https://t.co/geQjdIrZnf https://t.co/1FvCCiu4Iq 2021-01-26 21:44:57 @anne_churchland @antoniahamilton No not you...that comment was after my (in person) Cosyne talk around 2012. I dont think anyone has gotten 10 minutes into one of my online talks without checking email 2021-01-26 21:40:47 @anne_churchland @IlanaWitten I think if online seminars continue to be the norm after covid, in the long-run what i will do is try to record a single polished version of my talk, like the studio version of an album. (Or maybe lipsync it. 
) Then maybe hang around for post-talk Q&A 2021-01-26 21:33:42 @antoniahamilton @anne_churchland The highest praise i ever got after a talk was when a colleague said: "wow, i didnt start checking my email until more than 10 minutes into your talk" 2021-01-26 21:32:25 @antoniahamilton @anne_churchland yeah that helps. But if the audience is more than about 10-20 people you can't really read the room. also, mostly because of the weird misalignment between screens and cameras you can never tell when people are looking at you or reading their email. 2021-01-26 21:29:59 @anne_churchland @IlanaWitten No, for me online seminars are still kind of like running on a treadmill instead of running outside...i'll do it if i have to on rainy days, and it's certainly more convenient, but no matter how much i get used to it, it's never fun like running outside. 2021-01-26 21:25:36 @anne_churchland I also dont much like watching talks online. However, i do enjoy the option of watching a pre-recorded talk at x1.75 speed, and then slowing it down and repeating as needed for the tricky parts.
2021-01-26 21:23:28 @anne_churchland I despise online seminars, especially as a speaker I enjoy speaking in front of a live audience but without realtime verbal and nonverbal feedback (I usually encourage questions during my talk) i lose inspiration The only thing worse for me is pre-recording my talk 2021-01-26 16:54:21 the success of American science over the last century is in large part due to attracting and welcoming great minds from around the world https://t.co/5AEFunlydH 2021-01-25 16:05:42 @_JaeeonLee_ @flickerfusion @basal_gang @SteinmetzNeuro @kennethd_harris @MatteoCarandini inputs from the auditory cortex to the auditory striatum are strong, are involved in auditory decision making (shown by @petrzzz): https://t.co/ql0WhGjfCB and undergo plasticity in a tonotopic fashion during learning (@QiaojieXiong & https://t.co/NqXVrGPzs3 2021-01-25 13:34:52 Super cool https://t.co/y2oPx3nMzT 2021-01-25 02:13:20 RT @GordonS56597971: We* dissected the excitatory cuneate → VPL → hand S1 → forelimb M1 circuits in the #mousehands “transcortical” loop.… 2021-01-24 16:34:02 To expand on this: My friend is an older scientist not on twitter who was helped by scholarships when he was a young man and would like to give back by donating a few $K to help "pay it forward"... https://t.co/h0dx5AJrPW 2021-01-24 16:19:38 @TrackingActions @pouget_alex @TrackingPlumes That is a very cruel and heartless tweet...were it not for the $*@!* pandemic I would be there...I blocked off the time on my calendar a year ago for a visit 2021-01-23 18:06:11 @IntuitMachine This looks like a great program, but it is NSF-funded and does not appear to be looking for charitable contributions https://t.co/6abqdCljRU 2021-01-23 17:39:37 Asking for a friend (really) who would like to make a small charitable contribution: "Do you know of a nonprofit that helps gifted high school graduates who are planning to focus on studying math/sciences at a 4-year college and who need financial support?" 
2021-01-23 16:50:11 @lukesjulson @yael_niv @JohnKubie Very true. And when i hit the wards--especially neurosurgery--i came to realize how useful full mastery of those ink blots could be... 2021-01-22 22:56:08 @lukesjulson @tyrell_turing @WiringTheBrain That’s like saying temperature is the wrong way of thinking about the brain. Information is a well defined measure that can be applied in the context of neuroscience. One might not be interested, and it might be applied wrongly. But it’s not a way of thinking 2021-01-22 22:42:54 @TurrigianoLab @tyrell_turing @WiringTheBrain I’m not really sure it’s a limitation exactly Eg information is like temperature, a simple quantity that can be defined rigorously But if you want to know how to dress for the day, you might want to know wind chill etc. But the “RealFeel” TM is a lot more subjective 2021-01-22 22:28:54 @WiringTheBrain @tyrell_turing I think there is confusion between the colloquial meaning of information (more like “meaning”) And the technical definition. 2021-01-22 19:48:18 @AthenaAkrami @lukesjulson "From Neuron to Brain" was one of my favorites...but i have the impression that it has grown bloated over the years (though not as much as Kandel) 2021-01-22 19:38:57 @JohnKubie @lukesjulson Embarrassingly, I failed the med school neuroanatomy course. I read Kandel & 2021-01-22 19:34:47 This question (Intro to Neuroscience for engineers/physicists/AI people new to the field) comes up a lot... I also wonder whether there are good online lecture series available (youtube or wherever)? https://t.co/gPvFCnSv9e 2021-01-22 19:20:54 @JohnKubie @lukesjulson I agree it's important.
I think Sterling & 2021-01-22 18:45:31 @lukesjulson and one of my favorites (though somewhat niche): Ion Channels -- Hille 2021-01-22 18:40:15 @lukesjulson Other obvious choices (all somewhat outdated): Spikes -- Bialek et al Theoretical neuroscience -- Dayan & Abbott Biophysics of Computation -- Koch Foundations of Cellular Neurophysiology -- Johnston 2021-01-22 18:36:56 @lukesjulson I've been pretty impressed with "Principles of Neural Design" by Sterling and Laughlin https://t.co/AYGEymKcd4 2021-01-21 16:12:56 @apoorva_nyc @lukesjulson @vineettiruvadi That was the question at the beginning of this thread--What testing is needed if you try to roll out a modified spike vaccine with a few nucleotides changed? I would guess just safety and antibody titer, which is quick. The challenge is phase 3 efficacy, but maybe not needed? 2021-01-21 15:55:01 @lukesjulson @apoorva_nyc @vineettiruvadi @10queues They say that it only takes a few weeks to design a new one. The rollout/manufacture on the other hand.... The real game changer next iteration will be nasal delivery 2021-01-21 15:50:38 @lukesjulson @apoorva_nyc @vineettiruvadi But if you "updated" a vaccine to optimally protect against any new strain, i think you would lose efficacy against the current dominant strain. Maybe a cocktail as you suggested. But i'm not sure if that's something they do even for flu, where there are clear uncertainties 2021-01-21 14:43:16 @lukesjulson @apoorva_nyc @vineettiruvadi although the reduced binding of Abs to mutant strains is statistically significant (each line goes down), i'm not convinced that it is of great practical significance. The range across people/mAbs seems a lot greater than the effect on each Ab. https://t.co/NBdzDJqhBa https://t.co/iz5ZQWyBd9 2021-01-21 05:21:14 @lukesjulson @vineettiruvadi @apoorva_nyc hmm. Seems like it's easy to find resistant mutations...the hard part is to find resistant mutations that are also as or more infectious.
Not sure i'd want to select for those even in a lab...especially since it's easy enough to modify the vaccine sequence once they emerge 2021-01-21 05:08:12 @lukesjulson @vineettiruvadi @apoorva_nyc "mix of mRNAs"? I thought the vaccine was a single pure sequence, decided on within a week or so of getting the original viral sequence, no? What you suggest sounds pretty cool, but maybe not the ideal strategy if you're racing against time... 2021-01-20 04:18:19 @K_dele @WSUPullman congratulations! 2021-01-19 23:17:42 @apoorva_nyc How do they approve the annual flu vaccine? They must do phase 1& I guess there must be some regulatory grey zone for approving tweaks w/o full phase 3? 2021-01-19 23:10:05 @apoorva_nyc how quickly could we pivot to a new vaccine? https://t.co/PbfQAKiJKM 2021-01-19 20:58:52 @mameister4 @UtopianCynic Long before people merely opened tabs on their browser as a substitute for reading papers, I trekked to the library (uphill both ways) and xeroxed papers, and then filed them (alphabetically by author) as a substitute for reading them. 2021-01-19 20:54:41 Inspired by @PamelaReinagel 's pioneering work https://t.co/qJXKiT7aAJ https://t.co/PngHCLCbTV 2021-01-19 20:48:47 Apparently, it would be quick (weeks) to make new mRNA vaccines to match the new variants. I am less clear on the regulatory issues? Would there need to be full phase 3 PCTs for efficacy? Or just smaller trials for safety? How does it work for the annual flu vaccines? 2021-01-19 20:16:54 @UtopianCynic https://t.co/cso49xy1Gh https://t.co/SmBxPNBTnm 2021-01-19 20:04:14 @UtopianCynic i'm pretty sure i still have it in my filing cabinet 2021-01-19 01:03:37 @SussilloDavid i agree that the latter belongs at Cosyne. Do you feel that it is being discouraged? Or even not encouraged enough? 2021-01-19 00:44:18 @memming @TrackingActions @SussilloDavid @AToliasLab @MatthiasBethge NAISys is currently slated to be at least every other year at CSHL, bcs CSHL doesn't typically do annual meetings. 
If there were interest one could imagine another venue for the odd years. 2021-01-19 00:42:08 @SussilloDavid i dont understand. How can the connection to neuro be "tenuous" and at the same time be "explicit ML work geared toward neuro"? Can you give an example? 2021-01-17 06:34:54 RT @mameister4: Frustrated that your mouse isn’t learning fast enough? Maybe give it something interesting to do and then get out of its wa… 2021-01-14 15:25:25 @erlichlab @sepalmerNeuro @neuroecology yes, that was the joke. Over the last week or so, a lot of bots were removed. 2021-01-14 14:57:36 @erlichlab @sepalmerNeuro But some say that we failed to pay the registration fees for the cosyne domain. We are checking into that theory as well. 2021-01-14 14:56:12 @erlichlab @sepalmerNeuro Turns out we are bots, shut down in the recent purge. We have been playing the long game. 2021-01-13 17:57:17 @JasonSynaptic @AOC @instagram @BU_Tweets did she give a talk? 2021-01-09 17:34:25 @DLBarack The Pfizer vaccine seems effective against the variants currently circulating. I dont think the other vaccines have been tested yet but this is promising. https://t.co/82lKANCT7W 2021-01-09 04:42:45 On Dec 21, Nostradamus predicts that terrorists will storm the capitol, and asks "How will January 6 be policed? We've already seen that the White House can effectively commandeer the DC police... Will Congress get the protection it needs?" https://t.co/4DhN25LO2d 2021-01-08 22:09:49 RT @jonmladd: One thing political scientists have found is that failed coups are often followed by successful coups. 2021-01-08 17:00:07 @dela3499 "I doubt rhyming is a product of biological evolution." I'm not sure what this means. Even if i believe that language is a product of evo, i'm not sure what it means to ask eg whether the sound "k" in particular is... 
2021-01-08 02:05:09 RT @ramencult: calling it “Orwellian” when Nature rejects my manuscript 2021-01-07 01:57:55 RT @edokeefe: JUST IN: “This is not news we deliver lightly,” @margbrennan says as she reports: Trump Cabinet secretaries are discussing in… 2021-01-07 00:51:03 RT @Mrhflrs: this is a really strange way to find out that cops know how to not use deadly force 2021-01-06 20:43:52 @janexwang https://t.co/aLavlO8AGX 2021-01-05 03:28:47 @KeyPaganRush I am assuming the number of panels is fixed. The free variable is the space per panel vs white space btw panels 2021-01-05 01:51:20 How much space should one leave between panels in a multipanel figure? Should one pack panels together, and use all the real estate as efficiently as possible? Or should one leave a bit of space between panels to enhance readability? 2021-01-04 04:35:38 @srigsri23 @jenkins_paul @JasonSynaptic it's incredibly hard to predict what research will have an impact. 50 yrs ago, some guy was interested in an irrelevant question: how life can exist in hot springs. This led to PCR, and the rest is history. this is the rule not the exception https://t.co/rGDzxwELyK 2021-01-04 00:54:23 RT @joe6783: having been born and raised in a communist dictatorship I am taken aback by how many Americans, at the highest levels of gover… 2020-12-31 06:42:38 @JasonSynaptic the problem with all of these "objective" measures is Goodhart's law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." This is especially true when eg citation counts etc are used for promotion and funding. 2020-12-31 06:27:06 RT @florian_krammer: This was exactly a year ago. But it feels more like ten years ago. 
https://t.co/3z9fW8xG61 2020-12-28 04:19:10 @daniela_witten Sometimes in practice it means "There is pretty much a 100% chance there will be a ~1 hr thunderstorm between 2 & (I say this as a runner who tries to time my run to avoid summer storms) 2020-12-22 19:18:40 @PozzoMillerLab @pgolshani @AndrewHires @mbeisen he made occasional exceptions for collaborations with other PIs like Tom Sudhof 2020-12-22 19:07:23 @AndrewHires @mbeisen Back in the day, J. Physiol was the premier journal. (People sometimes put an "abstract" into Nature, but JP was the serious one) They had a strict alphabetical order policy, which Chuck adopted for all journals, to avoid conflicts So I ended up as senior author: Stevens & 2020-12-22 16:48:40 @wolfejosh @ipeikon @ipeikon was already practicing his cold email skills 12 years ago, when he was still a Duke undergrad... luckily, I responded to his email, and then convinced him to come to CSHL as a grad student, and then to join my lab, where he changed the course of our research https://t.co/wC58qhq8j1 2020-12-22 14:27:25 @_stah Is this somehow corrected for the probability of having a confirmed case in the first place? Given that we dont do mass surveillance testing, presumably that probability varies with age. 2020-12-22 00:05:18 RT @lanewinfield: I always awkwardly struggle to get to the end call button on video calls. So I made this https://t.co/4z4zsxNkeQ 2020-12-21 17:16:32 Hey @DeWeeseLab Welcome to Twitter! I'm impressed you are able to read it on your flip phone. 2020-12-20 22:07:56 "Brined hippocampus" [...] "Simmer corticosteroids in the adult mouse cortex." https://t.co/URgE8yxA4q 2020-12-19 21:01:18 RT @GaneshNatesh: Came across this nice synthesis of a successful critical commentary attributed to Dennett, and it really resonated with m… 2020-12-17 23:22:50 RT @khats97: My perspective on the beautiful new Science paper by Kebschull et al on the evolution of the cerebellar nuclei is out.
Adding… 2020-12-17 18:00:38 I have never once pretended to miss my long gone days at the rig. Instead, what I purport to pine away for is programming, having apparently forgotten how incredibly frustrating it was. "Oh my god, what I would give to have the time to do all the analyses on that data set!" https://t.co/RaEqNPL98A 2020-12-17 13:37:31 RT @svoboda314: New preprint from the lab: A midbrain - thalamus - cortex circuit reorganizes cortical dynamics to initiate planned movemen… 2020-12-17 02:37:43 @blamlab The advent of cheap sequencing, crispr, viral techniques etc reduces the advantage of using genetically accessible organisms. I think the pendulum is swinging back 2020-12-15 23:29:44 RT @_RobToews: This is one of the really big, outside-the-box ideas in AI today: should machine learning models have more intelligence "bui… 2020-12-15 05:04:42 RT @KameronDHarris: Basically the argument of "Are we smart enough to know how smart animals are?" book https://t.co/25jtPxSOVb 2020-12-14 22:35:59 @AthenaAkrami @WisalKhwaja Solstice and equinox define the "astronomical" calendar. But it turns out there is a separate "meteorological" calendar that actually is more consistent with the weather, at least in NY. https://t.co/v3rR4B06l8 2020-12-14 22:03:22 @AthenaAkrami ? 2020-12-14 21:59:49 It's winter already! According to meteorologists: spring runs from March 1 to May 31 summer runs from June 1 to August 31 fall runs from September 1 to November 30 winter runs from December 1 to February 28 https://t.co/QyeeBJtPAE 2020-12-14 21:25:55 @zavaindar @_allstripes @Science37x Won’t they be able to sell every dose they manufacture for the next year or so at least? 2020-12-14 18:01:43 @BrianDDeAngelis yeah upon somewhat more careful reading of the paper they did not do anything to rule out the obvious possibility that these were just the usual artifactual chimeras one sees in high throughput sequencing data.
they have better in vitro data but no obv clinical relevance 2020-12-14 17:00:07 @jdeclue @adamscrabble so true! And that's why we relegate the defense of the country to a patchwork of state militias, instead of funding a national Defense bureaucracy 2020-12-14 16:58:24 @AthenaAkrami @xseedling @joshdubnau @JasonSynaptic @TrackingActions Yes, didnt intend that as an exhaustive list of symptoms 2020-12-14 01:43:19 @xseedling @joshdubnau @JasonSynaptic @TrackingActions yes, the standard explanation for recurrent PCR positivity is persistent fragments. @joshdubnau was raising the possibility that this integration result might provide an alternative mechanism for long-lasting symptoms like fatigue and shortness of breath 2020-12-14 01:17:47 In the unlikely event that (1) there are people who follow me on twitter who are vaccine skeptics Let me state unequivocally for the record that I am very eager to take the vaccine once it's my turn 2020-12-14 01:08:32 @joshdubnau @JasonSynaptic @TrackingActions yeah you would think someone would have noticed it if it were true. @AthenaAkrami , do long haulers test PCR positive for extended periods? 2020-12-14 01:03:54 @joshdubnau @JasonSynaptic @TrackingActions do long haulers continue to test positive? 2020-12-14 01:02:12 RT @joshdubnau: @JasonSynaptic @TonyZador @TrackingActions 1/6. Super cool if true. It could explain PCR positive results after recovery. I… 2020-12-14 01:01:57 @JasonSynaptic @mikemc43 @TrackingActions I agree. i think our tweets crossed. 2020-12-14 01:00:27 @mikemc43 @JasonSynaptic @TrackingActions even if this LINE1 mechanism for covid integration is real, seems like you need a virus to trigger high LINE1 expression. So maybe the vaccine would not elicit as much LINE1 expression? 2020-12-14 00:57:01 RT @svscarpino: Super-spreading is the most challenging & 2020-12-14 00:14:41 apparently they had had a falling out.
Nabokov felt that Jakobson was cozying up to Stalinist Russia: "Frankly, I am unable to stomach your little trips to totalitarian countries." https://t.co/V5MM0A1h65 2020-12-14 00:09:25 @JasonSynaptic @TrackingActions i'm counting on @joshdubnau, who works on transposons, to comment on all this... 2020-12-14 00:04:35 @JasonSynaptic @TrackingActions there are a lot of sequencing artifacts that can cause chimeras. But that is such an obvious caveat, and this is a top lab, so i assume they addressed this. But i havent read it carefully... 2020-12-13 23:58:33 Comment attributed to the great linguist Roman Jakobson, when discussing whether the great author Vladimir Nabokov should be recruited to the Harvard Lit Dept: “Gentlemen, even if one allows that he is an important writer, are we next to invite an elephant to be Professor of Zoology?” 2020-12-13 23:54:11 @TrackingActions @JasonSynaptic i dont think there is anything special about Covid that would explain why it would do this. If this turns out to be real, i bet it is a general feature of RNA viruses... not clear whether important or just a curiosity. but curious it indeed is... 2020-12-13 23:45:46 @TrackingActions As @JasonSynaptic suggests, it might not even be physiologically relevant. And even if it is, i think it is at most part of the explanation for why Covid is bad. Not sure i'm any more worried about it now than 30 minutes ago before i read this. it's just weird. 2020-12-13 23:43:21 Maybe not so novel? "More generally, our results suggest a novel aspect of infection possibly also for other common disease-causing RNA viruses such as Dengue, Zika or Influenza virus, which could be subject to retro-integration and perhaps affect disease progression" 2020-12-13 23:40:54 I wonder if this is unique to Covid? Maybe this also happens with all viruses sometimes but Covid is just so carefully studied...
2020-12-13 23:40:22 "This novel feature of SARS-CoV-2 infection may explain why patients can continue to produce viral RNA after recovery and suggests a new aspect of RNA virus replication." 2020-12-13 23:38:49 WTF!?! "we found chimeric transcripts [in] primary cells of patients" "SARS-CoV-2 RNAs can be reverse transcribed in human cells by RT from LINE-1 elements...integrated into the cell genome and subsequently be transcribed. ...LINE-1 expression was induced upon CoV-2 infection " https://t.co/U6mLsUPs9s 2020-12-13 23:07:07 @itamarlandau @Oren_Amsalem @YayonNadav @DavidBeniaguev can you summarize what he is saying? I am reluctant to sit through his 1 hr lecture (even though I enjoy his very nicely rolled Rs) for what i worry will turn out to be just standard ideas but re-packaged. Is there a new idea? 2020-12-13 22:51:43 @VenkRamaswamy @martisamuser @tyrell_turing @RaiaHadsell @neuromatch I think it would be really unfair to put it on Pirate Bay. This would discriminate against the many people from the older generation who do not know how to access Pirate Bay If one were going to do something so despicable, one might as well just post the talks on Youtube. 2020-12-13 17:09:46 @dkislyuk @DavidBeniaguev @itamarlandau @YayonNadav Point mutations are only a small part of the story. There are also a lot of duplications, deletions, etc. another important driver of genomic change is transposons, which account for 40-50% of our genome. A lot of "useless" DNA is transposons See eg https://t.co/e7SSJvuUuu 2020-12-13 14:25:53 @itamarlandau A lot is just brute force. There have been as many as 1e30 individual animals since multicellular life emerged > But, yes, the genotype/phenotype relationship is complex, and is itself subject to selection. Eg overall body size is one knob, arms/leg ratio prob another 2020-12-13 05:51:21 @itamarlandau brute force 2020-12-11 13:50:58 @WiringTheBrain when is the French translation going to be done?
2020-12-10 17:32:45 singing mice are super cool! https://t.co/MYb16jzugq 2020-12-09 01:31:35 Just some of the useful stuff you will learn from "The biomass distribution on Earth" https://t.co/LYjQYNfWXq 2020-12-09 01:28:04 and of the 550 total Gigatons of carbon of biomass on earth, 450 GtC are plants, and only 2 GtC animals. 2020-12-09 01:26:04 Basically all land mammals are either humans or livestock. And chickens cumulatively weigh 3 times more than all wild birds combined. Otoh, fish+arthropods are 71% of all animal biomass https://t.co/Q3EcGg1NQE 2020-12-08 02:16:52 RT @GeoRebekah: 1/ There will be no update today. At 8:30 am this morning, state police came into my house and took all my hardware and t… 2020-12-07 16:07:49 @curiouswavefn fair enough. But I still think that even if Dyson had formulated the rule himself, he almost would have been forced to attribute it to someone else. Somehow related to Groucho Marx's quip that he would never want to be a member of a club that would have him as a member. 2020-12-07 15:01:41 @aquadude1231 i dont know 2020-12-07 14:51:10 It looks like my NeurIPS invited seminar is here: Thu, Dec 10th, 2020 @ 20:00 – 22:00 EST I will discuss my critique of pure learning, and how insights from neuroscience can guide AI. https://t.co/dDbtE4fEkc 2020-12-07 14:15:28 @curiouswavefn So most likely Freeman Dyson himself formulated the law, and (following the law) attributed credit to his (possibly even fictional) supervisor Smeed. 2020-12-07 01:25:27 @EricTopol I wonder why people with access aren't getting the monoclonal prophylactically? I imagine it would work pretty well... 2020-12-05 17:31:30 Getting ready to resubmit a paper https://t.co/9XQbXYExgw 2020-12-05 02:32:51 @neurobongo It's the assumption that the levels (in neuroscience) are independent that makes his view controversial. 
Evolution scrambles the levels in ways that no CS undergrad would be allowed to get away with 2020-12-05 02:17:46 @analog_ashley oddly enough, so am I 2020-12-05 02:14:22 @dileeplearning @tyrell_turing Yeah, I guess to some extent it would be useful to apply Marr's 3 levels of analysis to understanding how my Chrome browser works. 2020-12-05 01:54:34 @dileeplearning @tyrell_turing yeah but it's worse than that. If you are reverse engineering a computer program, there's a good chance that the programmer actually thought about 3 distinct levels. but evolution really tangles those levels up. 2020-12-05 01:26:12 Are you currently a grad student in AI looking for a summer internship? Do you believe insights from neuroscience can lead to better AI algorithms? Then consider spending the summer doing neuro at scenic CSHL, just an hour from NYC. https://t.co/sbSMHWcii1 https://t.co/VnWHz57fnc 2020-12-05 01:16:42 @neuroecology @KordingLab @tyrell_turing @anne_churchland @MHendr1cks @andpru i dont think Marr Level 1 is boring. But i think it's really hard to test empirically. It's like evolutionary explanations--most are just-so stories. They are often really fun and satisfying to talk about, but rarely practical to test. 2020-12-05 00:48:05 @KordingLab @tyrell_turing @anne_churchland @MHendr1cks @neuroecology @andpru i think level 1 chauvinism is a corollary of the complete separability of levels: If you can choose to study any level independently of the others, why wouldnt a grand theorist focus on the highest level & 2020-12-05 00:33:37 @tyrell_turing it's Marr's claim that the levels are independent that was so seductive, but ultimately misguided. Without that claim, the 3 levels are basically just Programming 101: Define the problem, write the algorithm in pseudocode, code it up. 2020-12-04 23:15:33 @dileeplearning @tyrell_turing Yeah, except if maximizing g() is a lot easier than maximizing f(), then you might go with g().
and which algo is better might depend on eg whether your computer charges you x50 more for multiplication than addition. Or whether it is slow but has nearly infinite memory. 2020-12-04 23:13:07 @KordingLab @tyrell_turing Einstein is reputed to have said: "A model should be as simple as possible, but not simpler" Spherical cows are the canonical example of a model that is "too simple" https://t.co/u1O9LPwNwb 2020-12-04 23:00:24 @tyrell_turing @dileeplearning I read something about them in the Long Island Gazette. Sounds cool. How exactly do they prove that Marr’s levels are useful? 2020-12-04 22:06:34 @tyrell_turing @anne_churchland @MHendr1cks @neuroecology @andpru My recollection is that Marr definitely felt that way--that was definitely the impression i walked away with. But admittedly it's been quite a while since i read this book, so maybe i'm misremembering... 2020-12-04 21:22:23 @tyrell_turing i didnt realize anyone still took Marr's levels seriously. Don't get me wrong--i am Marr's biggest fan. I am a neuroscientist today because at an impressionable age i read "Vision" and switched fields But separation of levels is not a useful fiction. It is a spherical cow. 2020-12-04 18:50:29 RT @AstroKatie: A corollary of “any sufficiently advanced technology is indistinguishable from magic” is that actually coaxing that technol… 2020-12-04 14:08:10 @HernocLs @OdedRechavi @madamscientist @arjunrajlab @biorxivpreprint Yes it’s a new model in which all papers would first be uploaded to biorxiv or similar, and then evaluated post publication. Potentially even by multiple “content aggregators”. Decouple dissemination from evaluation 2020-12-04 14:05:20 @HernocLs @OdedRechavi @madamscientist @arjunrajlab It may take a while. But it’s a good sign that elife, which is leading the charge, is backed by HHMI, the Wellcome and Max Planck, 3 of the biggest foundations funding “elite” scientists.
Elife is in the purple circle 2020-12-04 14:00:38 @HernocLs @OdedRechavi @madamscientist @arjunrajlab In fact I currently rely a lot on my students and postdocs to identify interesting new papers for me to read 2020-12-04 13:58:20 @HernocLs @OdedRechavi @madamscientist @arjunrajlab Curators at the Louvre need access to the louvre so depend on the existing power structure. But there is no barrier to entry to become an influencer. I would happily follow a bright grad student if they consistently selected the most interesting articles. 2020-12-04 13:19:50 @madamscientist @arjunrajlab @OdedRechavi If all review is post publication, then editors and journals are no longer gatekeepers and no longer have the power to block publication. Journals will just express opinions. They will be “influencers.” 2020-12-04 01:39:48 wow, threat-induced cardiac arrest in flies?!?? https://t.co/UcwFL8cyWQ 2020-12-02 03:03:16 @lpachter I dont understand this. Are you saying that it's not solved bcs it's not a well-defined problem? Or are you positing that there exists a simpler "solution" to be discovered that will not only provide the right answer but also provide us with "understanding"? 2020-12-01 23:19:49 RT @LoogerL: Yes! These ARE the GCaMPs you're looking for. Much faster, much better SNR, bright, no signs of toxicity (relative to current… 2020-12-01 22:10:13 RT @kaznatcheev: I just can't stop watching this visualization. This really highlights how a good representation makes algorithms easy. htt… 2020-12-01 21:19:23 @behrenstimb https://t.co/kA2kBQHXrX 2020-12-01 21:15:18 woo hoo! The future has finally arrived! https://t.co/Q3HeNJpRiS 2020-12-01 21:12:26 @DavisSawyer2 @GaryMarcus @WiringTheBrain btw we've also seen this movie with machine translation, but the ending was different. I now routinely "read" newspapers from around the world, something utterly impossible 5 years ago. 
2020-12-01 20:16:05 @GaryMarcus @WiringTheBrain @DavisSawyer2 also, there is no reason to think that 90% is the ceiling. presumably they and others will figure out tweaks to improve it, possibly by a lot. 2020-12-01 20:14:34 @GaryMarcus @WiringTheBrain @DavisSawyer2 it totally depends on what you want to do. A lot of biology is based on generating hypotheses through a (possibly inaccurate) screen, and then testing a small subset carefully. So 90% could be great for many purposes. 2020-12-01 18:46:28 @GaryMarcus @DavisSawyer2 my understanding is (if claims pan out) it will have a major impact on structural biology, and might also have major implications for drug discovery. In biology, this is about as close to "change everything" as you get in a single discovery. Maybe not quite crispr, but close. 2020-12-01 16:26:51 @GaryMarcus @DavisSawyer2 Can you list a few technical breakthroughs this year that you consider more significant and less oversold, from any field? (excluding anything Covid-related) 2020-12-01 15:26:59 @MingsChirps @HsinHaoYu Social interactions have been one of the main drivers of the evolution of intelligence in primates. It helps determine success in cooperation and competition. And ultimately it drove the evolution of language, which has been central to human success. 2020-12-01 15:10:43 @GaneshNatesh @manuelbaltieri A structural biologist colleague commented, half jokingly: "This is amazing. I'm going to lose my job, soon!" 2020-12-01 15:09:04 @GaneshNatesh @manuelbaltieri yes, i think that is the standard definition of "solving" a protein structure. And by that definition, Deep Mind has apparently succeeded (subject to caveats about peer review, maybe not all classes of proteins, etc) 2020-12-01 02:04:18 @GaneshNatesh @manuelbaltieri i dont know what it would mean for it to be "solved"... 2020-11-30 20:10:14 @lukesjulson @cian_neuro @seanescola Did it actually solve the structure of any transmembrane proteins?
2020-11-30 18:58:14 amazing https://t.co/gC4AkX8xtX 2020-11-28 01:42:46 RT @togelius: Some people seriously believe in artificial general intelligence, and the arrival of superintelligence and a singularity. In… 2020-11-27 16:31:24 @FredBarrettPhD @neuro_data @criticalneuro @blamlab @ZennaTavares @IrisVanRooij I guess i've been a biologist for too long to remember that many people imagine that humans represent a qualitative break with our evolutionary past, and so need to be reminded that animals "may" (??!??) share "some" (???!) characteristics with us... 2020-11-27 00:30:59 @criticalneuro @blamlab @DLBarack @neuro_data @ZennaTavares @IrisVanRooij yes there is nothing more satisfying than a good argument-by-analogy...it resolves nothing, but at least it shows you're paying attention to your interlocutor 2020-11-27 00:23:05 @blamlab @DLBarack @neuro_data @criticalneuro @ZennaTavares @IrisVanRooij Maybe more like how Janet Sobel used an earlier "version" of the drip painting technique that Pollock eventually became famous for https://t.co/t0714YQv9Q 2020-11-27 00:07:53 @blamlab @neuro_data @criticalneuro @ZennaTavares @IrisVanRooij i am not confident that we will get all the way to human cognition this way. Mouse cognition is not our final destination. But i know of no other reliable way of building a firm foundation...Other approaches run the risk of trying to build a ladder to the moon. 2020-11-26 23:55:07 @blamlab @neuro_data @criticalneuro @ZennaTavares @IrisVanRooij Dear John, Congratulations! Santa came early this year, and will be delivering an amazing book about animal cognition next week. "Are we smart enough to know how smart animals are?"
https://t.co/iKWuSmh7G9 2020-11-26 23:36:57 @neuro_data @criticalneuro @blamlab @ZennaTavares @IrisVanRooij To the extent that my entire belief system about something i have spent my entire professional life thinking about can be summed up as part of a tweet that also attempts to summarize the beliefs of several other people---yes, good job, i agree. 2020-11-26 20:05:43 @lukesjulson @tyrell_turing I think people who don’t check their email should set their auto reply to inform people. It would save me the effort of sending out 3 more follow up emails. When I really need to reach someone, I sometimes threaten to call. Often elicits a response in 30 seconds 2020-11-26 00:32:06 I'm waiting for someone to give the updated version of this lecture: "Can Shannon learn causality?" 2020-11-26 00:31:34 Years ago i saw an amazing talk by Tali Tishby titled "Can Shannon learn semantics?" (Shannon as in Claude, father of info theory) I think GPT-3 kind of answers that: "not quite (but surprisingly close") https://t.co/JIGm5C6JBG 2020-11-25 04:15:41 @srikosuri Nature: 40 editors + 50 reporters (for "front matter")...not typical of most journals. But yes the traditional model of journals as gatekeepers is broken. Let's move to content aggregators and post-publication review Here is one model, from CS. https://t.co/Hzo93EXb35 2020-11-25 03:48:43 @_onionesque How about Douglas Prasher who cloned GFP, which won a Nobel for Tsien &amp But Prasher lost funding, left academia, and was driving an airport courtesy van at the time of the Nobel. https://t.co/5XV9Nr1r46 2020-11-24 13:16:59 super useful tweetorial on vaccines https://t.co/njbuYhq0yP 2020-11-22 23:18:14 @YasirSalas @yael_niv I think you maybe you're on to something, but maybe a different reason. Cuteness is mostly just juvenile features (also preserved by domestication) i wonder if marsupials branched off before placentals acquired some of the "adult" characteristics? 
2020-11-22 21:17:23 @YasirSalas @yael_niv So your evolutionary theory is: there is a natural tendency to be cute, but it is somehow offset by predators? I guess the idea is that predators pick off the cutest? Interesting! 2020-11-22 20:09:31 @yael_niv Why are so many of the cutest animals marsupials? 2020-11-21 01:53:55 @GaryMarcus my views on the necessity of closing schools were also shaped by this article by @ProfEmilyOster https://t.co/5XRSToPjTI 2020-11-20 23:07:43 @AdamMarblestone what could possibly go wrong? 2020-11-20 22:04:15 @jmourabarbosa @joe6783 @skrish_13 @CSHL @RaiaHadsell @tyrell_turing @neuromatch That said, worth remembering that cshl is a nonprofit. The meetings &amp Yet cshl has retained everyone—-no layoffs. Kudos to them for prioritizing this. 2020-11-20 22:00:13 @jmourabarbosa @joe6783 @skrish_13 @CSHL @RaiaHadsell @tyrell_turing @neuromatch Yes as co-founder of cosyne I have leverage to help formulate sensible policies like that. Much trickier when trying to change a half century of tradition 2020-11-20 20:07:13 @jmourabarbosa @joe6783 @skrish_13 @CSHL @RaiaHadsell @tyrell_turing @neuromatch CSHL has been hosting weekly conferences for decades, mostly in biology. It turns out that biologists dont want results shared prior to publication, so CSHL promises to keep them secret. I have tried to get them to make an exception, but that's a lot of tradition to overcome 2020-11-20 05:02:07 RT @navlakha_lab: New post-doc training program in machine learning at @CSHL! Come work with us! @a_dobin @EngelTatiana @JesseAGillis @jb… 2020-11-20 04:14:17 @apoorva_nyc @profshanecrotty @SetteLab I completely agree that peer review does not guarantee quality I also agree that, on average, most papers are average, not excellent. Papers can true and even worthwhile, without being very interesting Does that make them "a waste of time, paper and effort"? 
2020-11-19 23:33:40 @GaryMarcus but if you're asking about the effect of family structure, it's a hypothesis (people tend not to mask around their families) backed up by a bunch of studies. It was one reason Italy was hit so hard early on: Grandparents saw grandkids daily https://t.co/xlMpnKeH4i 2020-11-19 23:25:49 @GaryMarcus here is eg a careful study of the effect of reopening in Spain. https://t.co/Tu5zE2pWMf https://t.co/acQ3SiGtJZ 2020-11-19 23:16:20 @GaryMarcus a reference to the idea that correlation is not the same as causality? I think i must have read it in one of your critiques of machine learning, but i'll check 2020-11-19 23:12:12 @GaryMarcus This is a pretty in depth review. The upshot: a lot of variability among schools, and it matters Eg my 5th grader masks all day, protected by plexiglass, good ventilation, in a pod. Cases are up locally but (so far) not a single cluster traced to school https://t.co/xGJrYTxivM 2020-11-19 22:31:27 @GaryMarcus Also young kids on avg live in larger households, so any community increase will be spread back to them via older sibs or parents. Household structure is a key factor in Covid spread. 2020-11-19 22:28:17 @GaryMarcus Correlation is not causality. For causality you need contact tracing. Kids are clearly susceptible but there is good (but mixed) evidence that they do not spread as effectively. 2020-11-19 22:08:35 @GaryMarcus @tyrell_turing @DrLeanaWen In NYC, i think it's pretty clear that bars/restaurants/gyms should be shut down before K-8. 2020-11-19 22:05:57 @GaryMarcus https://t.co/38R9KuMODH 2020-11-19 22:04:22 @GaryMarcus importantly https://t.co/1BncD9DJap 2020-11-19 22:03:48 @GaryMarcus but then you also have countless reviews like this https://t.co/4AzJgdkMdQ 2020-11-19 18:02:30 @jtdudman @GaryMarcus @DrLeanaWen hmm. 
it would be interesting to understand why this is at odds with most of the other studies, which have generally concluded minimal transmission by kids &lt 2020-11-19 17:33:03 @GaryMarcus @DrLeanaWen "educational institutions"...i think that lumps together universities with high schools with k-8. Although the data aren't completely consistent, it looks like k-8 is mostly safe, universities clearly no, and high school probably mostly. 2020-11-19 16:32:02 @pait i was reminded by helping my kids with their math homework. 2020-11-19 16:18:44 RT @joe6783: #NAISys meeting @CSHL was a total blast! What an incredible group of people! Many thanks to @RaiaHadsell, Blake Richards @tyre… 2020-11-19 15:59:23 I was reminded today of Lockhart's Lament, a brilliant account of why math instruction causes so many kids to lose interest in math. https://t.co/XxGnTlh0vZ https://t.co/S0ptQP3V7l 2020-11-19 14:31:13 @DrugGovoruna @DavidBeniaguev i think that's an example of the more general issue that stats sometimes do not accurately reflect the thing you are trying to measure (in this case, mortality). Goodhart's law is the special case where someone hacks a stat that is being used as a proxy to motivate behavior 2020-11-19 05:39:03 Numbers and weight both correlated well in a pre-central plan scenario. After they are made targets (in different times and periods), they lose that value. 2020-11-19 05:38:28 The most famous examples of Goodhart's law are soviet factories which when given targets on the basis of numbers of nails produced many tiny useless nails and when given targets on basis of weight produced a few giant nails. 
https://t.co/diQtuAx5lS 2020-11-19 00:59:53 @DavidBeniaguev If nations actually cared to hack life expectancy stats it would be trivial, since there is always a lot of leeway in how you collect a statistic 2020-11-18 23:44:10 TIL Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure" In other words, as soon as you use an "objective" measure to assess performance, compensation etc, people will figure out how to game the system. https://t.co/HR9ZHH7JPd 2020-11-18 03:11:09 RT @michaelmina_lab: HERE IS THE PLAN TO GET US OUT OF THIS #COVID19 WAR • NO lockdowns • NO waiting for vaccines • Reverses cases in wee… 2020-11-18 00:48:42 RT @SaraASolla: UC Davis is making a major expansion in Computational Neuroscience and related fields. Their current faculty search in Comp… 2020-11-17 01:54:37 @dileeplearning @KordingLab Yes, Minsky can be blamed for killing most funding from late 60s to the mid-80s. The comeback started with Hopfield 1982/84. Those papers drew lots of physicists into the field. And then the real resurgence came with the PDP books in 1986 And then the field went dormant again 2020-11-17 01:30:53 @GoardMichael Even better: Fund trainees (students/pds)--salaries + research funds. Who better to pick the best PIs than the people whose careers literally depend on it? Added benefit: It would change the power dynamic. PIs would have to be nice to get funded by their postdocs 2020-11-16 01:57:26 @KordingLab he manages to tell the story without quite saying that Minsky and Papert killed neural network research by promising a shortcut with symbolic AI. (Apparently Minsky and Rosenblatt were high school rivals). It was the symbolic AI program that failed rather spectacularly 2001-01-01 01:01:01
