Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Not Available
The Expert's latest posts:
2023-05-05 16:01:00 Join us at @AI_KAUST! I seek #PhD &
2023-02-09 17:00:30 Instead of trying to defend his paper on OpenReview (where he posted it), @ylecun made misleading statements about me in popular science venues. I am debunking his recent allegations in the new Addendum III of my critique https://t.co/S7pVlJshAo https://t.co/Dq0KrM2fdC
2023-01-12 08:16:31 @yannx0130 sure, see the experiments
2023-01-12 08:00:15 Re: more biologically plausible "forward-only” deep learning. 1/3 of a century ago, my "neural economy” was local in space and time (backprop isn't). Competing neurons pay "weight substance” to neurons that activate them (Neural Bucket Brigade, 1989) https://t.co/Ms30TkUXHS https://t.co/0UhtPzeuKJ
2023-01-10 16:59:30 RT @hardmaru: New paper from IDSIA motivated by building an artificial scientist with World Models! A key idea is to get controller C to g…
2023-01-03 17:00:33 We address the two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Learning one abstract bit at a time through self-invented (thought) experiments encoded as neural networks https://t.co/bhTDM7XdXn https://t.co/IeDxdCvVPD
2022-12-31 13:00:04 As 2022 ends: 1/2 century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) much later called the Hopfield network (based on the original, century-old, non-learning Lenz-Ising recurrent network architecture, 1920-25) https://t.co/wfYYVcBobg https://t.co/bAErUtNdfN
2022-12-30 17:00:07 30 years ago in a journal: "distilling" a recurrent neural network (RNN) into another RNN. I called it “collapsing” in Neural Computation 4(2):234-242 (1992), Sec. 4. Greatly facilitated deep learning with 20+ virtual layers. The concept has become popular https://t.co/gMdQu7wpva https://t.co/HmIqbS9lNg
2022-12-23 17:00:04 Machine learning is the science of credit assignment. My new survey (also under arXiv:2212.11279) credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 deep learning survey): https://t.co/MfmqhEh8MA P.S. Happy Holidays! https://t.co/or6oxAOXgS
2022-12-20 17:00:09 Regarding recent work on more biologically plausible "forward-only" backprop-like methods: in 2021, our VSML net already meta-learned backprop-like learning algorithms running solely in forward-mode - no hardwired derivative calculation! https://t.co/zAZGZcYtmO https://t.co/zyPBD0bwUu
2022-12-14 08:00:05 Conference and journal publications of 2022 with my awesome PhD students, PostDocs, and colleagues https://t.co/0ngIruvase https://t.co/xviLUwEec0
2022-12-11 14:48:24 The @AI_KAUST booth has moved from #NeurIPS2022 (24 KAUST papers) in New Orleans to #EMNLP2022 in Abu Dhabi. Visit Booth#14. We keep hiring on all levels, in particular, for Natural Language Processing! https://t.co/w7OAFNlFZ9
2022-10-28 07:26:29 Present at the 2nd KAUST Rising Stars in AI Symposium 2023! Did you recently publish at a top AI conference? Then apply by Nov 16, 2022: https://t.co/ojlCmhWZc1. The selected speakers will have their flights and hotel expenses covered. More: https://t.co/6nmhbLnxaA https://t.co/cUDnsWmICn
2022-10-24 16:00:21 Train a weight matrix to encode the backpropagation learning algorithm itself. Run it on the neural net itself. Meta-learn to improve it! Generalizes to datasets outside of the meta-training distribution. v4 2022 with @LouisKirschAI https://t.co/zAZGZcYtmO https://t.co/aGK8h8n0yF
2022-10-19 15:51:12 30 years ago in NECO 1992: adversarial neural networks create disentangled representations in a minimax game. Published 2 years after the original GAN principle, where a "curious" probabilistic generator net fights a predictor net (1990). More at https://t.co/GvkmtauQmv https://t.co/b1YcHo6wuJ
2022-10-12 07:37:00 RT @globalaisummit: Dr. Jürgen Schmidhuber had a keynote on the evolution of AI, neural networks, empowering cities, and humanity at the #G…
2022-10-11 15:51:22 30 years ago in NECO 1992: very deep learning by unsupervised pre-training and distillation of neural networks. Today, both techniques are heavily used. Also: multiple levels of abstraction &
2022-10-03 16:03:18 30 years ago: Transformers with linearized self-attention in NECO 1992, equivalent to fast weight programmers (apart from normalization), separating storage and control. Key/value was called FROM/TO. The attention terminology was introduced at ICANN 1993 https://t.co/m0hw6JJrbS https://t.co/8LfD98MIF4
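The equivalence claimed above can be made concrete. A minimal NumPy sketch (toy random data; dimensions and variable names are illustrative assumptions, not from the original papers): unnormalized linear self-attention computes the same outputs as a fast weight programmer that writes rank-1 outer-product updates into a weight matrix and then queries it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4   # key/value dimension
T = 6   # sequence length

# Per-step keys, values, queries (keys/values were called FROM/TO in 1992).
K = rng.standard_normal((T, d))
V = rng.standard_normal((T, d))
Q = rng.standard_normal((T, d))

# View 1: linearized (softmax-free) causal self-attention.
attn_out = np.stack(
    [sum((Q[t] @ K[s]) * V[s] for s in range(t + 1)) for t in range(T)]
)

# View 2: a fast weight programmer. At each step the "slow" net writes a
# rank-1 update W += v k^T into the fast weight matrix W, which is then
# applied to the current query.
W = np.zeros((d, d))
fwp_out = []
for t in range(T):
    W += np.outer(V[t], K[t])   # program the fast weights
    fwp_out.append(W @ Q[t])    # query them
fwp_out = np.stack(fwp_out)

# Both views produce identical outputs, term by term:
# sum_s (q_t . k_s) v_s  ==  (sum_s v_s k_s^T) q_t
assert np.allclose(attn_out, fwp_out)
```

Separating storage (the fast weight matrix) from control (the slow net emitting keys and values) is exactly the split the tweet describes; softmax normalization is the only piece missing from this linearized form.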
2022-09-10 16:00:03 Healthcare revolution! On this day 10 years ago—when compute was 100 times more expensive—our DanNet was the first artificial neural net to win a medical imaging contest: the 2012 ICPR breast cancer detection contest. Today, this approach is heavily used. https://t.co/ow7OmFKxgv
2022-09-07 16:04:37 3/3: Our analysis draws inspiration from a wealth of research in neuroscience, cognitive psychology, and ML, and surveys relevant mechanisms, to identify a combination of inductive biases that may help symbolic information processing to emerge naturally in neural networks https://t.co/ItJo6hcK4R
2022-09-07 16:03:11 @vansteenkiste_s 2/3: We present a conceptual framework that connects these shortcomings of NNs to an inability to dynamically and flexibly bind information distributed throughout the network. We explore how this affects their capacity to acquire a compositional understanding of the world. https://t.co/uXFlwoRGoR
2022-09-07 15:53:25 1/3: “On the binding problem in artificial neural networks” with Klaus Greff and @vansteenkiste_s. An important paper from my lab that is of great relevance to the ongoing debate on symbolic reasoning and compositional generalization in neural networks: https://t.co/pOXGs89nrq https://t.co/vTOnyht5Hz
2022-08-10 16:04:47 Yesterday @nnaisense released EvoTorch (https://t.co/XAXLH9SDxn), a state-of-the-art evolutionary algorithm library built on @PyTorch, with GPU-acceleration and easy training on huge compute clusters using @raydistributed. (1/2)
2022-07-22 13:25:25 With Kazuki Irie and @robert_csordas at #ICML2022: any linear layer trained by gradient descent is a key-value/attention memory storing its entire training experience. This dual form helps us visualize how neural nets use training patterns at test time https://t.co/sViaXAlWU6 https://t.co/MmeCcgNPxx
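The duality above can be sketched numerically. In this toy NumPy example (random inputs and fixed stand-in gradient signals, an assumption for illustration; in real training each gradient depends on the current weights), a linear layer after gradient descent gives the same output as attention over its stored training pairs, without ever materializing the updated weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, n = 3, 2, 5
lr = 0.1

W0 = rng.standard_normal((d_out, d_in))   # initial weights
X = rng.standard_normal((n, d_in))        # training inputs (the "keys")
G = rng.standard_normal((n, d_out))       # per-step output gradients (toy stand-ins)

# Primal form: accumulate the rank-1 gradient-descent updates into W.
W = W0.copy()
for x, g in zip(X, G):
    W -= lr * np.outer(g, x)              # standard GD update of a linear layer

x_test = rng.standard_normal(d_in)
primal = W @ x_test

# Dual form: never form W. Attend over stored (key = training input,
# value = -lr * gradient) pairs: W x = W0 x - lr * sum_i (x_i . x) g_i.
scores = X @ x_test                       # unnormalized attention scores
dual = W0 @ x_test - lr * (G.T @ scores)

assert np.allclose(primal, dual)
```

The dual form makes the paper's point visible: the attention scores show exactly which training patterns the layer consults at test time.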
2022-07-21 07:08:20 Our neural network learns to generate deep policies that achieve any desired return: a Fast Weight Programmer that overcomes limitations of Upside-Down Reinforcement Learning. Join @FaccioAI, @idivinci, A. Ramesh, @LouisKirschAI at @darl_icml on Friday https://t.co/exsj0hpHp4 https://t.co/aClHLFUdfJ
2022-07-19 07:10:15 @ylecun In 2011, our DanNet (named after my postdoc Dan Ciresan) was 2x better than humans, 3x better than the CNN of @ylecun’s team, and 6x better than the best non-neural method. LeCun’s CNN (based on Fukushima’s) had “no tail,” but let's not call it a dead end https://t.co/xcriF10Jz7
2022-07-19 07:00:58 I am the "French aviation buff” who touted French aviation pioneers 19 years ago in Nature &
2022-07-11 07:01:27 PS: in a 2016 @nytimes article “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’,” the same LeCun claims that “Jürgen … keeps claiming credit he doesn't deserve for many, many things,” without providing a single example. And now this :-)https://t.co/HO14crXg03 https://t.co/EIqa745mv0
2022-07-08 15:55:20 @ylecun I have now also officially logged my concerns on the OpenReview: https://t.co/3hpLImkebg
2022-07-07 07:01:42 Lecun (@ylecun)’s 2022 paper on Autonomous Machine Intelligence rehashes but doesn’t cite essential work of 1990-2015. We’ve already published his “main original contributions:” learning subgoals, predictable abstract representations, multiple time scales…https://t.co/Mm4mtHq5CY
2022-06-10 15:00:11 2022: 25th anniversary of "A Computer Scientist's View of Life, the Universe, and Everything” (1997). Is the universe a simulation, a metaverse? It may be much cheaper to compute ALL possible metaverses, not just ours. @morgan_freeman had a TV doc on it https://t.co/BA4wpONbBS https://t.co/7RfzvcF8sI
2022-06-09 07:29:46 2022: 25th anniversary. 1997 papers: Long Short-Term Memory. All computable metaverses. Hierarchical Reinforcement Learning (RL). Meta-RL. Abstractions in generative adversarial RL. Soccer learning. Low-complexity neural nets. Low-complexity art... https://t.co/1DEFX06d45
2022-05-20 09:19:22 @rasbt Here is a little overview site on this: https://t.co/yIjL4YoCqG
2022-05-20 09:17:27 RT @rasbt: Currently looking into the origins of training neural nets (CNNs in particular) on GPUs. Usually, AlexNet is my go-to example fo…
2022-11-22 08:02:15 LeCun's "5 best ideas 2012-22” are mostly from my lab, and older: 1 Self-supervised 1991 RNN stack
2022-12-07 19:00:23 At the #EMNLPmeeting 2022 with @robert_csordas &