Jürgen Schmidhuber

AI Expert Profile

Nationality: 
German
AI specialty: 
Stochastic AI
Neural Networks
Neuroscience
Deep Learning
Machine Learning
Current occupation: 
Researcher, NNAISENSE; Professor, Dalle Molle Institute for Artificial Intelligence Research
AI Rate (%): 
79.31%

TwitterID: 
@SchmidhuberAI
Tweet Visibility Status: 
Public

Description: 
From very early on, Professor Jürgen Schmidhuber's goal has been to build an artificial intelligence capable of improving itself and smarter than he is, and then to retire. The deep-learning neural networks developed in his lab have revolutionized machine learning and AI. He helped improve speech recognition on all Android phones, and he also helped make machine translation more efficient via Google Translate and Facebook, as well as Apple's Siri and QuickType on all iPhones and Amazon's Alexa. His team was the first to win official computer vision competitions with deep neural networks, achieving superhuman performance. In 2012, it fielded the first deep neural network to win a medical imaging contest. He introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity. He now aims to build the first practical general-purpose AI. AI expert Gary Marcus says that thanks to Jürgen, the community is starting to pay more attention to neurosymbolic approaches to AI.

Recognized by:

Not Available

The Expert's latest posts:

Tweet list: 

2023-05-05 16:01:00 Join us at @AI_KAUST! I seek #PhD &

2023-02-09 17:00:30 Instead of trying to defend his paper on OpenReview (where he posted it), @ylecun made misleading statements about me in popular science venues. I am debunking his recent allegations in the new Addendum III of my critique https://t.co/S7pVlJshAo https://t.co/Dq0KrM2fdC

2023-01-12 08:16:31 @yannx0130 sure, see the experiments

2023-01-12 08:00:15 Re: more biologically plausible "forward-only” deep learning. 1/3 of a century ago, my "neural economy” was local in space and time (backprop isn't). Competing neurons pay "weight substance” to neurons that activate them (Neural Bucket Brigade, 1989) https://t.co/Ms30TkUXHS https://t.co/0UhtPzeuKJ

2023-01-10 16:59:30 RT @hardmaru: New paper from IDSIA motivated by building an artificial scientist with World Models! A key idea is to get controller C to g…

2023-01-03 17:00:33 We address the two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Learning one abstract bit at a time through self-invented (thought) experiments encoded as neural networks https://t.co/bhTDM7XdXn https://t.co/IeDxdCvVPD

2022-12-31 13:00:04 As 2022 ends: 1/2 century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) much later called the Hopfield network (based on the original, century-old, non-learning Lenz-Ising recurrent network architecture, 1920-25) https://t.co/wfYYVcBobg https://t.co/bAErUtNdfN

2022-12-30 17:00:07 30 years ago in a journal: "distilling" a recurrent neural network (RNN) into another RNN. I called it “collapsing” in Neural Computation 4(2):234-242 (1992), Sec. 4. Greatly facilitated deep learning with 20+ virtual layers. The concept has become popular https://t.co/gMdQu7wpva https://t.co/HmIqbS9lNg
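
The "collapsing"/distillation idea above can be illustrated with a deliberately simplified sketch: a small student network is trained only on a frozen teacher's outputs, without ever seeing the original targets. This is a feedforward toy with arbitrary sizes and learning rate (the 1992 work distilled one recurrent net into another), so treat it as an illustration of the principle rather than the original method.

```python
import numpy as np

rng = np.random.default_rng(0)

# frozen "teacher" (stands in for an already-trained, larger or recurrent net)
W1_t, W2_t = rng.normal(size=(32, 8)), rng.normal(size=(2, 32))
teacher = lambda x: W2_t @ np.tanh(W1_t @ x)

# smaller "student" trained purely to imitate the teacher's outputs
W1_s, W2_s = 0.1 * rng.normal(size=(8, 8)), 0.1 * rng.normal(size=(2, 8))
lr = 0.01
for step in range(5000):
    x = rng.normal(size=8)
    h = np.tanh(W1_s @ x)
    e = W2_s @ h - teacher(x)                # imitation error; no original labels needed
    grad_h = (W2_s.T @ e) * (1 - h ** 2)     # backprop through the student's tanh layer
    W2_s -= lr * np.outer(e, h)
    W1_s -= lr * np.outer(grad_h, x)
# the imitation error typically shrinks as the student absorbs the teacher's function
```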

2022-12-23 17:00:04 Machine learning is the science of credit assignment. My new survey (also under arXiv:2212.11279) credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 deep learning survey): https://t.co/MfmqhEh8MA P.S. Happy Holidays! https://t.co/or6oxAOXgS

2022-12-20 17:00:09 Regarding recent work on more biologically plausible "forward-only" backprop-like methods: in 2021, our VSML net already meta-learned backprop-like learning algorithms running solely in forward-mode - no hardwired derivative calculation! https://t.co/zAZGZcYtmO https://t.co/zyPBD0bwUu

2022-12-14 08:00:05 Conference and journal publications of 2022 with my awesome PhD students, PostDocs, and colleagues https://t.co/0ngIruvase https://t.co/xviLUwEec0

2022-12-11 14:48:24 The @AI_KAUST booth has moved from #NeurIPS2022 (24 KAUST papers) in New Orleans to #EMNLP2022 in Abu Dhabi. Visit Booth#14. We keep hiring on all levels, in particular, for Natural Language Processing! https://t.co/w7OAFNlFZ9

2022-10-28 07:26:29 Present at the 2nd KAUST Rising Stars in AI Symposium 2023! Did you recently publish at a top AI conference? Then apply by Nov 16, 2022: https://t.co/ojlCmhWZc1. The selected speakers will have their flights and hotel expenses covered. More: https://t.co/6nmhbLnxaA https://t.co/cUDnsWmICn

2022-10-24 16:00:21 Train a weight matrix to encode the backpropagation learning algorithm itself. Run it on the neural net itself. Meta-learn to improve it! Generalizes to datasets outside of the meta-training distribution. v4 2022 with @LouisKirschAI https://t.co/zAZGZcYtmO https://t.co/aGK8h8n0yF

2022-10-19 15:51:12 30 years ago in NECO 1992: adversarial neural networks create disentangled representations in a minimax game. Published 2 years after the original GAN principle, where a "curious" probabilistic generator net fights a predictor net (1990). More at https://t.co/GvkmtauQmv https://t.co/b1YcHo6wuJ
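
A minimal sketch of the generator-versus-predictor game referenced above, under toy assumptions: the "world" is a known differentiable linear map so that both players can use plain gradients (the original 1990 formulation rewards the generator through reinforcement learning rather than differentiating through the environment). The predictor descends on its squared prediction error; the "curious" generator ascends on the same quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))          # toy differentiable "world": outcome = A @ action
P = np.zeros((4, 4))                 # predictor ("world model") parameters
G = 0.1 * rng.normal(size=(4, 4))    # generator / controller parameters
lr_p, lr_g = 0.05, 0.01

for step in range(2000):
    z = rng.normal(size=4)
    a = np.tanh(G @ z)               # generator proposes a bounded action
    e = P @ a - A @ a                # prediction error of the world model
    # predictor: gradient descent on 0.5 * ||e||^2 (learn to predict the world)
    P -= lr_p * np.outer(e, a)
    # generator: gradient ascent on the same error (intrinsic "curiosity" reward),
    # steering toward actions whose outcomes the predictor still gets wrong
    grad_a = (P - A).T @ e
    G += lr_g * np.outer(grad_a * (1 - a ** 2), z)
```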

2022-10-12 07:37:00 RT @globalaisummit: Dr. Jürgen Schmidhuber had a keynote on the evolution of AI, neural networks, empowering cities, and humanity at the #G…

2022-10-11 15:51:22 30 years ago in NECO 1992: very deep learning by unsupervised pre-training and distillation of neural networks. Today, both techniques are heavily used. Also: multiple levels of abstraction &

2022-10-03 16:03:18 30 years ago: Transformers with linearized self-attention in NECO 1992, equivalent to fast weight programmers (apart from normalization), separating storage and control. Key/value was called FROM/TO. The attention terminology was introduced at ICANN 1993 https://t.co/m0hw6JJrbS https://t.co/8LfD98MIF4
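
The equivalence mentioned above can be made concrete in a few lines: additive outer-product updates to a fast weight matrix (the FROM/TO memory) give exactly unnormalized linear self-attention when the memory is read with a query. The feature map and dimensions below are arbitrary illustrative choices, not the 1992 notation.

```python
import numpy as np

def phi(x):
    # simple positive feature map, a common choice for linearized attention
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(0)
d = 8
W_fast = np.zeros((d, d))            # fast weight matrix (the "FROM/TO" memory)
pairs = []

# writing: each (key, value) pair programs the fast weights via an outer product
for _ in range(5):
    k, v = rng.normal(size=d), rng.normal(size=d)
    W_fast += np.outer(v, phi(k))
    pairs.append((k, v))

# reading: a query retrieves from the memory ...
q = rng.normal(size=d)
y_fast = W_fast @ phi(q)

# ... which equals unnormalized linear self-attention over the stored pairs
y_attn = sum(v * (phi(k) @ phi(q)) for k, v in pairs)
print(np.allclose(y_fast, y_attn))   # True
```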

2022-09-10 16:00:03 Healthcare revolution! On this day 10 years ago—when compute was 100 times more expensive—our DanNet was the first artificial neural net to win a medical imaging contest: the 2012 ICPR breast cancer detection contest. Today, this approach is heavily used. https://t.co/ow7OmFKxgv

2022-09-07 16:04:37 3/3: Our analysis draws inspiration from a wealth of research in neuroscience, cognitive psychology, and ML, and surveys relevant mechanisms, to identify a combination of inductive biases that may help symbolic information processing to emerge naturally in neural networks https://t.co/ItJo6hcK4R

2022-09-07 16:03:11 @vansteenkiste_s 2/3: We present a conceptual framework that connects these shortcomings of NNs to an inability to dynamically and flexibly bind information distributed throughout the network. We explore how this affects their capacity to acquire a compositional understanding of the world. https://t.co/uXFlwoRGoR

2022-09-07 15:53:25 1/3: “On the binding problem in artificial neural networks” with Klaus Greff and @vansteenkiste_s. An important paper from my lab that is of great relevance to the ongoing debate on symbolic reasoning and compositional generalization in neural networks: https://t.co/pOXGs89nrq https://t.co/vTOnyht5Hz

2022-08-10 16:04:47 Yesterday @nnaisense released EvoTorch (https://t.co/XAXLH9SDxn), a state-of-the-art evolutionary algorithm library built on @PyTorch, with GPU-acceleration and easy training on huge compute clusters using @raydistributed. (1/2)
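
For readers unfamiliar with this family of methods, here is a deliberately library-free sketch of the kind of black-box evolutionary search such a toolkit automates and accelerates on GPUs; it is not EvoTorch's actual API, and the objective, population size, and elite-averaging update are arbitrary illustrative choices.

```python
import numpy as np

def fitness(x):
    # toy objective: maximize the negative sphere function (optimum at x = 0)
    return -np.sum(x ** 2)

rng = np.random.default_rng(0)
mean, sigma, popsize, n_elite = 3.0 * np.ones(10), 0.3, 64, 16

for gen in range(200):
    pop = mean + sigma * rng.normal(size=(popsize, mean.size))   # sample candidates
    scores = np.array([fitness(ind) for ind in pop])
    elites = pop[np.argsort(scores)[-n_elite:]]                  # keep the best
    mean = elites.mean(axis=0)                                   # move the search distribution

print(fitness(mean))   # approaches 0 as the search converges
```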

2022-07-22 13:25:25 With Kazuki Irie and @robert_csordas at #ICML2022: any linear layer trained by gradient descent is a key-value/attention memory storing its entire training experience. This dual form helps us visualize how neural nets use training patterns at test time https://t.co/sViaXAlWU6 https://t.co/MmeCcgNPxx
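
The claim in the tweet above (a linear layer trained by gradient descent equals its initialization plus unnormalized key/value attention over its own training inputs) can be checked numerically. The toy dimensions, squared-error loss, and online SGD below are illustrative assumptions rather than the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, lr, steps = 5, 3, 0.1, 20
W0 = rng.normal(size=(d_out, d_in))
W = W0.copy()
keys, values = [], []                         # training inputs and scaled error signals

for _ in range(steps):
    x = rng.normal(size=d_in)
    t = rng.normal(size=d_out)
    e = W @ x - t                             # error under the current weights
    W -= lr * np.outer(e, x)                  # one SGD step on squared error
    keys.append(x)
    values.append(-lr * e)                    # value = scaled negative error

# dual form: the trained layer = initial layer + unnormalized linear attention
# over the stored (key, value) pairs from its own training experience
x_test = rng.normal(size=d_in)
primal = W @ x_test
dual = W0 @ x_test + sum(v * (k @ x_test) for k, v in zip(keys, values))
print(np.allclose(primal, dual))              # True
```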

2022-07-21 07:08:20 Our neural network learns to generate deep policies that achieve any desired return: a Fast Weight Programmer that overcomes limitations of Upside-Down Reinforcement Learning. Join @FaccioAI, @idivinci, A. Ramesh, @LouisKirschAI at @darl_icml on Friday https://t.co/exsj0hpHp4 https://t.co/aClHLFUdfJ
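
As background for the tweet above, Upside-Down RL turns reinforcement learning into supervised learning: a policy is trained on past experience to map a commanded return to an action and is then queried with a high desired return. The one-step bandit, linear softmax policy, and reward values below are toy assumptions for illustration; the Fast Weight Programmer extension from the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def env_reward(action):
    # toy one-step environment: 3 actions with noisy rewards around 0.0, 0.5, 1.0
    return [0.0, 0.5, 1.0][action] + 0.1 * rng.normal()

W = np.zeros((3, 2))                          # policy logits = W @ [commanded_return, 1]
lr = 0.1
for episode in range(3000):
    a = rng.integers(3)                       # explore with random actions
    r = env_reward(a)
    x = np.array([r, 1.0])                    # the achieved return becomes the command
    logits = W @ x
    p = np.exp(logits - logits.max()); p /= p.sum()
    W -= lr * np.outer(p - np.eye(3)[a], x)   # cross-entropy step: predict action from return

# command a return of 1.0: the policy should prefer the high-reward action (index 2)
print(np.argmax(W @ np.array([1.0, 1.0])))
```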

2022-07-19 07:10:15 @ylecun In 2011, our DanNet (named after my postdoc Dan Ciresan) was 2x better than humans, 3x better than the CNN of @ylecun’s team, and 6x better than the best non-neural method. LeCun’s CNN (based on Fukushima’s) had “no tail,” but let's not call it a dead end https://t.co/xcriF10Jz7

2022-07-19 07:00:58 I am the "French aviation buff” who touted French aviation pioneers 19 years ago in Nature &

2022-07-11 07:01:27 PS: in a 2016 @nytimes article “When A.I. Matures, It May Call Jürgen Schmidhuber ‘Dad’,” the same LeCun claims that “Jürgen … keeps claiming credit he doesn't deserve for many, many things,” without providing a single example. And now this :-)https://t.co/HO14crXg03 https://t.co/EIqa745mv0

2022-07-08 15:55:20 @ylecun I have now also officially logged my concerns on the OpenReview: https://t.co/3hpLImkebg

2022-07-07 07:01:42 Lecun (@ylecun)’s 2022 paper on Autonomous Machine Intelligence rehashes but doesn’t cite essential work of 1990-2015. We’ve already published his “main original contributions:” learning subgoals, predictable abstract representations, multiple time scales…https://t.co/Mm4mtHq5CY

2022-06-10 15:00:11 2022: 25th anniversary of "A Computer Scientist's View of Life, the Universe, and Everything” (1997). Is the universe a simulation, a metaverse? It may be much cheaper to compute ALL possible metaverses, not just ours. @morgan_freeman had a TV doc on it https://t.co/BA4wpONbBS https://t.co/7RfzvcF8sI

2022-06-09 07:29:46 2022: 25th anniversary. 1997 papers: Long Short-Term Memory. All computable metaverses. Hierarchical Reinforcement Learning (RL). Meta-RL. Abstractions in generative adversarial RL. Soccer learning. Low-complexity neural nets. Low-complexity art... https://t.co/1DEFX06d45

2022-05-20 09:19:22 @rasbt Here is a little overview site on this: https://t.co/yIjL4YoCqG

2022-05-20 09:17:27 RT @rasbt: Currently looking into the origins of training neural nets (CNNs in particular) on GPUs. Usually, AlexNet is my go-to example fo…

2022-11-22 08:02:15 LeCun's "5 best ideas 2012-22” are mostly from my lab, and older: 1 Self-supervised 1991 RNN stack

2022-12-07 19:00:23 At the #EMNLPmeeting 2022 with @robert_csordas &

2022-01-17 10:00:53 Our company @NNAISENSE developed an industry-first solution with AI in #AdditiveManufacturing for @EOSGmbH. CEO Faustino Gomez explains our “Deep Digital Twin” or "World Model" in the @Emerj AI in Business Podcast https://t.co/QAaIgMz305

2021-12-27 16:46:05 Now on YouTube: “Modern Artificial Intelligence 1980s-2021 and Beyond.” My talk at AIJ 2020 (Moscow), also presented at NVIDIA GTC 2021 (US), ML Summit 2021 (Beijing), Big Data and AI (Toronto), IFIC (China), AI Boost (Lithuania), ICONIP 2021 (Jakarta) https://t.co/SpxDopU0O2 https://t.co/QaDd9V1U1E

2021-12-06 17:00:40 25th anniversary of the LSTM at #NeurIPS2021. reVIeWeR 2 - who rejected it from NeurIPS1995 - was thankfully MIA. The subsequent journal publication in Neural Computation has become the most cited neural network paper of the 20th century: https://t.co/p2jLeZNeiu https://t.co/7Pce0AeWCL

2021-11-23 16:00:16 KAUST (17 full papers at #NeurIPS2021) and its environment are now offering huge resources to advance both fundamental and applied AI research. We are hiring outstanding professors, postdocs, and PhD students: https://t.co/X4EGIHLKZH https://t.co/bUfOFnjj3d

2021-10-13 06:05:32 Kunihiko Fukushima was awarded the 2021 Bower Award for his enormous contributions to deep learning, particularly his highly influential convolutional neural network architecture. My laudation of Kunihiko at the 2021 award ceremony is on YouTube: https://t.co/bYl3QpbR9N https://t.co/zKPVsBZk8H

2021-09-24 07:00:09 Critique of 2021 Turing Lecture, 2018 Turing Award: three Europeans went to North America, where they republished methods and concepts first published by other Europeans whom they did not cite - not even in later surveys. https://t.co/JlzqF9Ddxf

2021-09-14 15:47:14 @MaksimSipos She is mentioned - read the post

2021-09-14 07:01:42 I was invited to write a piece about Alan M. Turing. While he made significant contributions to computer science, their importance and impact is often greatly exaggerated - at the expense of the field's pioneers. It's not Turing's fault, though. https://t.co/WpejtK8Cfo

2021-09-08 07:05:20 The most cited neural nets all build on our work: LSTM. ResNet (open-gated Highway Net). AlexNet &

2021-09-02 07:03:21 2021: Directing AI Initiative at #KAUST, university with highest impact per faculty. Keeping current affiliations. Hiring on all levels. Great research conditions. Photographed dolphin on a snorkeling trip off the coast of KAUST https://t.co/ry2EoPEsVW

2021-08-20 07:00:03 80th anniversary of Konrad Zuse's crowning achievement: Z3, the world's first functional program-controlled general purpose computer (1941), based on his patent application from 1936 (just published in @Weltwoche 8/19/2021) https://t.co/OyTQ9NEjLB

2021-08-09 08:31:34 @JMelegati @debayan That's why this is about gold only. In fact, EU has a handicap: cannot form EU teams for team events. Hence fewer gold medals for team events such as 4x100m relays.

2021-08-09 07:39:56 @DiffidenteIl In fact, EU has a handicap: cannot form EU teams for team events. Hence fewer gold medals for team events such as 4x100m relays. Team events favor big countries. A superior athlete from a tiny EU country has little chance of winning a team event, for lack of excellent comrades.

2021-07-30 09:05:01 @IUILab would mean: fewer gold medals for team events such as 4x100m relays. Team events favor big countries. A superior athlete from a tiny country has little chance of winning a team event, for lack of excellent comrades.

2021-07-30 07:00:23 Shrunken EU is leading Tokyo gold medal count: 26 18 15 14 (despite handicap of being unable to form EU teams for team events). Original Olympic charter forbade medal rankings, but few care. All time gold count up to 2012: https://t.co/DnumjhHrVW

2021-07-20 13:03:05 The human space age started 60 years ago when Sergei Korolev's team brought Yuri Gagarin into orbit on 12 April 1961. 40 years later, in 2001: first space tourism through Roscocosmos. 2021: private companies engage in space tourism.

2021-07-20 13:02:32 The space age officially started one lifetime ago on 20 June 1944 when the first man-made object crossed the 100 km line. The MW 18014 of Wernher von Braun's team kept climbing, reaching 176 km, higher than some current satellites. Attribution: Bundesarchiv CC-BY-SA 3.0 https://t.co/YtrCYkV7Au

2021-07-12 07:42:24 In 1942, one lifetime ago, after many failed trials, the first man-made object (A4) was sent to the edge of space (84 km). In 2021, @richardbranson became the first man to reach this altitude in his own spaceship. Attribution: Bundesarchiv CC-BY-SA 3.0 https://t.co/0XU28lqN8l

2021-06-17 06:00:43 90th anniversary of Kurt Gödel's 1931 paper which laid the foundations of theoretical computer science, identifying fundamental limitations of algorithmic theorem proving, computing, AI, logics, and math itself (just published in FAZ @faznet 16/6/2021) https://t.co/bScKQNRysG

2021-05-20 06:16:20 375th birthday of Leibniz, founder of computer science (just published in FAZ, 17/5/2021): 1st machine with a memory (1673)

2021-04-15 16:14:03 Hiring for EU project AIDD https://t.co/qlUQ0DX7Da on AI for chemical and pharmaceutical research. 15 PhD positions for outstanding students with an interest in machine learning and chemistry, one of them in my group. Deadline on Sunday! Apply here https://t.co/iQxCMYlcjn

2021-04-13 16:22:01 Busy day! First, as Chief Scientist of @NNAISENSE, I gave a talk about use cases of industrial AI at the world's largest trade fair: @Hannover_Messe (opened by Angela Merkel). Then I spoke about "Modern AI 1980s-2021 and beyond" at #GTC21 by @NVIDIA. Sign up to see the talks. https://t.co/1OamQHbGck

2021-04-07 16:28:41 In 2001, I discovered how to make very stable rings from only rectangular LEGO bricks. Natural tilting angles between LEGO pieces define ring diameters. The resulting low-complexity artworks reflect the formal theory of beauty/creativity/curiosity: https://t.co/BogUKc60f6

2021-03-26 07:15:40 26 March 1991: Neural nets learn to program neural nets with fast weights - like today’s Transformer variants. Deep learning through additive weight changes. 2021: New work with Imanol &

2021-03-18 08:02:45 3 decades of artificial curiosity &

2021-01-26 08:00:08 30-year anniversary of Very #DeepLearning (1991). Unsupervised hierarchical #PredictiveCoding finds compact internal representations to facilitate downstream learning. 1993: solving problems of depth 1000. Hierarchy can be distilled into a single deep net https://t.co/IQ5s8ZQQir

2021-01-15 15:54:19 Our 5 submissions to ICLR 2021 got accepted. Congrats to @FaccioAI @LouisKirschAI @agopal42 @vansteenkiste_s @robert_csordas @aleks_stanic @TsendeeMTS as well as Imanol Schlag, Đorđe Miladinović, Stefan Bauer, Joachim Buhmann! https://t.co/kg3dhwolUb

2021-01-14 07:46:45 2021: 10-year anniversary of deep CNN revolution through DanNet (2011), named after my outstanding postdoc Dan Ciresan. Won 4 computer vision contests in a row before other CNNs joined the party. 1st superhuman result in 2011. Now everybody is using this https://t.co/6axIXknzjl

2020-12-31 08:17:41 30-year anniversary of #Planning &

2020-12-24 08:36:03 1/3 century anniversary of thesis on #metalearning (1987). For its cover I drew a robot that bootstraps itself. 1992-: gradient descent-based neural metalearning. 1994-: meta-RL with self-modifying policies. 2003-: optimal Gödel Machine. 2020: new stuff! https://t.co/xSyMtbUuqN

2020-12-17 07:19:59 10-year anniversary: Deep Reinforcement Learning with Policy Gradients for LSTM. Applications: @DeepMind’s Starcraft player

2020-12-04 16:57:02 @mmbronstein No. Homology is already covered by PSI-BLAST for extending the training set. Sepp et al. really predicted fold classes based on 1D sequences: 1st successful deep learning for protein structure (aka folding) prediction. AlphaFold predicts ALL possible classes though, even new ones

2020-12-02 15:49:46 Big news #AlphaFold uses #DeepLearning for #ProteinFolding prediction. This approach was pioneered by Sepp Hochreiter et al. in 2007 when compute was 1000 times more expensive than today. Their LSTM was orders of magnitude faster than the competitors. https://t.co/Xybwu8EUD5

2020-12-01 08:00:06 2020: 1/2 century of #backpropagation, the reverse mode of #AutomaticDifferentiation. Published in 1970 by Finnish master student Seppo Linnainmaa. Today, this is driving #Tensorflow etc. Plus: 60-year anniversary of Henry J. Kelley’s precursor (1960) https://t.co/V4GraNqqOf

2020-11-25 08:00:48 5-year anniversary of Highway Nets (May 2015), 1st working very deep feedforward neural nets with over 100 layers. Highway Nets excel at #ImageNet &

2020-11-16 17:31:12 25-year anniversary of neural #PredictiveCoding for #AutoRegressive #LanguageModels and #Neural #TextCompression. With Stefan Heil! Published at N(eur)IPS 1995 when I arrived in #Switzerland (picture taken on the train) and IEEETNN 1996 #DeepLearning https://t.co/Xcm3XK3sp1 https://t.co/JZfEFlqWn6

2020-11-13 08:19:26 15-year anniversary: first paper with "learn deep" in the title (2005). On deep #ReinforcementLearning &

2020-10-29 07:49:36 25-year anniversary of reinforcement learning with intrinsic motivation through information gain or "Bayesian Surprise.” Our 3rd paper on artificial curiosity since 1990. With Jan &

2020-10-27 09:28:40 Quarter-century anniversary: 25 years ago we received a message from N(eur)IPS 1995 informing us that our submission on LSTM got rejected. (Don’t worry about rejections. They mean little.) #NeurIPS2020 https://t.co/ZHDGVA9bv1 https://t.co/mhwMgJLbJr

2020-10-20 07:00:06 30-year anniversary of end-to-end differentiable sequential neural attention. Plus goal-conditional reinforcement learning. #deeplearning https://t.co/5sk9RtfSpp

2020-09-02 06:59:38 10-year anniversary of our deep multilayer perceptrons trained by plain gradient descent on GPU, outperforming all previous methods on a famous benchmark. This deep learning revolution quickly spread from Europe to North America and Asia. #deeplearning https://t.co/rI0cZFRf5n

2020-07-23 16:41:12 Congrats to the awesome Sepp Hochreiter for the well-deserved 2021 IEEE Neural Networks Pioneer Award! It was my great honor to be Sepp's nominator. https://t.co/fOYQXSNnu4

2020-06-25 07:42:28 ACM lauds the awardees for work that did not cite the origins of the used methods. I correct ACM's distortions of deep learning history and mention 8 of our direct priority disputes with Bengio &

2020-04-30 07:07:39 GANs are special cases of Artificial Curiosity (1990) and also closely related to Predictability Minimization (1991). Now published in Neural Networks 127:58-66, 2020. #selfcorrectingscience #plagiarism Open Access: https://t.co/QpKd8eQuKb Preprint: https://t.co/mFSeCzBFnP https://t.co/5phJUmsYEJ

2020-04-21 07:07:48 Stop crediting the wrong people for inventions made by others. At least in science, the facts will always win in the end. As long as the facts have not yet won, it is not yet the end. No fancy award can ever change that. #selfcorrectingscience #plagiarism https://t.co/2AiRUCxFRX

2020-04-16 07:53:26 AI v Covid-19: unprecedented worldwide scientific collaboration. I made a little cartoon and notes with references and links to the recent ELLIS workshops &

2020-04-03 09:16:49 Pandemics have greatly influenced the rise and fall of empires. How will the current pandemic impact China’s rise as a technocratic superpower (which I’ve been following for decades)? I wrote a short article on this. #COVID19 #Coronavirus #Geopolitics https://t.co/07qOofbRhL

2020-02-20 08:11:43 The 2010s: Our Decade of Deep Learning / Outlook on the 2020s (also addressing privacy and data markets) https://t.co/iolkcociva

2019-10-04 08:21:28 In 2020, we will celebrate that many of the basic ideas behind the Deep Learning Revolution were published three decades ago within fewer than 12 months in our "Annus Mirabilis" 1990-1991: https://t.co/rph9OP76m9

Discover the AI Experts

Nando de Freitas Researcher at DeepMind
Nige Willson Speaker
Ria Pratyusha Kalluri Researcher, MIT
Ifeoma Ozoma Director, Earthseed
Will Knight Journalist, Wired