Discover the AI Experts
Nando de Freitas | Researcher at DeepMind
Nige Willson | Speaker
Ria Pratyusha Kalluri | Researcher, MIT
Ifeoma Ozoma | Director, Earthseed
Will Knight | Journalist, Wired
AI Expert Profile
Not Available
The Expert's latest posts:
2024-11-15 15:57:12 The first results from a π collaboration! https://t.co/GMvwnfcsbO
2024-11-08 14:12:01 At @corl_conf, I’m giving a talk on π₀ post-training, i.e. how we got the robot to fold laundry. Tomorrow/Saturday at 1:45 pm at the WCBM workshop: https://t.co/6f4QqnWlBe https://t.co/5aSyejBKuz https://t.co/zvZDknXfKW
2024-11-01 00:35:34 RT @kvablack: It's been 6 months since I slammed the brakes on several PhD research projects to go work at π... super excited to finally…
2024-10-31 20:08:27 RT @michael_equi: Excited to share what we've been up to in the past 8 months @physical_int! We trained a 3B vision-language-action flow match…
2024-10-31 19:18:03 RT @chris_j_paxton: Easily the most impressive uncut autonomous video I've ever seen, from @physical_int https://t.co/Nlb79y0ZTZ
2024-10-31 17:42:01 We’ve brought together an amazing group of people at Pi. This was a huge team effort spanning hardware, data collection, ML infra, algorithms, and experimental research. If you think this is cool & …
2024-10-07 15:33:08 Open-ended object retrieval combining:
- sim2real for low-level locomotion skills
- VLMs for high-level semantic understanding
Project page: https://t.co/ZelaTV331O Open-source code: https://t.co/beZkjcFyIH https://t.co/C2hdnBcbhC
2024-07-16 20:35:00 Project led by @jwbkim. With @tonyzzhao @SRSchmidgall, Anton Deguet, Marin Kobilarov, Axel Krieger. A really fun collaboration with @JohnsHopkins!
2024-07-16 20:34:59 Performing surgical tasks on the DaVinci robot is hard. The robot has imprecise joint measurements, but surgical tasks require precision! Imitation learning w/ transformers + relative action formulation allows robots to do this https://t.co/YQaCDyTFtp https://t.co/eT0VMnXl3V
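To make the "relative action formulation" above concrete, here is a minimal sketch assuming simple vector-valued joint poses; the helper names are invented for illustration and are not the paper's code. The point is that training targets built from deltas are invariant to a constant encoder calibration offset.
```python
import numpy as np

# Hypothetical sketch of a relative action formulation for a robot with
# imprecise absolute joint measurements: regress per-step deltas from the
# measured pose instead of absolute target poses.

def to_relative_actions(poses: np.ndarray) -> np.ndarray:
    """Turn a demo of absolute poses (T, D) into per-step deltas (T-1, D)."""
    return poses[1:] - poses[:-1]

def apply_relative_action(current_pose: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Command the next pose as an offset from the *measured* current pose."""
    return current_pose + delta

# Toy check: a constant sensing bias leaves the training targets unchanged.
demo = np.cumsum(np.random.randn(50, 7) * 0.01, axis=0)  # smooth 7-DoF trajectory
biased = demo + 0.3                                      # constant encoder offset
assert np.allclose(to_relative_actions(demo), to_relative_actions(biased))
```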
2024-06-14 19:56:43 While parameter-efficient fine-tuning techniques didn't work for VLM -> …
2024-06-14 19:56:42 Really excited to share OpenVLA!
- state-of-the-art robotic foundation model
- outperforms RT-2-X in our evals, despite being nearly 10x smaller
- code + data + weights open-source
Webpage: https://t.co/Y0XU6kX3hl https://t.co/wqQbgG5z8I
2024-06-14 03:17:55 How can we train full-size humanoid robots? New paper introducing:
- learned controller for shadowing humans
- imitation learning of demos collected via shadowing
Website with code & …
2023-04-01 15:50:50 In light of tremendous AI advances & …
2023-03-27 22:31:54 @danijarh Thanks @danijarh! The demos include ~15cm of object position variation, and the policies generalize to that degree. Need more & …
2023-03-27 17:04:26 But wait, there’s more! Website & …
2023-03-27 17:04:25 Can the robot do all this on its own? We train the robot to predict actions in chunks, rather than one at a time. Recipe: action chunking + transformers + only 50 demonstrations The robot *autonomously* completes fine manipulation skills. https://t.co/x57dGirzl0
2023-03-27 17:04:23 First, the hardware: We use simple puppeteering. Just copy the joint angles from the leader to the follower robot. No tactile or force feedback. The manufacturer (@trossenrobotics) didn’t know these tasks were possible. https://t.co/OonfUnLXyc
2023-03-27 17:04:22 We introduce a system for fine-grained robotic manipulation! What’s new? * We can control cheap robots to do surprisingly dexterous tasks * New technique that allows robots to learn fine motor skills A short thread https://t.co/frEOm9BtlX
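The chunking recipe in this thread is easy to picture in code. Below is a minimal sketch of chunk-wise inference, assuming a placeholder `policy` that maps an observation to the next K actions and a generic `env.step()` interface (neither is the released implementation); the paper additionally smooths overlapping chunk predictions, which is omitted here.
```python
# Action chunking at inference time: query the policy once per K steps and
# execute the whole predicted chunk, rather than re-planning every step.
K = 10  # chunk size

def run_chunked(policy, env, episode_len=200):
    obs = env.reset()
    t = 0
    while t < episode_len:
        chunk = policy(obs)              # shape (K, action_dim): next K actions
        for action in chunk:             # execute the chunk open-loop
            obs, done = env.step(action)
            t += 1
            if done or t >= episode_len:
                return obs
    return obs
```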
2023-03-27 02:24:36 @simonkalouche One example I'm aware of shows that it's a lot harder (but still possible) for people to light a match without a sense of touch: https://t.co/NPCVgcfaW5
2023-03-22 20:02:43 I had fun chatting with Pieter on @therobotbrains podcast! Check out the episode for my perspectives on big challenges in AI, research I'm excited about, and other misc topics. https://t.co/IfTBbvSdo6
2023-03-10 23:04:57 The order of features in a neural net doesn’t affect its function. But, hypernetworks & …
2023-03-08 21:48:33 Recently gave a talk at Harvard @hseas on how neural nets make stuff up & …
2023-03-05 14:33:56 @owais_chunawala We are far from being able to do egg peeling autonomously with this robot. But, there are other tasks the robot can do by itself, e.g. see the video below https://t.co/2Iuch6E330
2023-02-27 20:47:08 On a previously proposed simulation benchmark, NeRF-based augmentation provides strong improvements. It even outperforms methods that make additional assumptions. https://t.co/7Ddlte388O
2023-02-27 20:47:07 For wrist cameras, changes in arm pose correspond to novel viewpoints. Thus, we can:
1. Collect some demonstrations
2. Train a NeRF for each demo
3. Use each NeRF to generate corrective perturbations
4. Train policy on augmented data
https://t.co/xb8n9DVn7N
2023-02-27 20:47:06 Turns out NeRFs are super useful for robot grasping! We use NeRFs for data augmentation, for imitation learning with wrist cameras. ->
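The four-step recipe above maps to a short pipeline. Here is a schematic sketch in which every callable (`train_nerf`, `render`, `sample_perturbation`, `correct`) is a hypothetical placeholder, not the paper's API:
```python
def nerf_augment(demos, train_nerf, render, sample_perturbation, correct):
    """Steps 1-4 of the recipe: demos -> per-demo NeRFs -> perturbed wrist
    views with corrective action labels -> one augmented imitation dataset."""
    data = []
    for demo in demos:                                  # 1. collected demonstrations
        nerf = train_nerf(demo.frames)                  # 2. fit a NeRF per demo
        for obs, action in demo.transitions:
            data.append((obs.image, action))            # keep the original sample
            for _ in range(4):                          # 3. corrective perturbations
                dpose = sample_perturbation()           # small wrist-pose offset
                img = render(nerf, obs.camera_pose @ dpose)  # novel viewpoint
                data.append((img, correct(action, dpose)))   # corrective label
    return data                                         # 4. train policy on this
```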
2023-02-27 20:35:34 RT @siddkaramcheti: How can we use language supervision to learn better visual representations for robotics? Introducing Voltron: Language…
2023-02-16 03:18:45 Thank you @SloanFoundation for recognizing and supporting our research! I'm grateful to have the opportunity to work with amazing students. The fellowship will support them. https://t.co/6QKbRbJG8F
2023-02-14 03:25:39 Not sure how best to use your pre-trained model? Try projecting your features onto a low-dim basis before adding a linear head. A fun collab with @_anniechen_ @yoonholeee @setlur_amrith and @svlevine, which arose from convos at @NeurIPSConf https://t.co/xHRXMfZjrY
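One way to picture the trick: project pretrained features onto a low-dimensional basis, then fit the linear head on the projection. The sketch below uses PCA as the basis purely as a stand-in; the paper's choice of projection may differ.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 512))                  # stand-in pretrained features
labels = (feats[:, :8].sum(axis=1) > 0).astype(int)   # toy binary labels

basis = PCA(n_components=32).fit(feats)               # low-dimensional basis
proj = basis.transform(feats)                         # project before the head
head = LogisticRegression(max_iter=1000).fit(proj, labels)
print(head.score(proj, labels))
```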
2023-02-01 04:14:59 Want to try out DetectGPT? We just released a demo — try it out here: https://t.co/nlnX8tSx0a See if you can fool it and let us know your feedback. https://t.co/oN0kR7iG6K
2022-11-08 06:41:44 I gave a talk in the @CMU_Robotics seminar on:
* robot generalization via broader data
* generalizing beyond the train env through adaptation & …
2022-11-02 01:13:00 Check out the paper for more analysis, experimental comparisons, & …
2022-11-02 01:12:59 Multiple works have made progress in endowing robots with greater autonomy during learning. But most assume the environment is fully reversible, i.e. that it is possible for a robot to recover from a mistake. What if the robot pushes an object out of reach or flips over? https://t.co/iMOdIP9zO6
2022-11-02 01:12:57 Tired of constantly monitoring your robot learning? RL is supposed to allow robots to learn on their own, but, in practice, the robot needs constant oversight! PAINT allows robots to *proactively* ask for interventions. #NeurIPS2022 paper: https://t.co/UVfzwm4OHe A short thread https://t.co/Mpt6meO1DP
2022-10-27 04:56:42 @deliprao @RylanSchaeffer @yoonholeee @_anniechen_ @FahimTajwar10 @HuaxiuYaoML @ananyaku @percyliang Of course, there is possibly a whole hierarchy of latent features. And the results suggest that perhaps oranges vs. lemons (from BREEDS Entity-30) are latent features closer to Y than to X.
2022-10-27 04:55:55 @deliprao @RylanSchaeffer @yoonholeee @_anniechen_ @FahimTajwar10 @HuaxiuYaoML @ananyaku @percyliang Let Z be latent features where P(X,Y) = \int P(X,Y,Z) dz & …
2022-10-27 02:19:16 @RylanSchaeffer @yoonholeee @_anniechen_ @FahimTajwar10 @HuaxiuYaoML @ananyaku @percyliang Hard to categorize shifts, but:
- "input-level" shifts (e.g. CIFAR -> …
2022-10-27 02:00:36 @RylanSchaeffer @yoonholeee @_anniechen_ @FahimTajwar10 @HuaxiuYaoML @ananyaku @percyliang Thanks for the pointer! Hadn’t seen it & …
2022-10-27 01:29:52 The paper has more:
- an example where first layer fine-tuning provably outperforms last layer or full fine-tuning
- different metrics for determining which layer to fine-tune
A wonderful collab w/ @yoonholeee @_anniechen_ @FahimTajwar10 @ananyaku @HuaxiuYaoML @percyliang https://t.co/N1px1Dxq2I
2022-10-27 01:29:51 Why might this be the case? We don't know. But, perhaps neural nets approximately invert the causal process & …
2022-10-27 01:29:50 One of the most reliable ways to handle distr. shift is to fine-tune on a small amt. of data. We find that the best layers to fine-tune depend on the *type* of shift! Compared to fine-tuning the whole network, fine-tuning just one block achieves similar or higher accuracy. https://t.co/zp0pD3omv8
2022-10-27 01:29:49 Common fine-tuning wisdom is to adapt the last layer or the entire neural net. We find that, sometimes, fine-tuning *only* the first layers or middle layers works best. Paper: https://t.co/68rHObOf7F A short thread https://t.co/sxq00gWdML
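Mechanically, fine-tuning "just one block" is a few lines in any deep learning framework. A sketch with a torchvision ResNet-50, where the choice of `layer1` (an early block, matching the input-level-shift intuition above) is illustrative rather than prescribed:
```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)
for p in model.parameters():          # freeze everything...
    p.requires_grad = False
for p in model.layer1.parameters():   # ...except one chosen block
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)  # fine-tune only that block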
2022-10-19 18:44:56 Mixup now works for regression! Code: https://t.co/7XVeamEi1Z Paper: https://t.co/AW4meCB4KD https://t.co/LBcUnQTzh4
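The basic move is to interpolate targets as well as inputs. Below is a bare-bones version with plain mixup on continuous labels; the paper's method is more refined (e.g., it can choose mixing partners by label similarity), so treat this as the simplest instantiation.
```python
import numpy as np

def mixup_regression(x: np.ndarray, y: np.ndarray, alpha: float = 0.2):
    """Interpolate inputs *and* continuous targets with a Beta-sampled weight."""
    lam = np.random.beta(alpha, alpha)
    idx = np.random.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

x = np.random.randn(128, 16)
y = x.sum(axis=1, keepdims=True)            # toy continuous target
x_mix, y_mix = mixup_regression(x, y)
print(x_mix.shape, y_mix.shape)             # (128, 16) (128, 1)
```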
2022-10-19 03:08:10 Guiding the agent towards prior experiences during fine-tuning helps the agent recover when stuck. (And, the cheetah learns to reach the goal in one episode!) It also leads to higher success & …
2022-10-19 03:08:08 Simply fine-tuning with RL doesn't work well. For example, when pre-training the half-cheetah w/o obstacles and fine-tuning in a new env with obstacles, it gets stuck & …
2022-10-19 03:08:06 Unlike other RL problems:
* The goal is to solve the task once, rather than learning a policy
* If the robot enters new states & …
2022-10-19 03:08:05 Can robots adapt on the fly when deployed? Our paper studies *single-life RL*, where an agent must adapt to solve a new scenario in just one episode. #NeurIPS2022 paper, led by @_anniechen_, w/ @archit_sharma97, @svlevine https://t.co/3piPZy2JKs
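A crude way to picture "guiding the agent towards prior experiences": add a shaping term that penalizes drifting far from states seen in pre-training. The nearest-neighbor bonus below is an invented stand-in for the paper's actual method, shown only to convey the shape of the idea.
```python
import numpy as np

class PriorGuidedReward:
    """Shaped reward = env reward - penalty for distance to prior-data states."""
    def __init__(self, prior_states: np.ndarray, weight: float = 0.1):
        self.prior = prior_states            # (N, state_dim) from pre-training
        self.weight = weight

    def __call__(self, state: np.ndarray, env_reward: float) -> float:
        dists = np.linalg.norm(self.prior - state, axis=1)
        return env_reward - self.weight * dists.min()

shaper = PriorGuidedReward(np.random.randn(500, 17))   # e.g., cheetah-sized states
print(shaper(np.random.randn(17), env_reward=1.0))
```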
2022-09-20 04:22:46 RT @WIRED: Why is it easier for a robot to perform complex calculations than it is for it to pick up a solo cup? We asked computer scient…
2022-09-20 03:27:00 Why are the motor skills of a toddler so hard for robots to develop? @WIRED challenged me to explain Moravec’s Paradox at five levels of difficulty. A fun and accessible video, also featuring @mcxfrank and our very own LoCoBot. https://t.co/GiHYp2DMSy
2022-09-15 19:40:57 RT @yoonholeee: We're organizing the second Workshop on Distribution Shifts (DistShift) at #NeurIPS2022, which will bring together research…
2022-08-05 20:28:00 @Diyi_Yang @Stanford Welcome @Diyi_Yang!
2022-08-04 20:30:03 Kevin is the first PhD student from the IRIS lab (https://t.co/Xm7ZmlpYCB) to defend their thesis. For more of his work, check out his website: https://t.co/FiMNbSs9MU I'm proud to have advised him over the past several years & …
2022-08-04 20:30:02 Congratulations to @TianheYu who defended his PhD thesis this week! His work includes:
- Meta-World https://t.co/4pJ06QWoB0
- offline model-based RL methods like MOPO and COMBO https://t.co/aZLgoueL8k
- methods for using unlabeled data in offline RL https://t.co/slXiLyeTCO
https://t.co/pbiDoyvitC
2022-07-15 22:56:42 The method is also quite simple to implement. Code: https://t.co/yVHkSgjkvn #ICML2022 Paper: https://t.co/cvV1itWux9 WILDS Leaderboard: https://t.co/PqRetlSIxn See Huaxiu's thread for much more! https://t.co/Gi7N2iMC1l (3/3)
2022-07-15 22:56:41 Prior methods encourage domain-invariant *representations*. This constrains the model's internal representation. By using mixup to interpolate within & …
2022-07-15 22:56:40 Neural nets are brittle under domain shift & …
2022-07-12 17:42:43 @judyefan @UCSD @StanfordPsych @Stanford @UCSDPsychology Welcome @judyefan!
2022-07-07 17:57:57 @PangWeiKoh @uwcse @GoogleAI @_beenkim Congrats @PangWeiKoh!! Really looking forward to your future research.
2022-06-16 04:13:30 We also show how model editors like SERAC can be used to change model sentiment on various topics. See the paper for more details & …
2022-06-16 04:13:29 We find that SERAC can edit successfully without adversely affecting the model on out-of-scope examples. Try out the demo to see for yourself how these methods compare! https://t.co/UmgYgKoeA6 (4/5) https://t.co/ACYTJzQvwW
2022-06-16 04:13:28 SERAC decomposes editing into two parts:
1. is the test input *in-scope* for any of the edits?
2. if so, how should the edit affect the prediction?
These two components can be trained separately, and form a wrapper around a base model. (3/5) https://t.co/LDZL7nu6I2
2022-06-16 04:13:27 Following ENN (https://t.co/yyuBdVlhRI) and MEND (https://t.co/iJCrvbcga3), SERAC learns a model editor from data. Unfortunately, past methods struggle to make precise edits on hard in-scope & …
2022-06-16 04:13:25 Want to edit a large language model? SERAC is a new model editor that can:
* update factual info
* selectively change model sentiment
* scale to large models & …
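The two-part decomposition from earlier in the thread suggests a simple wrapper structure. Here is a sketch in which the scope classifier, counterfactual model, and base model are all placeholders; it mirrors the description above, not the released SERAC code.
```python
class SeracStyleWrapper:
    """Wrapper around a frozen base model: route in-scope queries to a
    separate counterfactual model, leave everything else untouched."""

    def __init__(self, base_model, scope_classifier, counterfactual_model):
        self.base = base_model
        self.in_scope = scope_classifier     # (input, edit) -> bool
        self.cf = counterfactual_model       # (input, edit) -> prediction
        self.edits = []

    def add_edit(self, edit):
        self.edits.append(edit)              # base model weights never change

    def predict(self, x):
        for edit in self.edits:
            if self.in_scope(x, edit):       # 1. is x in-scope for an edit?
                return self.cf(x, edit)      # 2. if so, apply the edit's effect
        return self.base(x)                  # out-of-scope: behave as before
```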
2022-06-01 15:08:55 RT @du_maximilian: Super excited to share our work on using audio to help with visually occluded tasks, like extracting keys from a paper b…
2022-06-01 04:50:49 Tagging all of the authors this time:@du_maximilian, @olivia_y_lee, @SurajNair_1
2022-06-01 03:40:34 For more, check out: Paper: https://t.co/wm3r43trdv Website: https://t.co/Iivdu04hfO Video: https://t.co/ef4acItKmK (4/4)
2022-06-01 03:38:52 Experiments show:
* Audio+vision outperforms vision or audio alone on tasks involving occlusion (see plot)
* Audio allows the robot to distinguish between different occluded objects
* Audio may not be reliable for objects like cloth that make little noise when grasped
(3/4) https://t.co/i72hm58wlb
2022-06-01 03:38:51 Can robots deal with occlusion? We put a microphone on a robot's gripper & …
2022-05-31 18:24:56 @irenetrampoline Congratulations @irenetrampoline!
2022-05-20 16:53:45 I'm excited for the RSS Workshop on Learning from Diverse, Offline Data. https://t.co/rGjaJwUxkF Awesome speakers include @ericjang11, @svlevine, @davsca1, and @wucathy. We extended the deadline to **May 27** if you're interested in submitting! https://t.co/dclbWoIXdz
2022-11-17 21:54:36 DreamGrader can also find challenging bugs like "ball skewering" in a separate Breakout assignment. Check out the paper for more! Paper: https://t.co/L9KpcVysk9 Code: https://t.co/qLhzb1TuT9 https://t.co/aWz7PMM8hA
2022-11-17 21:54:34 On real student programs from https://t.co/xkVMLThhha, DREAM achieves near human-level accuracy. BUT, there is room for improvement, esp F1 score, so we are also proposing this problem as an open-sourced benchmark for future meta-RL research! https://t.co/0DySGVtE5i
2022-11-17 21:54:33 We can frame the problem of finding bugs in programs & …
2022-11-17 21:54:31 Interactive student assignments, e.g. programming games or websites, are an engaging way to learn how to code! But, giving students feedback on those assignments is tedious & …
2022-11-17 21:54:30 Excited to share our #NeurIPS2022 oral: We leverage techniques from meta-RL to give feedback on interactive student programs, reaching within 1.5% of human accuracy. Paper & …
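The meta-RL framing is roughly: each student program is a task, an exploration policy plays it, and a classifier reads the resulting trajectory to predict bugs. A purely illustrative sketch with invented interfaces (not the released DreamGrader code):
```python
def grade(program_env, explore_policy, bug_classifier, max_steps=100):
    """Play one student program and predict its bugs from the trajectory."""
    obs = program_env.reset()
    trajectory = []
    for _ in range(max_steps):
        action = explore_policy(obs)          # probe the student's game
        obs, done = program_env.step(action)
        trajectory.append((obs, action))
        if done:
            break
    return bug_classifier(trajectory)         # predicted bug labels
```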