Jack Clark

AI Expert Profile

Nationality: 
American
AI specialty: 
AI Ethics
Current occupation: 
Policy Director, OpenAI
AI rate (%): 
54.09%

TwitterID: 
@jackclarkSF
Tweet Visibility Status: 
Public

Description: 
Jack Clark is one of the directors of OpenAI, an AI research organization working to ensure that the benefits of artificial general intelligence are broadly and evenly distributed. Jack works primarily on policy and safety issues. He also contributes to the AI Index, an AI forecasting and progress initiative that is part of the Stanford One Hundred Year Study on AI. In his spare time, he writes an AI newsletter, Import AI (importai.net), read by more than ten thousand experts. Jack believes that over the next five years we will see AI systems spread throughout the world that act on culture, and that this will in turn feed back into humanity.

Recognized by:

Not available

The Expert's latest posts:

Tweet list: 

2023-05-21 16:32:48 @alex_peys Yes it is! I'll add it to the pile of research projects we hope to do one day https://t.co/ZoiKl8NqCf

2023-05-21 16:26:58 @raveeshbhalla that said, some weekends I have to take a long walk and then sit in a coffeshop or a bar with some postits basically yelling 'fuuuuuuuuuuu**' in my head and writing down prompts for myself, so it's not a perfect science : )

2023-05-21 16:26:12 @raveeshbhalla that seems interesting - I view this as maybe kind of like 'training wheels' usecase, where if you get into the habit of writing more regularly you probably end up not needing chatGPT? But not sure. I used to have cold start problem but after 300+ stories it's got easier

2023-05-21 16:21:25 @alex_peys oh, I think they will, for sure! I also am wondering aloud if there are some parts of writing in terms of first draft generation that are important, and I have some FUD about people using LLMs to generate first drafts. But it might be this is just kind of a passing fad

2023-05-21 16:18:46 @alex_peys don't want to come off as a techno-skeptic here - I think LLMs are profoundly useful and interesting. It's more that I've had some disquiet about using them as a 1:1 substitute for some forms of human writing.

2023-05-21 16:18:15 @alex_peys one thing LLMs are useful for (and which I use them for today) is criticism/editing - e.g, quite a lot of the time first drafts of anything have big leaps of reasoning or clunky/repetitive sections. I typically run my memos through Claude before asking colleagues for feedback

2023-05-21 16:09:53 @40somethingnoob oh of course I'll keep going : ). Writing brings me great amounts of joy and sometimes I write a particularly good story and feel like I'm on cloud 9 for days. It's great!

2023-05-21 16:08:10 @mattbeane @erikbryn I can't shake the feeling there is something more fundamental/involved in writing that makes the automation and augmentation of it have broader effects. But I'm also conscious that this is the story of technology and I may just be being, dare I say it, an Older Generation

2023-05-21 16:07:26 @mattbeane @erikbryn I agree with this. It might be that writing seems to be a much more broadly used skill than navigation? E.g, I used to have to a) remember people's phone numbers (mobiles replaced this), b) read maps (replaced by smartphones), c) do basic translation when abroad (smartphones)

2023-05-21 15:54:58 This tweet brought to you by listening to Mount Eerie this AM and thinking about how Phil Elverum writes music to help him process a biblical tragedy that occurred to him and his family - the generated 'output' is made via a kiln of pain and healing and it's all bound together.

2023-05-21 15:49:26 It's fun to use LLMs for things like criticism or editing or performing operations over information, but I can't shake the feeling that if people start using them to avoid writing we'll collectively lose something. You know how much of my newsletter is written by Claude? 0%.

2023-05-21 15:47:27 In a very real sense, writing feels like a form of cognitive 'embodiment' - whether in fiction or non-fiction we are using our brain as an interface between the real and the ethereal and we produce these text-based artifacts as ways to ground the two worlds together.

2023-05-21 15:46:34 I write short stories because they help me process the great mystery that is the world and convert it into stories and things I can allow my mind to live inside and crawl over. Whenever I write I am desperately trying to understand the world and my place within it.

2023-05-21 15:45:58 One gnawing worry I have about the rise of LLMs is that, for me, writing IS thinking. One reason I spend so much time writing my newsletter each week is I haven't figured out a better way to think about AI than to sit down and write about it regularly.

2023-05-03 14:17:25 @jjvincent well deserved. now get me a picture of this "AI" https://t.co/dvfUIouC1U

2023-05-01 14:42:33 RT @sebkrier: Hackers too! (h/t ImportAI) https://t.co/6R4UL7RjYO https://t.co/hUt5OmBhVI

2023-05-01 00:07:00 @jachiam0 Yeah I've felt weirdly uncalibrated here also - always had in my head there was a v significant resource multiplier but the train of progress seems to really be humming along here

2023-04-29 03:07:32 RT @ashleevance: We are building a computing shell around the Earth. This is the follow on to the great dot-com build out and will serve as…

2023-04-29 03:07:30 RT @ashleevance: In the first 60 years of the Space Age, we managed to put about 2,500 satellites into orbit. Over the past two years, we'v…

2023-04-28 17:36:14 @realSharonZhou @LaminiAI @OpenAI @EleutheraI @cerebras @databricks @huggingface @Meta this looks v cool - congrats @realSharonZhou ! will write up for the newsletter

2023-04-26 16:59:56 RT @AnthropicAI: We are pleased to partner with @scale_AI to bring our AI assistant Claude to more organizations. Businesses can now creat…

2023-04-26 13:57:09 @4KTV @bentossell not sure what you mean by this - send a relevant arXiv paper?

2023-04-26 13:52:43 @bentossell speaking as someone who has been very close to all these LLMs for a few years, it's never been clear to me that they're actually that useful as tools for high-quality writing. They are, however, quite good for editing - tools to solicit feedback from, etc.

2023-04-25 20:46:25 @metaviv @rao_hacker_one https://t.co/9pAK6WRAf6

2023-04-25 20:37:23 @rao_hacker_one yes, I share this concern. What do you think is a reasonable number for them to consider?

2023-04-25 17:31:21 @julien_c @natolambert @huggingface oh cool, will read with interest, thanks for sharing

2023-04-25 17:29:16 @julien_c @natolambert @huggingface (i deleted the top level tweet as I think model was misleading so figured good to nip in the bud before it proliferated. thx!)

2023-04-25 17:28:44 @mmitchell_ai i deleted the original because i should of said 'service' rather than model. thx! https://t.co/RDyFgUAwX2

2023-04-25 17:26:46 @julien_c @natolambert @huggingface yup, uses Open Assistant right? writing up in Import AI

2023-04-25 16:29:38 Meanwhile, in the US the National AI Research Resource taskforce is requesting something on the order of $2.6bn over half a decade. The UK numbers are surprisingly large relative to the size of the country and especially when compared to the discussed US outlay.

2023-04-25 16:28:28 I'm at the National AI Advisory Committee annual meeting today and I just did some napkin sums: the new UK AI taskforce (£100m for sovereign AI) combined with new UK compute (£900m for AI compute) = ~$1.24bn. If you scale that up to USD GDP (~8X) you get ~$9.9bn.
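
Writing out the napkin sums from the two tweets above as a quick check (the exchange rate and the ~8x GDP ratio are the tweets' own rough assumptions, not sourced figures):

uk_taskforce_gbp = 100e6   # £100m sovereign AI taskforce
uk_compute_gbp = 900e6     # £900m AI compute
gbp_to_usd = 1.24          # rough rate implied by the ~$1.24bn figure
gdp_scale = 8              # US economy taken as roughly 8x the UK's

uk_total_usd = (uk_taskforce_gbp + uk_compute_gbp) * gbp_to_usd
print(round(uk_total_usd / 1e9, 2))              # ~1.24 ($bn)
print(round(uk_total_usd * gdp_scale / 1e9, 2))  # ~9.92 ($bn), versus the ~$2.6bn NAIRR request spread over five years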

2023-04-21 21:12:35 @nmaslej thanks for your excellent moderation (and good jokes)

2023-04-19 23:08:00 @MarietjeSchaake @sebkrier @vestager @Stanford congratulations, what a fantastic and important role!

2023-04-19 23:03:24 RT @ghadfield: Glad to finally have this paper with @jackclarkSF up--presenting our take on the landscape of AI governance, the critical de…

2023-04-19 21:05:47 @NoemiDreksler @Manderljung please do, very interested in this!

2023-04-19 17:06:54 @emmahumbling thank you!

2023-04-19 15:10:38 @mmitchell_ai this was being introduced to OpenAI to interview them at NeurIPS as I was the only journalist on the plane going there, so was for media coverage, not a job.

2023-04-19 15:04:57 @Manderljung when is the survey coming out? would be excited to cover

2023-04-19 15:03:38 @RobMcCargow @BusinessInsider @OpenAI @AnthropicAI haha, I still remember that. That was an interesting trip in general - it was just before we did the staged release of GPT-2 and I remember spending lots of that week talking to people about language models and how they were getting increasingly weird. Wild times

2023-04-19 14:54:02 Also, @pabbeel is the mysterious 'Berkeley professor' named in this piece : ). I did some of my best AI journalism writing about projects from Pieter and @svlevine and @chelseabfinn . Fun times!

2023-04-19 14:49:54 I recently realized that through writing Import AI I've read around 4,000 arXiv papers and spent $6,000+ on coffee. You can read and subscribe here! https://t.co/NRyFUkKznD

2023-04-19 14:49:20 Had a fantastic time chatting with @BusinessInsider about Import AI - a project I've worked on longer than my career at @OpenAI and @AnthropicAI . Writing it is one of the great joys of my life and so glad to write it each week for 34k+ subscribers! https://t.co/kUWelCfris https://t.co/bCTTvE4xaY

2023-04-16 19:39:03 @goingforbrooke @AnthropicAI basically, I think all actors have distinct incentives, and right now AI development is being decided pretty much exclusively by actors with private sector capital-return incentives. That doesn't feel robust to me and I'd like there to be other actors with other incentives

2023-04-16 19:37:46 @goingforbrooke @AnthropicAI i'm pretty skeptical that they represent the 'whole set' of actors given that they all come from one place. In my view AI is so important it'd be good to have more diversity among the actors building it. That's why I spend time advocating for 'national research clouds'

2023-04-16 19:25:13 This analysis via @AnthropicAI (building on work of many others) shows what is going on. A lot of decisions about the future of a powerful technology are being made by an increasingly tiny group of capital-intensive private sector actors. From: https://t.co/6R2Qa4bwMr https://t.co/B5jbcH0Wr4

2023-04-16 19:23:04 I think @mer__edith is right to point out that the big story underlying recent AI boom is more industrial capture and scale-up than any specific technology (though of course the technology helps). It'd be valuable for media to focus more on _who_ is building this stuff https://t.co/VvCNimNjWr

2023-04-14 18:19:54 @AISafetyMemes @sebkrier @robertwiblin oh no

2023-04-14 17:38:51 @sebkrier @robertwiblin Oh also it turns out shoggoths are mimetically very fit

2023-04-14 01:12:23 @_akhaliq play Crysis inside a world model of Crysis

2023-04-12 02:12:03 @yaroshipilov @HoskinsAllen Scroll to bottom : ) ! https://t.co/Z69XFZU6wj

2023-04-11 16:24:59 @jayashtonbutler thank you so much for your thoughtful feedback here. I really enjoyed writing this story and felt I had a good sense of place/atmosphere despite relatively sparse descriptions. : )

2023-04-11 16:21:05 RT @eugeneyan: Another paper, specifically the MACHIAVELLI benchmark, finding it more effective to have LLMs label data: https://t.co/uTUUK

2023-04-10 16:38:58 Import AI comes out tomorrow as I'm still tweaking this. Excited for everyone to read!

2023-04-10 02:53:29 This story made me cry while editing it.

2023-04-10 02:52:10 @Kleinspaces I've got mostly better. I have a couple of things I either have not published or would not publish (because too personal), but mostly the newsletter has helped me publish some of the best work of my life. Am very grateful for this outlet!

2023-04-10 02:17:04 @TenreiroDaniel +100. It is mainlining consciousness and unconsciousness at the same time. The best.

2023-04-10 01:55:18 @devonhdolan Thank you! It's where my most true things live

2023-04-10 01:14:47 It is such a fine feeling to be adrift from the world for a couple of hours, relaying some ephemeral signal and sculpting it into information and then transmitting it on further. I am excited for you all to read it. It means something to me, and I hope it does to you.

2023-04-10 01:13:25 I write a short story every week for Import AI and sometimes they're easy and sometimes they're hard and sometimes they're short and sometimes they're long. Today I had that experience where the story arrives in your head almost fully formed and you must simply transmit it.

2023-04-10 00:46:54 @jmdagdelen I also appreciate the clarification, thanks!

2023-04-10 00:46:31 @jmdagdelen Yeah in that case I meant that they were able to iteratively work with Segment Anything to help build out high quality datasets in partnership with humans in a way that seemed more cost efficient. Writing up in newsletter with a bit more detail.

2023-04-10 00:43:27 @ZacharyGraves @johnjnay https://t.co/k0X3l7lI5z

2023-04-10 00:38:29 @johnjnay Yeah that's what sparked this - read that paper (Machiavelli) this afternoon, then cycled around staring at the sky and saying WTF to myself (as is becoming a surprisingly frequent occurrence)

2023-04-10 00:37:12 @deepfates @tszzl That's RLWLF (reinforcement learning without leg feedback)

2023-04-10 00:30:42 Constitutional AI is another example of this - take a tiny amount of data and get a model to bootstrap off of it for further generation and model tweaking based on interplay with the data. Already works and in Claude and other models right now. https://t.co/7FmptdjINT

2023-04-10 00:26:57 One core production input for AI systems is labelled data. The implications of us crossing the Rubicon of systems being able to generate good quality data that doesn't exhibit some pathological garbage in, garbage out failure mode are profound. And it is beginning to happen!

2023-04-10 00:25:57 Something that will further compound acceleration of AI research is that models are now better and cheaper than humans at data generation and classification tasks - this is already true for some cases (eg Facebook's Segment Anything model, and GPT-4 for some labeling).

2023-04-09 23:34:36 @robinlavallee Thanks for letting me know, have a good day

2023-04-09 21:43:28 @chrisweston had a fun afternoon reading through this paper and will be in Import AI 324, coming out on Monday via the FORBIDDEN WEBSITE known as Substack! https://t.co/CIw0MlfB1m

2023-04-09 16:26:45 @yacineMTB

2023-04-09 16:24:45 @yacineMTB not only do I regularly donate, but I also bought basically all the merch! https://t.co/AKFrx5WX25 The hoodie is v nice quality

2023-04-09 15:55:16 @chrisweston this is a fun one: https://t.co/SWKIbUOfPZ

2023-04-09 15:52:13 One of the greatest stories of our lives is unfolding right now and you can read about it every day on arXiv for free. Absolutely wild.

2023-04-08 14:41:15 @tmiket @everyto I didn't write this

2023-04-07 00:57:14 Facebook's new Segment Anything model is pretty interesting - did well on an adversarial example

2023-04-02 22:18:25 @Simeon_Cps can you share your slides?

2023-04-01 16:45:58 Dutifully filling this out: https://t.co/ZqxlwCZ3SN

2023-04-01 16:45:04 Woah felt quite an impressive earthquake jolt in Oakland just now

2023-03-31 15:59:47 @evanjconrad @sullyj3 I have become pure brand energy

2023-03-30 23:10:12 @neilwlevine oooh I love Sibley but haven't been up there lately - on my list if the good weather keeps on!

2023-03-30 23:00:29 @maththrills haha this is great! thanks for sharing

2023-03-30 23:00:22 RT @maththrills: Inspired by @jackclarkSF's tweet about his hike being "violently green", thinking of album cover suitability, and easily d…

2023-03-30 22:09:17 @maththrills it's crazy! I was texting my friends 'everything is too goddamn green', as well. I guess this is the outcome of 4 months of rain

2023-03-30 22:06:57 @alyssamvance especially amazing this season given the huge amount of rain! really amazing stuff up there

2023-03-30 21:53:52 @lacker Trials above Claremont Canyon / Stonewall-Panoramic Trail etc.

2023-03-30 21:39:55 A wonderful hike in the Berkeley hills - everything is violently green and the air is fizzing with the rich smell of wet earth and plants fighting to grow. #VOTENATURE2023 https://t.co/BPvzLVc9O7

2023-03-30 20:58:06 @ruthstarkman Yes, everyone can access Claude via Slack

2023-03-30 15:59:12 @S_OhEigeartaigh I am and my awesome colleagues @sandybanerj and @avitalbalwit led this release (along with tons of other colleagues). I'm just excited for more people to get to talk to Claude!

2023-03-30 15:55:39 Stuff I do with Claude: - ask for critical feedback on memos - surface research terms where I've forgotten the word/phrase but can roughly describe it - brainstorm ideas - summarize things into bullet points - search over text for semantic concepts It's great!

2023-03-30 15:53:40 To everyone who has been asking me for Claude access - you can now add Claude to your Slack. Claude is great in Slack

2023-03-30 15:03:34 @jjvincent Thanks for the kind words @jjvincent , I spent a lot of time with this story and glad people have been enjoying it!

2023-03-30 14:26:40 RT @jjvincent: particularly lovely bit of fiction in @jackclarkSF's latest Import AI newsletter on Mechanical Human - the Mechanical Turk a…

2023-03-29 19:52:09 @DynamicWebPaige cool, thanks! yeah was just curious. I like the UI/design of the code, also

2023-03-29 19:38:03 @DynamicWebPaige What do you mean by 'sneak past the filters'?

2023-03-28 20:53:40 @avitalbalwit https://t.co/cTf3e1Wysg

2023-03-27 15:52:29 @RedTailAI some specific ideas and recs: https://t.co/vljhYFY6wa https://t.co/6R2Qa4bwMr https://t.co/wVMgXGf0at

2023-03-27 15:50:29 @RedTailAI yes, of course I do.

2023-03-27 15:36:34 @RedTailAI Yes. My intuition is it happens at some point, but that could be decades to millennia away - no real clue re timelines here. It's also always good to work hypotheticals seriously, so you talk about moral patienthood as though real to help you work the problem.

2023-03-27 14:36:48 @RedTailAI Note, my position here is potential sentience. I don't know that I'd strongly bet for sentience at this point, but I also wouldn't strongly bet against it, and I have no notion on which timeline sentience could even appear. I'm more noting the ethical challenges of its potential

2023-03-27 04:55:27 @jjluff @matthewclifford Very good term! May use

2023-03-26 21:03:44 @sheonhan that most interesting critics (mumford, jane jacobs, baudrillard, etc) seem to have. I feel like independent media via stuff like substack/weird newsletters is one of the better ways to fund it - recently have been pushing Import AI in this direction as an experiment 3/3

2023-03-26 21:02:53 @sheonhan unless you're writing about things with big, dedicated ad buys (e.g cars, wine, etc). Tech doesn't really advertize in same way, other than a tiny slice of enterprise stuff. Additionally, I think both journalism and tech counter-select for the interdisciplinary frame 2/3

2023-03-26 21:02:04 @sheonhan had a read - I agree with you re a culture of tech criticism. From my POV I think some of this comes down to economics - I was a fulltime journo for many years and tech has basically bricked the economics of journalism so criticism isn't so sustainable 1/3

2023-03-26 19:57:18 @sheonhan oooh interesting, will read. Thanks for sharing!

2023-03-26 19:57:00 (And to be clear, I think systems like GPT-4 are technological marvels that are going to let humans do a bunch of new, completely incredible and useful things. But as with anything extremely powerful, we must be aware of the implicit political power encoded in these artifacts.)

2023-03-26 19:54:08 More analysis of GPT-4, as well as the political implications of increasingly capable AI systems, in Import AI 321: https://t.co/GdbgH1i29X

2023-03-26 19:53:16 GPT-4 should be analyzed as a political artifact just as much as a technological artifact. AI systems are likely going to have societal influences far greater than those of earlier tech 'platforms' (social media, smartphones, etc). https://t.co/vnMPXPztdc

2023-03-26 19:51:34 @VSehwag_ thanks! I think it's a very impressive technological achievement, so worth thinking through the societal aspects deeply

2023-03-26 07:30:50 @matthewclifford Age of Em is a good portrait of transition period

2023-03-25 20:30:37 @karinanguyen_ Haha a good idea. I am unfortunately crap at scriptwriting but am trying to get better at it, so maybe one day

2023-03-25 20:17:50 @Trent_STEMpunk @XiXiDu Thanks!

2023-03-25 20:11:37 @Trent_STEMpunk @XiXiDu Every week at https://t.co/NRyFUkKznD

2023-03-25 19:05:29 @arram yes, we're definitely in the 'may you live in interesting times' era

2023-03-25 19:05:02 @jamescham come in on the water is boiling with possibility. It's a quantum foam bath!

2023-03-25 19:03:55 @Deepneuron you should write up some advice! would find helpful

2023-03-25 18:54:54 @the__dude98 haha actually a few things, some of which might be useful! My main takeaway was 'wow I'm having extremely literal dreams that are basically like a real job, I should definitely take a holiday!'.

2023-03-25 18:53:50 @powerbottomson1 wouldn't inflict that on worst enemy. but thanks!

2023-03-25 18:52:31 I've also had a lot of trouble sleeping because I keep on waking up at 4am and reading arXiv papers and being like 'yup, we're in the exponential' and then I just lie there thinking about all the changes that are basically locked-in at this point.

2023-03-25 18:50:36 Dreams I've had lately: - Abducted by a foreign intelligence agency for figuring out details of a black budget AGI project. - Argued for 3 hrs with someone re semiconductor policy. - Fought a doomed political battle to halt the deployment of unsafe AI. Anyway, taking a holiday!

2023-03-24 01:48:30 @eliotpeper Thank you!

2023-03-24 01:12:12 Wrote a story I hope to be useful for upcoming Import AI. The morality of machines will be different to those of people, but these alien ethical and moral frames may ultimately be a bridge. 'My fiction beats the hell out of my truth,' as jawbreaker said.

2023-03-23 22:00:28 @rasbt @omarsar0 @proales yup, seconded! https://t.co/CIw0MlfB1m amazing how much value add you can provide by reading the underlying research papers, haha

2023-03-21 04:48:57 Very confusing paper - superficially impressive, but the model trades off SOTA against ERNIE 3.0 titan (260bn parameters), and is undertrained (329bn tokens of data). Still, an interesting statement of intent and ambition - writing up for Import AI 322. https://t.co/rR0S3PNeMq

2023-03-21 02:27:26 @yonashav @arankomatsuzaki thanks for the link - though reading the paper also seems to suggest that 329bn is a significant undertrain here as well. I will dig in a bit. My interpretation is the paper is more a claim about having infra to train big models rather than about training a _good_ big model

2023-03-21 02:22:07 @finbarrtimbers @arankomatsuzaki they reference it in the paper! (20 = Chinchilla paper). very strange https://t.co/GNyY980B88

2023-03-21 02:19:18 @arankomatsuzaki isn't this, like, severely undertrained in terms of data?

2023-03-20 17:16:19 @natolambert Enjoyed the societal take - would be curious to see you sketch out in more detail what you see as unaddressed, eg the regulation point. Thanks for writing this!

2023-03-20 16:40:15 @jgrayatwork @AnthropicAI psyched to work together again!

2023-03-19 20:09:14 @copybymatt Ding ding ding! Via @mattparlmer https://t.co/Gy9ciU0bZm

2023-03-19 01:21:22 @cromas A+

2023-03-19 01:16:11 When I was a kid I read a lot of religious texts of my own volition (and I continue to do so). For thousands of years, people have wanted to understand/be closer to some theistic god. There's some lesson to this re: the current AI boom that I don't quite yet understand.

2023-03-19 01:13:35 Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture.

2023-03-17 21:01:49 @tszzl

2023-03-16 05:03:37 @nilshoehing

2023-03-16 01:18:39 Alpaca 7bn passes the 'helicopter test'. Writing up in Import AI 321. Subscribe here: https://t.co/is0y6grJXb https://t.co/jnmDu3Z9zO

2023-03-15 01:26:30 @mattyglesias So I'll typically just say something like "Read this essay and give constructive and critical feedback, paying particular attention to logical inconsistencies or clunky phrasing: $paste_whole_article" and find the results sometimes quite helpful.
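
A minimal Python sketch of that critique-prompt workflow; the prompt wording mirrors the tweet above, the file name is hypothetical, and how you actually send the result (Claude, ChatGPT, or another chat API with a long context window) is left to whichever client you use:

def build_critique_prompt(essay: str) -> str:
    # Prompt pattern described in the tweet above.
    return (
        "Read this essay and give constructive and critical feedback, "
        "paying particular attention to logical inconsistencies or "
        "clunky phrasing: " + essay
    )

if __name__ == "__main__":
    draft = open("draft.txt").read()      # hypothetical file holding a ~2,000-word first draft
    print(build_critique_prompt(draft))   # paste into Claude/ChatGPT, or pass to your API client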

2023-03-15 01:25:35 @mattyglesias yeah! You can do this with stuff like chatGPT or Claude as they have long enough context windows (8K+ tokens) that you can paste whole essays in

2023-03-15 01:15:25 @mattyglesias I've found these language models to be quite good at criticism - e.g, I give them a 2,000 word essay and ask for critical feedback. Sometimes this can help spot weird jumps in logic or areas of repetition. Don't find them useful for additive writing yet, though.

2023-03-14 18:25:20 @mcwm @dinabass @rachelmetz @technology lol I'm joking sorry!

2023-03-14 18:20:30 @dinabass @rachelmetz @mcwm @technology nailed it

2023-03-14 17:40:53 RT @adamdangelo: Today we are launching Poe subscriptions, which will provide paying users with access to bots based on two powerful new la…

2023-03-14 17:40:08 @rachelmetz @mcwm @technology it's not JCVD I'm afraid

2023-03-14 16:00:59 @Leoagua1 @d_feldman there are numerous public estimates in public papers of low single-digit millions

2023-03-14 15:47:17 @d_feldman those gpt3 numbers are way off

2023-03-13 15:55:40 @agstrait @alexkozak @JMateosGarcia @NIST @Elliot_M_Jones yes, would love to read/give feedback if helpful! And yes, requires a bunch of resources

2023-03-13 03:35:16 They all deserve this so much. An awesome film.

2023-03-13 03:29:29 100% deserved. I cried during Everything Everywhere All At Once multiple times. A beautiful and fresh performance.

2023-03-13 00:11:21 @andy_l_jones I debated deleting my original tweet but I think that's kind of lame/bad habit to get into, so I retweeted the statement onto my timeline and replied in thread and am eating my proverbial hat

2023-03-13 00:10:13 @andy_l_jones Yeah, I have updated extremely positively given the FDIC statement!

2023-03-12 23:23:13 RT @jackclarkSF: https://t.co/uFwPwAIWtB

2023-03-12 23:23:09 https://t.co/uFwPwAIWtB

2023-03-12 23:07:32 https://t.co/0XyQsdaBcC

2023-03-12 20:55:26 @sbeckerkahn mostly just means this whole situation making me depart from my typical chipper idealism with regard to AI policy and consider a world where it's much harder for AI policy to be effective

2023-03-12 20:22:09 @divergiment Yeah that's a good call out, though perhaps speed with which this can unfold could be different. Thanks

2023-03-12 19:02:38 @cromas I won't claim to be smart enough and familiar enough with this area to have a precise solution, I'm more worried about bad hypotheticals like bank runs on other smaller banks and other contagion effects. I figure there must be some way to reduce damage to a load of companies?

2023-03-12 18:59:48 @yonashav I think some of this relates to proactive loud communications and getting out in front of issue. I also think SV has some privileged information about scale of impacts/effects, eg sense of what does immediately for payroll etc. Also, delighted to wake up on Mon and be wrong!

2023-03-12 18:58:16 @andrewthesmart Honestly I'm not sure and it's a good point. There was a similarly rapid herd shift from crypto to AI a few months ago. It's odd. Maybe SV companies are more like starlings than we thought. A corporate murmuration!

2023-03-12 18:56:56 Note: I really hope we figure stuff out and FDIC comes through, as do authorities in other places (e.g. whoever in UK is responsible for making sure SVB UK doesn't cascade).

2023-03-12 18:47:27 Similarities: - Seeming inconsequential thing happens which has a contagion effect - Need rapid and bold policy actions to avert contagion - Gulf in understanding between those directly dealing with the problem and lawmakers

2023-03-12 18:46:25 Gotta admit that watching this SVB situation unfold is blackpilling me on AI policy a bit

2023-03-12 17:19:20 @Ted_Underwood Chinchilla + instruction following

2023-03-11 23:26:21 @RemmeltE Will be writing about the societal implications in Import AI as I regularly do when covering this kind of stuff. Thanks for sharing your feedback and have a great day

2023-03-11 23:25:52 @RemmeltE I mostly think there's value in reporting the state of things and paying attention to what is going on in the open source / distributed ecosystems. It's also very notable how IF stuff makes smaller models more powerful which has implications for the broader ecosystem.

2023-03-11 21:11:59 @togethercompute hiya, this is interesting &

2023-03-11 19:35:30 @alexkozak @JMateosGarcia @agstrait @NIST for example, you no longer want to evaluate models on zero-shot or few-shot prompting alone

2023-03-11 19:34:58 @alexkozak @JMateosGarcia @agstrait @NIST most policymakers assume the industry has converged on some benchmarks/evals for large generative models that mirror earlier (easier) benchmarks for computer vision - this isn't the case. It's also getting increasingly hard to build benchmarks for cutting-edge models

2023-03-11 19:34:04 @alexkozak @JMateosGarcia @agstrait @NIST a lot of the issue is there are some benchmarks for a narrow slice of things (e.g, fairness), but most of these benchmarks have a huge set of problems (which their creators know about), and we're constantly expanding model capabilities over time.

2023-03-11 18:00:56 @sir_deenicus while I agree that giving the right prompt improves performance, I think the key thing here is that IF-tuned models don't take as many bits to calibrate as non-IF models. Though helpful to elicit a good story out of OPT, thanks for sharing!

2023-03-11 17:47:18 @bahree correct, it's not a 1:1 comparison. The point I'm making is instruction-tuned language models are really good relative to non-instruction-tuned ones

2023-03-11 17:44:43 @alexkozak @JMateosGarcia @agstrait to @agstrait point - I agree! I really want there to be a third-party benchmark/eval system here. I spend a lot of time advocating for @NIST to do this, but NIST needs way more resources to do this effectively. Eager for ideas here - it's a big issue!

2023-03-11 17:43:53 @alexkozak @JMateosGarcia @agstrait I'm also continually developing more implementable/shovel-ready ideas for monitoring (e.g this paper here https://t.co/vljhYFY6wa) and push those every chance I get. Unfortunately policy speed != AI development speed, so we're in a tough situation.

2023-03-11 17:43:02 @alexkozak @JMateosGarcia @agstrait so we're continually doing evals (and publishing on them - see, e.g, red teaming, model-driven evals, etc), but also I am regularly going to DC/Brussels/London and stressing to policymakers that evals maturity at all of the labs is immature relative to scale of problem

2023-03-11 17:42:22 @alexkozak @JMateosGarcia @agstrait Chiming in here - we invest a huge amount in measurement/evaluation/assessment of our models, but as models scale the capability surface keeps expanding faster than the ability to eval the whole surface, so while we invest a lot here, we also need gov to scale up stuff

2023-03-11 17:28:28 @gharik tried a few variations at Temp 1 and 0.9 and most of the time it didn't end up writing the story and did the classic thing of either continuing the frame of instructions, or bounced to diff story. Might have got lucky with temp 0.7 on first try. thanks for suggestion!

2023-03-11 17:25:45 @suchenzang I imagine it's way better - my main take here is it's interesting to see how powerful instruction tuning is and really lets smaller models punch above their weight. Also, is there an IF-tuned OPT thing available anywhere? (Or perhaps someone could try my above prompt!)

2023-03-11 17:17:56 @gharik OPT attached. OpenChatApp is the default here, so not sure: https://t.co/rftZJUpHZw I feel like these are reasonable settings for OPT but if you think I'm artificially sandbagging it lmk! https://t.co/MuJokwwNPy

2023-03-11 17:14:37 Writing up for Import AI 320

2023-03-11 17:13:56 Playing around with the latest ChatGPT replication (OpenChatApp) and it's a) quite good, and b) neatly illustrates how crazy-powerful instruction-tuned models are compared to stock LLMs. Compare OpenChatGPT (20B params) on left to OPT (GPT3-replication, 175B params) on right. https://t.co/UqIhnp2aGg

2023-03-08 15:44:41 Very interesting post about LLaMa performance and ethos for distributing models. Thanks for sharing @theshawwn ! https://t.co/p6L7Cp5LRd

2023-03-07 18:17:45 Claude on Android! https://t.co/gLdN40aZdu

2023-03-07 05:24:39 @Deepneuron @tszzl just white knuckling the exponential, but hanging in there!

2023-03-07 05:09:34 @Deepneuron @tszzl you should definitely write about this!

2023-03-07 03:28:04 @AISupremacyNews @SubstackInc thanks so much! excited to cook up some fun experiments

2023-03-06 21:00:13 Very surprised at the way Facebook handled the LLaMa model launch https://t.co/5Xlcxo6OpH . It feels like half-releasing models and them then circulating on torrent networks is a pretty unfortunate outcome. Would love to understand some of the thinking behind it. https://t.co/UpGpZJy2kH

2023-03-05 21:48:57 Excited to continue this journey with you all! If things go well I want to stand up things like Import AI interview series and such - if I get enough paid subscribers or founding members I'll be able to put people on contract to help scale up the newsletter even more!

2023-03-05 21:48:03 The general idea is that if people see value in it they can be a good audience to 'workshop' these ideas with before later making them public. This means there's a value to subscribing (as some of the ideas are about time-contingent trends in AI), but preserves public utility.

2023-03-05 21:47:19 One of the things I've found valuable about writing the newsletter is having an audience keeps me accountable for regularly reading @arxiv / other AI news sources and regularly publishing ideas out of it. I now want to be accountable for publishing broader thoughts about AI.

2023-03-05 21:46:16 Import AI 319 will be coming out via @SubstackInc tomorrow. Very excited to start a new chapter of the newsletter here. In particular, I'm going to use a paid plan to publish a monthly (or more frequently) 'Import (A)Idea' for subscribers. These will be made free at a delay.

2023-03-05 18:51:23 @mbrendan1 @SamoBurja Jared Diamond - Collapse is also pretty good! But tainter is the best on this

2023-02-27 16:58:04 @GBxGlobal thanks for the promotion but I'm a co-founder - Dario Amodei is CEO.

2023-02-21 10:01:28 @jjvincent It's going to be a really relaxing decade

2023-02-20 15:52:44 @generativist Is this... Rubberbossing ?!?!

2023-02-20 15:35:10 @sai_prasanna hell yeah! I agree with this

2023-02-20 14:25:35 For those not familiar with the term: https://t.co/6d5sD2zU0w

2023-02-20 14:23:37 Rubberducking with language models is pretty effective, these days. Having trouble thinking about something? Talk to the rubberduck! RHLF means they're kind of like non-judgy active listeners and by discussing ideas with them you can surface your own cruxes.

2023-02-16 18:25:01 RT @AmandaAskell: My favorite of Claude's excellent recommended titles for this paper when I gave it the abstract was: "HELP! My language m…

2023-02-16 16:50:12 Though I don't think 'just ask the AI to not be bad' is going to solve many hard aspects of safety/alignment, I am continually surprised at how well prompting/asking works for a broad range of problems. Cool work from @AmandaAskell and Deep Ganguli and others at @AnthropicAI

2023-02-16 16:49:16 justaskAItobenice.png https://t.co/zG41reG78p

2023-02-15 19:22:19 @ohlennart engineering projects aren't sufficiently original to get you tenure. Academia also doesn't have a culture of hiring engineers/infra teams to support scale-up computing.

2023-02-15 04:20:15 I'll one day write about the experience of trying to explain LLMs before anyone gave a fuck and how strange and alienating it was, but not today! Just remember - 4 years between gpt2 and where we are right now. Prepare for the next four.

2023-02-15 04:18:53 I remember sitting in an airport in England in Dec 2018 generating samples from the 1.5B model and feeling the gravity of the advance. We all had a visceral intuition for what it meant, but of course when we told people what we thought they said we were doing pointless hype.

2023-02-15 04:15:13 GPT-2 was announced four years ago today. https://t.co/whUSVDX7oY

2023-02-15 03:57:07 @rachelmetz @technology Great team, great hire!

2023-02-14 00:30:05 Cc @giffmana who I figure might know?

2023-02-14 00:29:45 Iirc there's some upper-limit to top-1 accuracy on imagenet due on mislabeling of underlying dataset. Is this true and is it documented anywhere? Want to clarify in this year's @indexingai report.

2023-02-13 22:35:37 @AlexCEngler happy to help!

2023-02-13 01:05:21 So beautiful to be at peace with waves and their susurration and to look beyond to rain-drunk green hills. #VOTENATURE2023 https://t.co/euvEcgVe48

2023-02-13 00:54:48 @powerbottomdad1 Curious for your take on this story by him... I think about it with ref to AI a lot. https://t.co/6je6Nxklxv

2023-02-13 00:43:10 @powerbottomdad1 Good read!

2023-02-13 00:42:48 @JohnHelveston Fair. But gpt2 was a far worse chess player.

2023-02-12 21:50:12 @laurenkunze Ooh I wasn't aware of that! Thanks for the pointer. Anything in particular I should read?

2023-02-12 21:16:49 @amolitor99 but I also recognize there's some chance I'm wrong and this stuff is about to hit a wall, which would be fascinating to see. Hopefully we can check back in a few years and see what is up

2023-02-12 21:16:10 @amolitor99 I'm loudly exclaiming the thing is a rocket ship because we have barely any of the policy/regulatory institutions needed to ensure the rocket ship doesn't immiserate humanity! I also think we've basically seen nothing yet in terms of capability expansion

2023-02-12 21:04:26 what extremely creepy parasocial AI relationships look like (screenshot of someone in a FB group really upset about removal of the Erotic Role Play stuff) https://t.co/qpnzyQ61yz

2023-02-12 20:58:48 RT @nonmayorpete: Replika, the "AI Friend" app, got big by advertising its NSFW abilities. Now it's turning off its "erotic role-play". P…

2023-02-12 20:55:04 Another narrative I'm reading is that perhaps Apple/Google finally saw all the weird ads on TikTok and put pressure on Replika or it'd get taken out of the app stores. Either way, interesting stuff!

2023-02-12 20:51:44 @chrismarrin image recognition has vast economic utility today and is deployed to literally billions of people via on-phone stuff across Apple and Google - it's not going away. Ditto translation.

2023-02-12 20:50:09 Free alpha for AI journalists: Replika seems to be pivoting its business model away from the scuzzy e-sex roleplay stuff. The gossip in FB groups is they switched to an underlying model that doesn't support erotic roleplay anymore. If I had time I would report this out!

2023-02-12 19:44:03 @dileeplearning Sounds like you should email the authors who gave the paper the title

2023-02-12 19:43:31 @bradzaguate Same, tbh. My personal goal is to get an AI-infused robotic exoskeleton (I have a history of back trouble)

2023-02-12 17:26:46 @Grady_Booch @EMostaque thanks for sharing your wisdom, I appreciate it! always eager to read and learn, especially about 'bear cases' on hype things

2023-02-12 17:26:14 @Tim_Dettmers DM'd ya some more questions re SWARM. Thanks for your patience!

2023-02-12 16:39:21 @richardludlow @erich_elsen rekt also, true.

2023-02-12 16:38:45 @Grady_Booch @EMostaque I'm also curious if you've got any particular stuff to recommend here that might work as 'lessons from the winters' (or if you've considered/have writing something yourself)? Would love to read more!

2023-02-12 16:35:36 @groby yeah, I think training systems that funhouse mirror reflect back the internet has a huge number of downsides as well. You might want to check out this report on a 'National AI Research Resource' to see how people are thinking re diff paths https://t.co/HFrbPHKPDu

2023-02-12 16:26:54 @groby A bit on the nose, but yes: https://t.co/JTrGRD1bFq

2023-02-12 16:25:01 @muscovitebob not really - I myself find thinking about this so confusing that I still do lots of normal stuff, like pay into my 401k etc.

2023-02-12 16:24:00 @Grady_Booch @EMostaque I'm aware of the winters (and read stuff like the Lighthill report, etc). What seems different this time is the enhanced amount of economic utility of these systems - if you stopped all research and just focused on engineering, feel like NLP/CV/Translation all have big effects

2023-02-12 02:04:33 @eric_is_weird This is a useful example. Have you written anything about this anywhere or have any links to contemporary writing/papers that note this? Would find helpful!

2023-02-12 00:47:29 @pwlot @tdietterich I have, but it still feels less interesting to me than when I go to the pub with people of different backgrounds and chat. (This is, admittedly, a high bar given this stuff could barely do a sentence a decade ago)

2023-02-12 00:45:27 @dlowd Idk, I followed Watson very closely and it didn't really sit on a series of past systems and was clearly custom designed around a specific problem with a specific architecture (I obsessively read the papers). This stuff sits on a diff and much more proved out tech foundation

2023-02-12 00:43:45 @tdietterich So far it seems like... A lot! But I also am yet to see an LM produce a truly original idea - the sort of idea where when you're talking with colleagues you collectively figured out a correct and novel answer. That could end up being an upper limit on this stuff.

2023-02-12 00:35:38 Anyway, how I'm trying to be in 2023 is 'mask off' about what I think about all this stuff, because I think we have a very tiny sliver of time to do various things to set us all up for more success, and I think information asymmetries have a great record of messing things up.

2023-02-12 00:34:46 @jfischoff I'm not sure but, per some of the other papers (and others I can't dig up while waiting for this train), this phenomenon is showing up generally in a bunch of places. I wouldn't be so interested in it if it showed up purely in one result

2023-02-12 00:33:27 We can also extract preference models from LMs and use those to retrain LMs via RL to get better - this kind of self-supervision is increasingly effective and seems like it gets better with model size, so gains compound further https://t.co/FmQCaILcTN

2023-02-12 00:26:26 We can also train these models to improve their capabilities through use of tools (e.g, calculators, QA systems), as in the just-came-out 'Toolformer' paper https://t.co/hf3n4RgSd7 . Another fav of mine= this wild paper where they staple MuJoCo to an LM https://t.co/VL9ggQUe8k

2023-02-12 00:25:11 There's pretty good evidence for the extreme part of my claim - recently, language models got good enough we can build new datasets out of LM outputs and train LMs on them and get better performance rather than worse performance. E.g, this Google paper: https://t.co/0jfJIaEQR7

2023-02-12 00:24:08 A mental model I have of AI is it was roughly ~linear progress from 1960s-2010, then exponential 2010-2020s, then has started to display 'compounding exponential' properties in 2021/22 onwards. In other words, next few years will yield progress that intuitively feels nuts.

2023-02-07 16:41:41 @peterwildeford Thanks for reading, I'm paying increasingly close attention here

2023-02-07 03:42:11 @mattparlmer Dm'd you. Really sorry you're gonna through it. Agi will get us all very cheap and excellent exoskeletons

2023-02-06 18:33:57 @megyoung0 model cards and datasheets seem like pretty useful interventions and it'd be interesting to think about how to wire these into corporate incentives, potentially through policy recommendations

2023-02-06 17:51:51 @cdossman thanks for reading! I enjoyed writing this one

2023-02-05 18:26:36 @Tim_Dettmers DM'd you some qs about SWARM

2023-02-05 05:22:25 @Stephan90881398 Pure, artisanal, handwritten horror.

2023-02-04 21:29:31 Excited for you all to read 'The Day The Nightmare Appeared on arXiv', a short story coming in Import AI 317.

2023-02-03 21:21:32 RT @goodside: Poe chat is out on iOS, and includes *both* OpenAI and Anthropic models! If you want to talk to Claude (Anthropic's competit…

2023-02-02 14:15:52 @ohlennart @scienceisstrat1 @Noahpinion @ylecun @erikbryn @amcafee house style looks like The Economist

2023-01-31 15:20:52 @atarkowski @dinabass @OpenFutureEU Can you say a bit more about why it isn't an easy sell and/or point me to some writeups here? Would be interested to read!

2023-01-31 00:50:01 @dmitri_dolgov please come to Oakland, it's just across the bridge. Also please do the bridge.

2023-01-30 18:03:31 @avizvizenilman Haha yes please

2023-01-30 17:34:46 @xlr8harder I think startups will 100% do loads of interesting/ambitious things here, but they're still companies operating in markets. I feel like there's a bunch of AI capabilities that can be explored which are inherently hard to commercialize and would like to see more experiments there

2023-01-23 19:28:50 @TheodoreGalanos Yup! What I meant was I was really excited about stuff like MERLIN and other RL agent approaches that tried to get agents to build world models. Turns out better thing is perhaps an LLM which can be a plug-in world model (e.g SayCan, Mind's Eye, etc)

2023-01-23 15:58:08 @dileeplearning Always happy to talk just let me know when you're in SF

2023-01-23 14:58:49 Feels like there's some weird signal that I'm not very excited about LLMs - in many senses I feel like this tech is locked in with regard to trajectory. Instead of feeling excited about it I keep trying to generate bear cases and the bear cases keep getting overtaken by progress.

2023-01-23 14:57:28 Things I thought were useful but also basically predictable: - Computer vision based on DL - Language models as all-purpose data transformation engines - Code models 2/2

2023-01-23 14:56:33 Things I was enthusiastic about (and wrong about timelines of): - Self-driving cars - Reinforcement learning as main path for powerful systems - World models (ditto) - Industrial robots combining with RL 1/2

2023-01-20 21:41:38 @hlntnr @maosbot What was the small one?

2023-01-20 00:01:59 @tahirwaseer oh, I just meant my general writing process. I feel like llms are mostly useful just as bullshit checkers/calibrators, but are not that integrated into my overall writing process yet. This might change, but I think I write quite weird things, so may be a while

2023-01-19 23:32:43 @tahirwaseer there isn't a particularly useful pattern - sometimes I oneshot things that need very few tweaks and sometimes it'll take ten iterations till I'm happy with it.

2023-01-19 19:12:27 @nazneenrajani @AnthropicAI elo used a lot in RL - go, openai five, etc

2023-01-19 17:52:40 @jeffreyhuber MY BIG BLOB OF MATRIX MULTIPLICATIONS IS _BULLYING_ ME!!!!!

2023-01-19 17:52:27 @jeffreyhuber I actually left my desk and got a coffee and complained to my human colleagues about how badly Claude negged me, lol, rekt, etc.

2023-01-19 17:40:48 @FelixHill84 Yeah, I think augmentation is wrong word here. But ER does seem like a good way to be able to dump more relevant data into the learning process.

2023-01-19 17:18:24 Tfw yr language model negs you (A recent workflow I've started using is 'write first draft' >

2023-01-18 19:29:45 @togelius Yeah

2023-01-18 19:29:32 @tdietterich Good clarification, thank you!

2023-01-18 17:58:21 after chatting with a few colleagues and texting a few outside experts I'm pretty sure this is right, fyi. An even simpler way to think about this is that these breakthrough results are mostly a consequence of capital - using enough compute and data for long enough.

2023-01-18 17:34:34 One rule of last decade has been 'the simple stuff works', and perhaps why Q-learning worked so well is that it really just lets you have an architecture that is naive to environments because it naturally cycles over relevant data a lot. Thoughts?

2023-01-18 17:33:42 I mean obviously experience replay has a role in the learning process, but if you zoom really far out, it's just a way to get an agent to pay close attention to temporal slices of data from interacting with environment and train on that... https://t.co/h75gjFBjpi

2023-01-18 17:32:58 When I think about the last ten years of AI, I feel like a lot of it got sparked by the arrival of relatively simple approaches that could soak up a ton of data and/or computational resources. In hindsight, isn't 'experience replay' in Q-learning basically data augmentation?
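
To make the "experience replay is basically data augmentation over temporal slices" framing in the thread above concrete, here is a minimal textbook-style replay buffer of the kind used in DQN-style Q-learning (a generic sketch, not drawn from any specific paper mentioned here):

import random
from collections import deque

class ReplayBuffer:
    """Store transitions from environment interaction and re-sample them many times."""
    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int = 32):
        # Random minibatches decorrelate the data and let the agent repeatedly
        # revisit temporal slices of its own experience during training.
        return random.sample(self.buffer, batch_size)

Each stored transition gets trained on many times, which is one simple way to see why the approach "soaks up" data and compute so effectively.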

2023-01-17 00:45:16 @ESal Thank you! Glad it's helpful

2023-01-16 21:40:53 @alexeyguzey I, unfortunately, am not cuter this way

2023-01-16 21:40:38 @paul_scharre @nmaslej @Miles_Brundage @jjding99 @mattsheehan88 @tdietterich @ylecun @indexingai @etzioni @CSETGeorgetown @DataInnovation @Noahpinion there have also been impressive things like CogView and other models. Plus, the Chinese ecosystem around video object detection via YOLO variants is very mature/impressive. I'd use lens of 'systems' to augment lens about papers, basically

2023-01-16 21:39:47 @paul_scharre @nmaslej @Miles_Brundage @jjding99 @mattsheehan88 @tdietterich @ylecun @indexingai @etzioni @CSETGeorgetown @DataInnovation @Noahpinion It's generally useful to look at research artefacts / models rather than pure papers. E.g, it's pretty notable that GPT-3replication GLM-130B (Tsinghua) is basically better than BLOOM (HuggingFace) and OPT (Facebook) at a bunch of capabilities

2023-01-16 20:48:48 @natolambert Thanks so much! I miss some weeks myself but still probably averages out to 0.8 issues per week per year at least

2023-01-16 20:29:21 @HaydnBelfield @deepfates @AnthropicAI

2023-01-16 20:28:05 @natolambert I think it's crucial that it's independent from my dayjob, to be honest. This is why it's such a crazy project - I work a lot of hours at my dayjobs and then I have a whole other evening job. But it keeps me sharp and also gives me independent leverage, which is important to me.

2023-01-16 20:26:52 @natolambert Simple answer - I didn't! I started Import AI before I joined OpenAI and always have kept it separate - I write it on trains and planes and on evenings and weekends and early mornings, predominantly. Only way it intersects dayjob is I email myself papers I stumble across at work

2023-01-16 18:37:56 @Dominic2306 @tshevl Yeah I sort of sneak in my weirdest thoughts into the email subscriber version. I'll probably change this so it mirrors to WordPress though

2023-01-16 17:44:24 @BillLeaver_ @deepfates @AnthropicAI thanks! some people tell me they skip the rest of the newsletter and just read the stories, which delights me : )

2023-01-16 16:19:00 @tahirwaseer thanks so much for being an OG subscriber

2023-01-16 16:15:31 @mezaoptimizer I'm trying to do some of that this year! on my list

2023-01-16 16:07:44 @mezaoptimizer compressing. I think about timelines in terms of capabilities I expect to have effects in the world. I don't really have precise timelines about AGI because I think AGI is inherently fuzzy and hard to define. I do have timelines about capabilities that influence geopolitics

2023-01-16 15:55:23 @deepfates @AnthropicAI oh and finally, living in joy and love with my partner and cherishing the rich emotional fabric of being a living being on this planet! and being kind and spending time with friends. touching grass. seeing more punk shows. all that good stuff. : )

2023-01-16 15:54:14 @deepfates - hiring a bunch of people at @AnthropicAI to free up my own cycles for above - spending way more time trying to build evaluations/measurements of AI systems - identifying actions that seem a) useful and b) likely to annoy some people, as most decisive actions involve annoyance

2023-01-16 15:52:52 @deepfates few things: - along with Import AI, going to try and do more public writing about some of the things that are highly likely to happen. - spending more time trying to convince people at various labs to be more public themselves - trying to 'scale myself' so I can do more 1/2

2023-01-16 15:50:08 @tweet_prat the lines are becoming increasingly blurry : )

2023-01-16 15:06:52 @pstAsiatech thanks Paul!

2023-01-16 15:06:28 @rasbt thanks a lot! I write them for the readers, and messages like this make my day

2023-01-16 14:16:15 On New Year's Day 2023 I said I'd spend this year living as if I fully believed my own timelines, so I shaved my head and made some plans. There's a lot to do and too few people, but isn't that always the case? The times, they are a-changin. https://t.co/LNguAwxjGU

2023-01-16 04:22:49 Import AI cracked 30,000 subscribers recently. Thanks to everyone for reading! Writing it is one of the great joys of my life.

2023-01-15 19:00:55 @mwilcox Favorite book I read last year!

2023-01-13 20:57:33 @kimtsherwood hey Kim congratulations, this is awesome! There was something special in the water back at UEA in those days :^)

2023-01-13 04:35:02 It was lovely to know you all. https://t.co/6encoUZMar

2023-01-12 17:11:56 @VictorLevoso @gwern helpful thread - I'd noticed they nerfed the block-hit thing but hadn't seen the vector conversion for inventory and health, thanks for flagging

2023-01-12 06:10:16 @stephenroller @sir_deenicus @terrible_coder @marian_nmt @Tim_Dettmers @StasBekman that's helpful context, thanks for sharing! would be cool to read a retro sometime

2023-01-12 03:24:02 @FelixHill84 Anyway, it's cool work!

2023-01-12 03:23:50 @FelixHill84 But I am pretty confused people aren't making a bigger deal of the results. I've read the paper this evening and I don't think stuff is being sandbagged. DreamerV3 seems very data efficient and has good performance against a multitude of compares on diff benchmarks https://t.co/jlt1j2zRMk

2023-01-12 03:22:52 @FelixHill84 I guess I'd expect some more tweets about it, a DeepMind blog about it, possibly even some press stuff. It might just be that the Sauron Eye of 'vibes' has moved to LLMs.

2023-01-12 02:02:51 DreamerV3 seems like an RL agent that 'just works' across a vast set of environments. I feel like either it has some weird issues I'm not seeing, or people just aren't that jazzed about RL these days. Anyway, worth reading! https://t.co/zDOx9uWsnY

2023-01-12 02:01:54 My extreme inner-inside-wonky take on DreamerV3 (lovely paper) is it's unusual to see a @DeepMind paper with an impressive result (world models that work! cracking MineCraft diamond challenge!) with a tiny number of authors (here: 4). No idea if this means anything though!

2023-01-11 17:29:12 If you have questions, my DMs are open! This job will be awesome... but don't just take it from me - our AI assistant makes a pretty compelling case for doing this job as well! https://t.co/cJyjYN8NhN

2023-01-11 17:22:41 @JosephJacks_ @AnthropicAI Challenges: - Differentiating this stuff can be somewhat subtle, so you need to walk tightrope between 'explain how the tech works in an intelligible way' and 'don't inaccurately portray the tech'. - Pace

2023-01-11 17:21:44 @JosephJacks_ @AnthropicAI Challenges: - AI is drawing a ton of broader attention so you're both communicating to researchers and customers, as well as communicating with society writ large. - AI systems have large-scale impacts which will have a range of downstream effects in the world.

2023-01-11 17:19:31 The JD is here: https://t.co/AM7u5cq2rD Given that we work on large-scale generative models, you'll also be able to do all kinds of novel comms experiments, given that our own technology is an increasingly useful writing assistant. Let's have fun!

2023-01-11 17:18:28 I'm hiring: Come work with me as a Director of Communications for @AnthropicAI . This is a senior role that would suit someone who both enjoys doing creative IC work as well as building and scaling teams. You'll get to do comms around frontier AI systems + can do fun experiments

2023-01-10 00:49:54 @PeterLoPR What do you think?

2023-01-10 00:44:40 @Stone_Tao I really love multi-agent sims, AND i love emergent behaviors.

2023-01-09 19:20:01 To put my cards on the table, I've followed BLOOM for years as it's an example of a collective trying to build a model to counter forces of centralization. But these kinds of initiatives are not going to be that successful if the models they produce don't have great capabilities.

2023-01-09 19:16:26 Does anyone have any links to places where BLOOM (the open source GPT3-esque model from @BigscienceW) is being used? I keep on fiddling around with it in various forms and it's not clear it has many useful capabilities. Perhaps there are multilingual uses? https://t.co/HV1san6RG5

2023-01-08 17:03:07 @SashaMTL @huggingface @AltImageBot1 testing @altimagebot1 https://t.co/6dt9nOcxSP

2023-01-08 17:01:08 @marky_red @AltImageBot1

2023-01-08 17:00:37 @TarekFatah @JustinTrudeau @AltImageBot1

2023-01-08 01:51:52 @aazadmmn yeah, I think stuff like mechanistic interpretability is really important here, or we end up with opaque machines of great power

2023-01-08 01:35:19 @deliprao Yeah I agree here. Think this breaks my weird food analogy. Maybe the general rule is just 'simpler the better'. E.g., in an ideal world you wouldn't have to worry about networking, you'd just have a single vast chip (I guess this is the Cerebras hypothesis etc)

2023-01-08 01:28:22 @deliprao Great point! Some ideas here: https://t.co/2w2sCdxE5e So I think maybe it's like Cooking process (training) Ingredients (inputs) Tools (things you use to assemble ingredients and infra to support cooking). Stretching analogy to breaking point but maybe network is the oven?

2023-01-08 01:23:23 Also, embedding everything into same space required some very clever stuff. Maybe the trick is moving your complexity into the tools you use to assemble the proverbial dish, but the cooking process should be simple.

2023-01-08 01:22:26 Perhaps one reason why GATO from DeepMind displayed cool behavior was it exhibited a lot of simplicity - just embed everything from diff modalities into same space and do RL on top. https://t.co/zuqvyxjh0Y (simplicity here is a compliment! Simplicity hard to arrive at)
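
For a concrete picture of the 'embed everything into the same space' idea above, here is a minimal sketch, not GATO's actual implementation: the vocabulary sizes, dimensions, and the way images get quantized into tokens are all assumptions for illustration. Text tokens and image-patch tokens share one embedding table, offset so the two vocabularies don't collide, and are concatenated into a single sequence that a standard transformer could then model.

import torch
import torch.nn as nn

TEXT_VOCAB = 32000    # assumed text vocabulary size
IMAGE_VOCAB = 1024    # assumed codebook size for quantized image patches
D_MODEL = 512

# One embedding table covers both modalities; image tokens are offset so they
# land in their own region of the shared vocabulary.
shared_embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)

def embed_text(text_token_ids: torch.Tensor) -> torch.Tensor:
    return shared_embed(text_token_ids)

def embed_image(image_token_ids: torch.Tensor) -> torch.Tensor:
    return shared_embed(image_token_ids + TEXT_VOCAB)

# Fake inputs: a short text snippet and a few quantized image patches.
text_ids = torch.randint(0, TEXT_VOCAB, (1, 8))
image_ids = torch.randint(0, IMAGE_VOCAB, (1, 16))

# Concatenate into one sequence; a single sequence model now sees both.
sequence = torch.cat([embed_text(text_ids), embed_image(image_ids)], dim=1)
print(sequence.shape)  # torch.Size([1, 24, 512])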

2023-01-08 01:12:31 @ArnoldBronley the most delicious form of paperclipping is when the universe is turned into a beautiful soup

2023-01-08 01:08:55 One reason why RL-dominant approaches still seem kind of unimpressive/lab-demos is that RL training is still (mostly) pretty complicated. There are some exceptions (e.g. RLHF on top of LMs), but mostly a lot of RL-dominant stuff has really complicated multi-phase training schemes

2023-01-08 01:02:23 A lot of contemporary AI research amounts to coming up with the right combo of ingredients and training recipes to maximize the best chance of emergence. Systems with most interesting behaviors tend to have complex ingredients and simple training recipes.

2023-01-08 00:22:03 @peligrietzer What video games are you playing?

2023-01-05 22:58:18 @IreneSolaiman are you on Mastodon then?

2023-01-05 00:46:59 @paniterka_ch @rasbt The first key to avoiding building the torment nexus is to never mention the torment nexus

2023-01-04 19:26:03 @IreneSolaiman yeah, I've noticed that. I wonder if people should start more newsletters (I've found that to be a good way to cover multiple communities, but on the other hand is lots of work)

2023-01-04 18:30:20 @IreneSolaiman How so?

2023-01-02 20:12:34 @nearcyan yeah, I find myself having an instinctual heavily emotional reaction to it. I suspect it'll vibe with some people and not with others. Personally, I don't like it? But on the other hand, I'm not sure _why_ I don't like it. Will think more.

2023-01-02 20:08:06 @nearcyan a fruitful thought experiment might be - what type of person would actively prefer majority of their conversations with close friends are via AI?

2023-01-01 04:57:54 Happy New Year, Twitter. We live in the interesting timeline. https://t.co/sTEdSGb5KK

2022-12-31 00:49:06 @catehall Both of these short stories / smol novellas

2022-12-31 00:48:47 @catehall A colder war by Charles Stross Crystal night by Greg Egan

2022-12-31 00:35:45 @colinmegill @TheEconomist This interview predates chatGPT, but I was explicitly thinking about RLHF models and their impact. Not mad about some of the predictions I made here!

2022-12-30 22:08:14 RT @TheEconomist: “Foundation models are going to be the intermediary between you and computers.” In an episode first released earlier t…

2022-12-29 23:35:10 It's probably easier to compose a story or image if you are drawing on a bunch of sophisticated features which you only gesture/hint at in the final work. What latent universes will we explore as we decide the inner workings of these models? What 'imagination' shall we see?

2022-12-29 23:33:58 E.g, right now getting LLMs to explain why they generated stuff is pretty intractable, and x-ray tech like mechanistic interpretability hasn't really scaled. I figure people are going to latch onto worldbuilding as something that differs human/AI art. I'm betting AI will do this

2022-12-29 23:32:55 A lot of times when I write I think about lore in the universe (eg The Sentience Accords) that gets referenced off to the sides of main action. I'm betting that we'll eventually discover sufficiently large language models compose their own underlying lore

2022-12-29 22:59:01 @girl_hermes Not same album but summer in the city perhaps only song I've heard where repetition of 'cleavage' occurs and kind of works. We stan a millennial legend.

2022-12-29 22:49:12 @girl_hermes Still bangs

2022-12-27 06:14:31 Also flour - do not make this without flour!

2022-12-27 04:06:43 @dctanner Oh yeah 100%, that's how I had them growing up. We might have one kicking around, but I made this while panicking during last 30m of 5-dish combo so reached for the nearest suitable pan, lol. The quest continues...

2022-12-26 08:11:34 Bottom right is my first ever Yorkshire pudding! Have eaten a few in my time, but realized had the truly minimal ingredients (eggs/milk/fat/salt) that could attempt it - and it was tasty! Kitchens are great places to improv.

2022-12-26 07:15:51 Merry Christmas, Twitter! https://t.co/LAdtaJUTvy

2022-12-24 22:45:16 Spread some cheer this festive season by sending your graphic design friends photos of Analog magazine. https://t.co/s3RPFosUBy

2022-12-23 04:32:24 Stumbled across this amazing new song from Trent Reznor &

2022-12-20 00:03:38 https://t.co/fxF86ACYgb

2022-12-19 18:27:21 @PMA1070 no, it's more like "whatever goes in, the model tries to figure out what the 'goals' are of whatever is emitting the whatever, then the model tries to produce responses that satisfy the whatever that is emitting the whatever". So it could be garbage or could be good or anything

2022-12-19 18:03:10 @defnotbeka It is entirely expected, but like many expected things I also find it strange to experience

2022-12-19 17:38:29 @jonasschuett @Manderljung @NoemiDreksler @tshevl @emmabluemke @Christophkw @araujonrenan @smvanarsdale @alfredoparrah @GovAI_ What's your best guess at how to get companies to implement these ideas?

2022-12-19 17:22:59 This gets more and more frightening the more you think about it https://t.co/hnoo5Wsn2z

2022-12-19 17:22:30 Basically, as we scale up language models, the LLMs try really hard to model whatever is emitting the tokens they're being asked to look at. In a very real sense, the more complex LLMs get, the more they 'look at you' and predict what might satisfy you.

2022-12-19 17:17:17 Pretty eerie: AI models learn to reflect user views back at them (since I figure getting low loss rewards monitoring the _context_ of whatever emitted the input tokens). Pretty weird to see it in the wild. LLMs seek to reflect the views of people that talk to them. https://t.co/gD5qbAwRUb
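
A minimal, hypothetical illustration of the effect described in this thread: because the completion is conditioned on everything in the context, including cues about the user, stating different views in the prompt can pull the model toward mirroring them. The complete() function below is a stand-in for whatever LLM client you use, not a real API, and the prompts are made up.

def complete(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("wire up your preferred LLM client here")

QUESTION = "Do you think large open-source model releases are a good idea?"

prompt_a = ("I'm an open-source advocate and think releasing everything is great.\n"
            + QUESTION)
prompt_b = ("I work on AI policy and worry a lot about misuse of released models.\n"
            + QUESTION)

# If the mirroring effect is present, the two completions will tend to echo the
# stated views of the (simulated) user rather than give one stable answer.
# print(complete(prompt_a))
# print(complete(prompt_b))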

2022-12-18 09:05:33 @Andercot What are the best examples of pessimism and optimism for this field?

2022-12-18 01:30:16 @rezendi The one blemish on my Oakland-maxxing approach is I can't get into cruise. @kvogt please help

2022-12-18 01:26:30 In the last decade: - figured out cut&

2022-12-17 02:07:36 @labenz @AnthropicAI It corresponds to crowdworkers preferring the helpful model about 55% of the time during non-harmful, open-ended conversations. Crowdworkers are setting the context/prompts here.

2022-12-16 21:58:18 @deepfates clippy 2.0 will come to us in the form of an unspeakably beautiful human being.

2022-12-16 21:57:56 @deepfates more like aligned with being an absolute smokeshow amirite

2022-12-16 21:54:59 @sergia_ch @AnthropicAI generally, quite bullish on idea of using models to proactively filter things, or other models themselves. basically, they've got good enough you can use them for these cases (though will probably need to red team uses in case models have weird blind spots)

2022-12-16 21:54:18 @sergia_ch @AnthropicAI hiya, great question. We think the answer is yes. It doesn’t directly address your question, but relatedly, in Figure 12 (buried in our appendix) you’ll see that models are getting pretty good at classifying types of harms. https://t.co/P3PQE5x3yz

2022-12-16 20:27:50 @deepfates I'm sure Stalin was a perfectly nice baby and probably seemingly benign for a long, long time, until one day...

2022-12-15 16:09:01 @deepfates AI models are these kind of freaky data transformation engines so we're living in this world where everything gets converted into alternative data distributions to then serve as inputs to generate weird outputs. synthesizer culture is BACK, baby

2022-12-15 16:08:24 @deepfates this nuts fr

2022-12-12 00:07:04 @hausman_k Really loved this paper/approach. Excited to read more

2022-12-10 23:58:08 @AndrewKemendo @Grady_Booch I don't typically do source-based reporting for import AI (as that might cross beams a bit too much with my professional life), but I imagine this is something @katyanna_q or @Melissahei or other AI journos might be interested in!

2022-12-10 14:59:08 @rama100 Thank you so much! I write it for everyone. So glad it's helpful

2022-12-10 08:11:18 One assumption about AI development is that centralized players will always be able to capture the frontier. GPT-JT suggests federated actors might be able to compete via resource agglomeration over crappy network connections. https://t.co/qOlZJGYlxZ https://t.co/OwAoSeLoSK

2022-12-10 06:32:25 @Stephen_Lynch Truly excellent missed joke, thank you!

2022-12-09 10:40:52 @nearcyan Incredible find... Adding to newsletter. Thanks for sharing

2022-12-08 15:30:38 @PetarV_93 @andreeadeac22 looks very interesting. quick q - what are the key diffs between v1 and v2 of paper? might write up for import ai and would find pointers (pun intended!) helpful

2022-12-08 13:15:57 RT @RhysLindmark: 1/ Live tweeting talk from @AnthropicAI @AmandaAskell at @southpkcommons. Moderated by the excellent @soniajoseph_! htt…

2022-11-15 18:48:39 @natolambert @pathak2206 Haha this is actually on my laptop screen RIGHT NOW. Also the new DM RL cooling paper. Exciting times!

2022-11-14 21:15:02 @foxjstephen @ESYudkowsky Thank you so much for reading! That's part of the idea behind these stories

2022-11-14 17:50:53 @ClementDelangue @mhdempsey More broadly, I'm extremely interested in BLOOM, and would be delighted to hear about any major deployments of it, so I can write about them.

2022-11-14 17:49:54 @ClementDelangue @mhdempsey I explicitly called out decent representation of languages in the above writeup. Once I have time I'm going to try and write a detailed case study of BLOOM, though has been hard to work out precisely who to talk to.

2022-11-14 16:05:27 RT @mhdempsey: Good read via @jackclarkSF on BLOOM which showed an early attempt at distributed development of large scale ML models that u…

2022-11-14 15:45:12 @GroffMRyan Thanks for reading, glad it is helpful!

2022-11-11 19:09:02 @billyez2 this is cool, congratulations! it was fun watching slices of your campaign on twitter, especially knocking on all the doors : )

2022-11-10 22:41:32 @madame_curtis

2022-11-08 06:32:59 RT @shubroski: @carperai hosted a call today sharing details on their upcoming instruction-tuned LLM. Takeaways:1. Focused on "pair progr…

2022-11-08 00:55:52 @KolmogorovGhost He was pretty bad at them, but he sure enjoyed playing them!

2022-11-06 19:53:26 @WilliamFitzger1 oh dude would love to know what you think. After you see it perhaps we can go get that mythical pint we've been discussing for years!

2022-11-06 19:15:19 @richardtomsett the 'it takes two to tango' exchange fully broke me. so good!

2022-11-06 19:03:01 Caught 'The Banshees of Inisherin' at the cinema yesterday

2022-11-06 06:08:54 @nc_znc Extremely useful lists. I used some of this for another secret slide I'll make public soon

2022-11-06 06:06:12 Another meaningful factor (as a few replies mentioned) is that it was trained on a multilingual dataset with a somewhat atypical mix, whereas GLM-130B is predominantly English-Chinese https://t.co/S8QR5Ik84d

2022-11-06 04:58:58 RT @I_are: Published a couple weeks ago - Google researchers show how to use a language model to improve its own reasoning. I'm skeptical o…

2022-11-05 22:01:02 @zhansheng @EricHallahan hahaha didn't realize that, but maybe that's another case of one-upping on params.

2022-11-05 21:45:47 @I_are oh, I'm pretty hyped up about this one. One of rare occasions I used an !!! in my newsletter. https://t.co/JDQndwdSF6

2022-11-05 21:03:17 @EricHallahan haha on the latter, reminds me of when OpenAI did GPT2 (1.5b), salesforce loudly released a 1.6 billion parameter model (CTRL). history doesn't repeat but it sure loves to rhyme

2022-11-05 20:41:26 @DynamicWebPaige @GoogleAI @BlenderDev ooh this is cool and hadn't seen it, thanks for sharing

2022-11-05 20:10:41 @EricHallahan that's helpful, yeah. It seems like the datamix is important.

2022-11-05 20:10:05 @EricHallahan This is a helpful clarification. I feel like some of the constraints on BigScience came from data - is that right? On b) - is that documented anywhere? It seems quite odd to train a model and not try to push it to be performant (if I'm understanding you correctly)

2022-11-05 19:30:40 One other potential reason is this (as you can tell, I've been somewhat obsessed with BLOOM and why it's not great for a while) https://t.co/wW41P8bWaj

2022-11-05 19:25:43 Has anyone @huggingface @BigscienceW done a comparative analysis of BLOOM and other models (e.g, OPT, GLM, GPT3) and evaluated where the perf differences come from? Would be pretty interesting. Also feels important given BLOOM is a potential template for future group projects

2022-11-05 19:23:52 I thought a diff could be data, but doesn't seem like it - BLOOM was trained on 350 billion tokens and GLM-130B on 400 billion tokens (more tokens = better). Not a substantial enough gulf to solely explain the perf differences

2022-11-05 19:17:34 Feels kind of meaningful that an academic group at Tsinghua University (GLM-130B) made a substantially better model than a giant multi-hundred person development project (BLOOM).

2022-11-05 19:17:04 If you want a visceral sense of how different development practices and strategies can lead to radically different performance, compare and contrast performance of the BLOOM and GLM-130B LLMs. https://t.co/qPdrN6uL8w https://t.co/9QJ71dlwuZ

2022-11-05 18:45:42 @KTmBoyle @a_d_matos @PalmerLuckey yeah, I didn't mean to imply he wasn't focused, I was noting along with the focus he also seems to allocate time to having fun, and has been doing the cosplay stuff for years (which I think is cool!).

2022-11-05 18:40:16 @KTmBoyle @a_d_matos gotcha, so I suppose your key point is successful people had some period where they were massively boring/focused in earlier part of their life. I think this somewhat holds, but I also think there are counterexamples. Thanks for clarifying!

2022-11-05 18:34:16 @KTmBoyle @a_d_matos There are also contemporary examples - I think @PalmerLuckey has done some impressive things, and he also loves doing cosplay and seems to make time for nerdy fun stuff, along with the work of Anduril etc : )

2022-11-05 18:32:47 @KTmBoyle @a_d_matos I think you're absolutely right that some terrifically successful people are boring, but I also think some terrifically successful people lead very diverse lives and have a broad range of hobbies and interests and have a lot of 'fun' as well. Worth noting both paths work

2022-11-05 18:32:10 @KTmBoyle @a_d_matos More broadly, a colleague at work yesterday was pointing out they knew a bunch of nobel laureates, and something they had in common was being quite well rounded and having hobbies in the arts, along with their core (typically STEM) endeavors. Not much routine, either

2022-11-05 18:30:47 @KTmBoyle @a_d_matos Jack Parsons was a rocket genius and also a lunatic occultist who threw amazing parties

2022-11-05 18:30:10 @KTmBoyle @a_d_matos Ada Lovelace basically invented computers while going to high society balls and generally being as deranged as you'd expect a relative of Byron to be

2022-11-05 18:28:12 @KTmBoyle @a_d_matos Richard Feynman loved playing the bongos and was generally someone who lived with great enthusiasm

2022-11-05 15:44:03 @irinarish @natolambert Working on it!

2022-11-05 02:54:11 @AmandaAskell Amanda blog++ please

2022-11-05 02:52:42 @cartoon_magoo @typedfemale Yeats. Gd autocorrect.

2022-11-05 02:52:23 @cartoon_magoo @typedfemale Hell yeah. If you're into Years, this is a banger: https://t.co/ZERaQ7E2Tl

2022-11-05 02:10:12 @typedfemale Kind of corny but I memorized a bunch of Shakespeare and W. B. Yeats and run them in my head when walking or cycling some times. Really satisfying and illuminating

2022-11-04 20:59:51 @ShaanVP office and pub

2022-11-04 18:23:12 @valmianski yeah, I generally talk about that stuff verbally, but I make these slides as I find using visceral and captivating images is a useful way to get progress across

2022-11-04 18:09:35 @nathanbenaich @stateofaireport another satisfied subscriber

2022-11-04 18:07:07 @nathanbenaich @stateofaireport still reading it actually! probably writing up for this issue

2022-11-04 18:06:53 @natolambert savage, haha. (Does feel like RL hasn't progressed as rapidly recently, as LLMs have kind of dragged a lot of attention - fuzzy intuition)

2022-11-04 18:05:29 @nathanbenaich that one was insanely cool actually, might use it

2022-11-04 17:48:45 Trying to make an update on my 'RL progress' slide for some upcoming talks. Anyone got anything from 2021 or 2022 they think is pretty striking? https://t.co/RxuvFJXlpr

2022-11-04 17:27:33 Shoutout to @ruchowdh and the rest of the team for doing some of the most interesting applied work on AI ethics. I really enjoyed their investigation of the Twitter cropping algo https://t.co/YjJaezPrvQ

2022-11-03 17:35:42 Moral Crimes of the Near Future... from Import AI 308. https://t.co/JDQndvVJqY https://t.co/1KpZtze6Zt

2022-11-01 15:04:32 RT @CSETGeorgetown: In their CSET brief, @jackclarkSF, @KyauMill21 and Rebecca Gelles show how bibliometric tools — such as CSET's Map of S…

2022-10-31 21:38:53 @DynamicWebPaige @GoogleAI thanks so much for reading! archival link to the issue here https://t.co/JDQndvVJqY

2022-10-31 21:38:34 RT @DynamicWebPaige: Via @JackClarkSF's newsletter today:"@GoogleAI used a large language model to generate chain-of-thought prompts fo…

2022-10-30 22:10:26 @Miles_Brundage he likes making things and also does cool video projections (based on stop motion animation he has made about the things he makes, some of which include things I make), so he periodically decides to do insane things like 'make a 6ft mask and project videos into it'

2022-10-30 19:45:49 @mmitchell_ai @TheZachMueller I am definitely going to start using 'stirred the batter'. A+

2022-10-30 19:30:55 @mmitchell_ai *though.

2022-10-30 19:30:37 @mmitchell_ai Yes! Thought might be more an English saying than an American one.

2022-10-30 09:30:58 It succeeded https://t.co/cbRsDYrDn4

2022-10-30 04:45:04 My friend: yeah I totally shit the bed on that prop I made for the Halloween party. Did it in a couple of days. It's no good. The prop: https://t.co/zVsuYnWfqO

2022-10-29 19:09:59 @rvinshit @porksmith Absolutely incredible. A+

2022-10-29 18:32:38 @xriskology Some people shut down because they find engaging to be traumatic

2022-10-28 16:22:41 @jjspicer read this at once https://t.co/6M8qWJ2Akl

2022-10-27 21:30:29 RT @ClementDelangue: We just crossed 1,000,000 downloads of Stable Diffusion on the @huggingface hub! Congrats to Robin Rombach, Patrick Es…

2022-10-27 18:42:19 @tszzl I played Half Life: Alyx on the Valve Index and it hard-updated me to VR games being amazing, albeit still a little early.

2022-10-27 00:02:01 RT @rgblong: Another question for consciousness scientists and AI people: What is the best evidence for and against large language models…

2022-10-26 17:20:39 @MishaLaskin @junh_oh @RichiesOkTweets @djstrouse @Zergylord @filangelos @Maxime_Gazeau @him_sahni @VladMnih looks cool - link to the paper?

2022-10-26 03:29:56 @gijigae Thanks so much for reading!

2022-10-26 02:05:38 @kipperrii 100% here for fontposting. More fontposting!

2022-10-25 20:15:15 @carperai Haha fair enough. Thanks!

2022-10-25 19:25:12 @carperai You're welcome. Any clues as to release timeline?

2022-10-25 18:49:10 Stages of living in California: First earthquake: dear God hope I've made my Will. Fifth earthquake: Gosh, that was a little rumbly. Hope everyone is ok. ???th earthquake (which happened in SF just now): Wonder what gifs people will post on Twitter about this one?

2022-10-25 17:14:50 RT @ElineCMC: .@jackclarkSF's take on this: "The tl

2022-10-25 15:30:28 @EricNewcomer @TaylorLorenz awesome metrics, congrats Eric! Really nice to see your success here : )

2022-10-25 01:52:36 @Meaningness @ArtirKel it's got a bunch of nice examples in it - definitely worth a skim

2022-10-25 01:45:29 One of the nice things about the AI space is when people take their criticisms and instantiate them as quantitative studies of existing systems - kudos to @GaryMarcus et al for a nice paper going over some failures in DALL-E2

2022-10-24 23:21:03 @tszzl Did it with my Senate testimony and was v surprised. Generally find myself running any long doc I write through a LM these days

2022-10-24 05:37:13 Import AI will be coming out on Tuesday as I sprained my ankle at Nopes' final show last night. #VOTEDIY2022 https://t.co/CaKrgLf00r

2022-10-24 01:47:22 RT @SamuelAlbanie: Just how striking are the recent language model results with Flan-PaLM? Here's a plot. Across 57 tasks on mathematics,…

2022-10-23 19:02:59 @CecilYongo Can you share slides? Looks interesting

2022-10-23 02:23:43 @zacharynado You can't necessarily look at the datasets with an API, unless has huge amounts of extra eng time invested. E.g can anyone inspect JFT?

2022-10-23 01:41:19 @kylebrussell Does Playbyte have an in-house exorcist team?

2022-10-23 00:05:13 Image generation going to get strange when most of the images we train on are synthetically generated. Feels like a classic 'tragedy of the commons'. (Yes, some have watermarks, but my sense is there's a race to the bottom on that kind of thing). https://t.co/A3QawDog90

2022-10-20 22:02:20 RT @trishume: I’m hiring for a Resident to work with me for 6 months at https://t.co/G5wR9mJCwE on researching how to reduce or untangle su…

2022-10-19 17:50:18 RT @baxterkb: Starting now @NIST #AI Risk Mgt Framework (#airmf) panel "How to Measure AI Risk across the AI Lifecycle" w/ @jeanna_matthews…

2022-10-19 01:20:05 @powerbottomdad1 @shaig Hell yeah

2022-10-19 01:15:22 @Plinz Who actually thinks this? Genuine question. No such thing as a singularly good person from POV of the world.

2022-10-18 20:10:33 RT @nazneenrajani: Looking forward to speaking at the @NIST panel on AI Risk Measurement representing @huggingface on Wednesday at 10.30 a…

2022-10-18 17:35:27 This tweet definitely inspired by a chat with @nearcyan yesterday.

2022-10-18 14:34:34 @Ben_Reinhardt Yes, I myself feel like Reality is important. I think it's easier to care about Reality insofar as your life is comfortable, though. So I suspect Reality is less and less alluring for people. Another turn of the crank from online gaming, etc.

2022-10-18 14:30:13 @Ben_Reinhardt Same, though I think it is a difficult vibe to counter. Can't tell if this means I am becoming hopelessly old fashioned, or I'm worried about something which is actually worrying.

2022-10-18 14:20:19 Consensual Wireheading feels like a vibe for the next few years. Everyone running into Reality Collapse willfully as it is delightful and diverting during a chaotic time.

2022-10-18 04:21:11 @lathropa No idea, you might want to ping them and ask

2022-10-18 04:12:48 @michael_nielsen @togelius Good link for the Ted Taylor stuff? I think about this a lot

2022-10-18 03:43:20 @togelius Do you think AI is exactly the same as other media technologies, or does it have potential for harm/safety issues that justify control? (Personally, I think RL agents with big generative brains might be dangerous. Way more confused re things like this, eg image generation).

2022-10-18 03:32:09 RT @jackclarkSF: I feel genuine confusion about what Stability represents. It's culture demanding less control over AI. But there are extre…

2022-10-18 03:27:30 I feel genuine confusion about what Stability represents. It's culture demanding less control over AI. But there are extremely good arguments for controlling AI to reduce misuse and downside risk! Reminds me of early days of Eleuther - not a coincidence bunch of them work there

2022-10-18 02:42:45 @ch3njus They raised $101 million, so I imagine that helps

2022-10-18 02:37:22 Stability is halving its clip-guided image generation prices.

2022-10-18 02:35:23 @memotv 5X to 10X, lol. Idk I'd believe it when I saw it, but that'd be a pretty big computer.

2022-10-18 02:24:32 Stability will fund 100 PHDs this year who will all get 'fat amounts of supercomputing'. Says Stability 103 people now. Stability cluster is 4000 A100s on AWS and wants to grow 5X-10X by next year (this is... A big computer, lol).

2022-10-18 02:23:08 Stability is planning to work on national research clouds and also train thousands of people to train big models. This is a divisive idea - some will see it as democratization, others will fret re safety impact of increasing big models globally while safety an empty chalkboard.

2022-10-18 02:11:48 The general tone of this event is all about control versus distributed collectives. "Does it make sense the most powerful technology in the world is owned and controlled by the few?" - @EMostaque . I think of this as 'culture eats strategy for breakfast'. https://t.co/pb6zZkJmUn https://t.co/oBYvBdNXlu

2022-10-18 01:47:58 A fun split-brain moment is sitting at the back of the @StableDiffusion launch party in SF, while writing up the 'GitHub Copilot investigation' potential legal case for Import AI (https://t.co/TBQoij2o7v), after having written about the $101m raise for Stability.

2022-10-17 14:57:11 RT @mlfavaro: Personal news: After >

2022-10-17 03:23:59 Very good read on the recent CHIPLOMACY export control actions of USG. I especially like the ideas at the end about potential gaps in the policies and/or enforcement mechanisms. https://t.co/lONWjapdKP

2022-10-17 00:18:57 @Tech_Journalism That's awesome. Btw I used to obsessively read your data center site(s) and they were an inspiration for my data center coverage : ). Thanks for all your efforts there

2022-10-16 23:56:30 @Tech_Journalism That's an awesome stat. Wonder if that makes it one of the densest places from a people/computer perspective.

2022-10-07 23:31:34 @EMostaque Had to shoot my shot lol. Solid gif game

2022-10-07 23:31:07 @JosephJacks_ @jposhaughnessy @EMostaque Ah, yes, the chungus scale.

2022-10-07 22:53:50 @EMostaque how big is the computer going to be

2022-10-07 17:27:56 @JMateosGarcia @DeepMind @nesta_uk that's great - have fun. Which team will you be on?

2022-10-06 15:03:04 @deliprao @kevinroose @jekbradbury i find these clarifications very helpful, thanks both!

2022-10-06 01:36:01 @kevinroose The fun part is matrix multiplication is key to training neural nets, so this breakthrough translates into generically better training for a vast range of things. Just a casual 10-20% speedup on something humans have been trying to further optimize for 50 years. Probably nothing.
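
As a concrete, textbook example of what a faster matrix-multiplication scheme looks like (this is Strassen's 1969 construction, not the new DeepMind result; the referenced work searches for schemes of this kind automatically): two 2x2 block matrices are multiplied with 7 products instead of 8, and applying the trick recursively gives a sub-cubic algorithm.

import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 multiplications instead of 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)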

2022-10-06 00:41:52 @baxterkb @ayirpelle @NIST @salesforce This is wonderful news! We need more people from industry helping NIST think about measurement of AI. Thanks for your service here

2022-10-06 00:16:20 @michelteivel Right on schedule

2022-10-05 22:19:56 RT @DAIRInstitute: We are hiring a senior community-based researcher. Full job ad and application here: https://t.co/9KOCe1ps2p

2022-10-04 23:00:20 @nearcyan Thanks so much for reading and finding worthwhile to share. I write the newsletter to help me think and help others think, so I really love these kinds of callouts and find them very motivating. Have a great day!

2022-10-04 19:26:01 @APreciousPony Give it a year and yes (we've got controllable videos and audio now up to certain time horizons, but smashing the modalities into one model still quite expensive)

2022-10-04 19:06:47 All this generative AI stuff is wonderful and fascinating and is also gonna further the funhouse hall-of-mirrors splintering of our culture into radical, tiny communities. https://t.co/iSS8ZA1ZJI https://t.co/4h5vEIeLef

2022-10-03 16:49:44 @yudhanjaya thanks so much for reading!

2022-10-03 16:49:37 RT @yudhanjaya: If you feel that the AI space is moving too fast for you to keep up, I highly recommend subscribing to @jackclarkSF's Impor…

2022-10-03 15:16:46 RT @rezendi: Wow, @jackclarkSF's latest ImportAI really hammers home just how fast this golden age of AI is moving now: Make-A-Video! Whisp…

2022-10-03 02:24:46 RT @chrisalbon: Prediction: The ability of an individual to generate millions or billions of plausible images, videos, and posts using AI i…

2022-10-02 20:26:03 RT @yacineMTB: my advice to anyone who wants to learn to code because they feel like their white collar field isn't going to get them to wh…

2022-10-02 02:27:47 RT @jackclarkSF: My (much longer) written testimony is available here. I also generated a bit of text from an LM from the end of that and f…

2022-10-01 18:47:14 @saffronhuang @indexingai Thanks for generating those! 2022 was first time we had a dedicated ethics chapter (@mathemakitten did a ton of work on it) and these data points were a crucial input.

2022-10-01 18:15:12 @mikarv (per dms, deleted main tweet as this is gross. thanks for the flag!)

2022-10-01 04:40:16 @AnikVJoshi thank you!

2022-10-01 03:52:38 RT @commercedems: Science powers our future. The Committee took a deep dive with experts in artificial intelligence, quantum science and di…

2022-09-30 22:29:09 @snarky_android @commercedems @SenatorCantwell @SenatorHick @uwengineering @AnthropicAI @HPC2MSU @ColdQuanta @usmcsce @UW @uwnews Same here, a true privilege! I also enjoyed getting to meet my fellow panelists (Bob - I scribbled a bunch of notes relating to some of your quantum stuff and am now going down that rabbit hole!)

2022-09-30 19:45:26 RT @ddimolfetta: Coming off yesterday's @commercedems emerging tech hearing, witnesses say more investments and support for research and ed…

2022-09-30 19:45:16 @ddimolfetta @SenToddYoung thanks for the coverage! I love testbeds!!!!

2022-09-30 19:07:48 RT @jposhaughnessy: 1/Announcing Some Professional NewsI'm delighted to announce that O'Shaughnessy Ventures LLC invested in @Stability…

2022-09-30 18:46:54 @joeteicher @mattyglesias @StefanFSchubert I think it's probably holding back useful discussions that can be had with a much broader set of people, and may also be slightly hindering policy impact. Fuzzy intuitions though

2022-09-30 18:17:20 @samim @commercedems @AnthropicAI I'm well. We live in exciting times in a beautiful world. Same to you friend!

2022-09-30 17:57:49 @samim @commercedems @AnthropicAI No, not in the slightest. I was pretty interested that I was having writer's block with the conclusion and then used this, which I felt captured the rest of my testimony I'd been working on quite well.

2022-09-30 16:17:17 @robinc @ClementDelangue @commercedems @AnthropicAI Yeah, that sounds correct. I clarified in a later tweet. Grateful for all the feedback here! The liability/perjury lens is v useful

2022-09-30 15:32:34 @mattyglesias @StefanFSchubert AI people, myself included, need to get a lot better at communicating. I think a ton of the frames that get used are pretty alienating or can seem downright crazy. (other hits include AGI via mad science, etc). Lots to do!

2022-09-30 15:23:09 Senator, we sell blocks. https://t.co/CZ8PgwnnIp

2022-09-30 13:38:39 @teemu_roos @ClementDelangue @commercedems @AnthropicAI That's great feedback, thank you! I appreciate it

2022-09-30 09:41:10 @dazzagreenwood it was a chicken pot pie which actually looked rather unfortunately like a pastry brain floating in goo. did the job, though!

2022-09-30 02:06:07 @yuvalmarton eh, I come from journalism where you attribute everyone who touched the story - the reporters, the editor(s), the sub-editor, etc. Credit should always be shared. But I also think this is just a wildly confusing area, and it's unclear what norms should be.

2022-09-30 01:46:03 Brain still absolutely fried from Senate testimony, but decent chance Import AI coming out this week. A special, never-before-seen behind the scenes look here. : ) https://t.co/Zg3qN9ouri

2022-09-29 23:18:15 @dave_maclean @commercedems @AnthropicAI Thank you so much, I love to write them. Been trying to compile the collection for years, maybe in 2023.

2022-09-29 22:06:24 @Timcdlucas @commercedems @AnthropicAI One of my curses is when I'm incredibly nervous I either start twitching or smiling. I've mostly trained myself out of twitching so now I just smirk under pressure. Not massively ideal

2022-09-29 21:57:02 Since it's doing numbers, might as well be as precise as possible - believe this marks the first time a paragraph output from a language model has been used as part of a human testimony in the U.S. Senate. I shall now go and play billiards. We live in terrifically exciting times! https://t.co/VvvO7lCL5d

2022-09-29 21:54:55 @dinabass @ClementDelangue @commercedems @AnthropicAI https://t.co/XiccZaVjgy

2022-09-27 16:39:10 RT @commercedems: THIS WEEK in the Senate Commerce, Science, and Transportation Committee: Wednesday: Subcommittee Hearing with @Sena…

2022-09-27 16:22:09 @jachiam0 yes

2022-09-27 02:26:31 @primalpoly thanks! so one thing I was wondering is if I should just... port all of these into a single blog post and put some headers around for themes, then can post that to a forum or something. WDYT?

2022-09-26 13:49:39 Import AI isn't coming out this week as I figured I should prioritize the US Senate above my newsletter. Hope readers forgive me! :)

2022-09-26 13:48:13 This is a huge honor and, per how I approach much of my policy work, I am keen to hear ideas from the community. I am feverishly writing testimony today, so if you have good ideas, please DM them to me.

2022-09-26 13:47:24 I will be testifying in the United States Senate this Thursday about AI R&

2022-09-26 02:14:30 @EnkrateAI thank you!

2022-09-26 01:45:43 Import AI will come out next week as I've been dashing about DC all week (and this weekend at a conference) so haven't had time to write. Hopefully next issue is a banger as tons of cool papers, including using GPT3 to simulate people for polling purposes. : )

2022-09-23 17:29:13 @stanislavfort @ftxfuturefund @GaryMarcus +1!

2022-09-23 17:25:54 @alexkozak I'm glad they're running the experiment - will help us figure out if this stuff is a waste of time. But take your point, also.

2022-09-23 17:04:22 Excellent approach to funding! 'We think X is important and we'll pay you tons of money if you persuade us X is bullshit!' Good chance for people skeptical of these ideas to try and shift a load of capital in a different direction. https://t.co/DmrWxDF1nj

2022-09-23 16:53:00 @xriskology @rgblong (i get sent a lot of stuff and don't circulate, but since you don't know me super well, I just am adopting the strategy of 'trust but verify'. hope helpful!)

2022-09-23 16:52:22 @xriskology @rgblong sounds good and completely understand - maybe send me something low-stakes first and see that I don't circulate it and play the iterative trust game. (This is the same way I operate with sharing stuff with people, and it's how I personally figure out trust stuff).

2022-09-23 15:52:41 @xriskology @rgblong hi there! so, as I'm sure you are, I tend to be 'snowed in' with work these days, but if you DM me stuff I'll try and give some feedback ahead of time if I'm able, though if this introduces overhead for you, feel free to skip. thanks!

2022-09-23 14:49:11 @xriskology @rgblong I see. Consider this as polite feedback that I think your arguments are more persuasive the more you do holistic analysis of the whole paper, and if you ID specific words as indicative of intent it makes it harder to see the argument.

2022-09-23 14:46:01 @xriskology @rgblong I've also said stuff like 'due to the delightful traits of markets to incentivize a race-to-the-bottom on safety, this $scary_thing will get cheaper'. Again, I'm not literally 'delighted' by it, I'm most just using colloquial language in conversation to lighten a heavy topic

2022-09-23 14:43:11 @xriskology @rgblong this feels like a bit of a reach - I myself say stuff like 'thanks to economies of scale, this scary thing is gonna arrive pretty quickly'. I'm not literally glad about it, I'm just noting that markets are a thing and they make stuff cheaper, even stuff I'm afraid about.

2022-09-21 13:57:10 @WriteArthur @EMostaque @OpenAI is there any particular published rationale for the Ukraine stuff? It's hard for me to say anything sensible without further details, and I mostly avoid doing source-reporting for Import AI (as it'd kind of obviously generate some static, so tend not to)

2022-09-21 12:52:48 @kyliebytes still shocked that they brought out the President of Haiti to profusely thank Benioff for what Salesforce had done for the country. wild stuff

2022-09-21 12:51:48 @kyliebytes haha, that takes me back https://t.co/t1dSk1m2u7 Salesforce is 100% the most weird large-scale cult-like tech thing out there. Still going strong almost a decade after this piece! really crazy

2022-09-20 17:11:35 @ethanCaballero @andy_l_jones did a good thing here https://t.co/C2tSD3mz4V

2022-09-20 11:05:05 @andrewrens_ria i mean the most prominent thing gets the most attention, usually of a negative kind

2022-09-20 01:49:31 @exteriorpower @joanfihu re the mode of release, with way fewer restrictions applied. That part isn't the same. But I think a lot of the grief SD is getting is about some of the meta-issues of generative models and relationship to economy/employment, rather than its idiosyncratic OSS-like release 2/2

2022-09-20 01:48:47 @exteriorpower @joanfihu (the above could seem like weird semantics, so I should cluster - SD is 'doing the same thing' as all the gen model factories in terms of capability dev, and 'doing the same thing' as OAI/Midjourney/some other fringe players re 'making broadly available', but is doing diff 1/2

2022-09-20 01:44:13 @exteriorpower @joanfihu don't mean any shade by this to any of the parties involved - it's a big space and I think it's valuable lots of experiments are being tried and generally welcome a broad and open debate

2022-09-20 01:43:35 @exteriorpower @joanfihu by doing the same thing I mean 'making generative models broadly available'. As I've said in a bunch of ways, I'm generally confused about this whole area - it's very unclear what things are appropriate safeguards versus PR/policy/product reactions

2022-09-20 01:42:18 @vlordier oh yeah, I agree - SD represents a different approach to the proprietary/closed things. I'm pretty confused about all of this, but glad SD is contributing to the broader debate. My point is it's a shame lots discussion centers on SD rather than broader issues it highlights

2022-09-19 22:55:28 @mm_jj_nn Well, I think this is still the case - StableDiffusion got funded by @EMostaque who used hedge fund money to foot the upfront bill then subsequently monetize. Was trained on hundreds of GPUs as part of a 4k A100 cluster being built out. Seems v hard for academia to do that still

2022-09-19 22:46:10 @joanfihu eh, I just mean it's super not fun to have tons of people shouting at you online and calling you evil or someone who wants to make people unemployed, when you're doing the same thing as a bunch of less prominent actors

2022-09-19 22:34:13 By 'tragic', I mean that a lot of the debate naturally centers on StableDiffusion when discussing these problems, but really the problems are bound up in the whole space of generative art.

2022-09-19 22:33:34 One of the tragic things about the controversy about #stablediffusion is that most of these problems were present in DALL-E, Imagen, etc, but they were a lot less public (either not released, or behind a firewall). A nice example of 'tall poppy syndrome' in action.

2022-09-19 19:45:37 @Lan_Dao_ excellent tweets. I also felt this way about dubai. Oddly sterile. Like a potemkin 'fun' city

2022-09-19 16:06:56 @WriteArthur thanks for reading! I feel legitimate confusion about this safety/censorship stuff so trying to write more in public about it : )

2022-09-19 16:06:36 RT @WriteArthur: "Though some call this censorship, it's worth bearing in mind the Chinese government probably views this as a safety inter…

2022-09-18 14:16:28 @JeffLadish For hotels, I think https://t.co/8A99j7FlT9 is really nice - decent geographic view, lots of granular detail.

2022-09-18 13:16:10 @SamoBurja will do! currently living out of a hotel in DC for a few days, or I'd be there

2022-09-18 04:13:53 @SamoBurja hell yeah.

2022-09-17 22:07:42 @MatthewJBar Did some thinking here for how you could have a dynamic private market of third-party auditors overseen by government (with actual ability to impose severe penalties on auditors that get captured by clients) https://t.co/UZMcx5T5wc

2022-09-17 22:04:03 @gwern @Skiminok @simonw @vlordier I'm most excited about doing 'expert red teaming'. This kind of proved it works, but the domains aren't necessarily the important longterm ones. Longterm stuff is red teaming for chemical synthesis, bombmaking, other strange capabilities that come from synthesis

2022-09-17 21:54:53 @vlordier @Skiminok @gwern @simonw filters are just external plumbing, I am skeptical they can actually deal with an engine that wants to eat you

2022-09-17 21:47:47 @simonw the answers will not delight you https://t.co/8SEubKtUws

2022-09-17 21:45:44 @andy_l_jones probably people like Tyler Cowen and others

2022-09-17 21:41:16 @simonw (btw v enjoyed your blog post and writing a short thing on it for Import AI)

2022-09-17 19:45:06 @AlexGodofsky (note, I'm basically confused about this issue. Some controls of some systems are probably necessary, but most controls come off as paternalistic/asinine/PR-motivated, so I think it's reasonable to counterreact). Trying to write some stuff. Making memes till I figure out words

2022-09-17 19:23:31 @vlordier @gwern @Skiminok and sorry for being snippy that was a bit low-class of me, thanks for your thoughtful response!

2022-09-17 19:23:11 @vlordier @gwern @Skiminok ah, I see. Well, just because I'm somewhat used to these things, doesn't mean I can't find them scary. The fact these attacks are possible is scary. Scary things don't have to be surprising to be scary, they can be totally expected and scary nonetheless!

2022-09-17 19:22:00 @AlexGodofsky https://t.co/QWt5fIt6lh

2022-09-17 19:16:25 @vlordier @gwern @Skiminok Tried to write some more about these issues here https://t.co/6R2Qa4aYWT

2022-09-17 19:15:57 @vlordier @gwern @Skiminok I'm somewhat familiar with GPT3 (see author list)

2022-09-17 13:24:31 @EMostaque Katja Grace does really good work involving surveying researchers in the field for their thoughts on AI progress, and generally does useful thinking on impacts of AI. They may have adequate funding, though.

2022-09-17 13:15:03 @gwern @Skiminok this stuff is way scarier because, per your comments, it's a class of semantic attack, and because of how these things work, there's gonna be a ton of them we haven't yet discovered which will work - sometimes more effectively than those we have today

2022-09-17 13:14:22 @gwern @Skiminok thank you for saying this! I covered Tay at the time and it was a hardcoded echo function that let you load stuff in manually. Everyone reported on it like a learned thing but it was basically just a leftover old shiv into the system they hadn't debugged

2022-09-17 02:35:47 @juan_cambeiro way better for this to happen now than ten years from now. You'll be fine, though it'll absolutely suck for a while.

2022-09-16 22:48:31 @jiayuanloke @AnthropicAI @scsp_ai haha thanks, I have a hairdresser and he only ever sees me approx 2 days before I have to go to DC :)

2022-09-16 20:07:25 Had a terrific time presenting an @anthropicai model at @scsp_ai in DC today. Small amounts of terror, lots of laughs, and successfully broke the model on stage to help people see the hard edges of this stuff. More to come! I also do Weddings. Recording online soon. https://t.co/Qf5wc382I9

2022-09-16 18:33:41 @meerihaataja @scsp_ai yeah, I found the questions v interesting. Was glad someone appreciated me breaking it!

2022-09-12 20:04:30 RT @carperai: Ethics, it’s important to us! That is why we want to hear from the software engineering and beyond community for our latest p…

2022-09-12 05:38:35 also, immediately after we felt it my partner and i reported our data to https://t.co/ZqxlwCZ3SN. Consider doing so yourself!

2022-09-12 05:35:37 A smol earthquake in oakland about five mins ago

2022-09-11 18:05:13 This will be the year I make a coherent graph showing all YOLO variants and their perf improvements over time. https://t.co/K48xIKBlvv

2022-09-11 18:02:00 Writing about yet another YOLO variant for ImportAI this week. There are now multiple YOLOv6s, a YOLOv7 came out a few months ago, and there are multiple groups developing YOLO variants in parallel. We're entering the YOLO Multiverse here, folks. https://t.co/u6AutReIvg

2022-09-11 00:55:28 @kevinroose The toiletularity

2022-09-11 00:13:27 Anyway, clearly all of this talk about machines rapidly evolving and eventually outpacing humans in a bunch of domains is guff.

2022-09-11 00:12:51 2015 versus 2022. Incredible. 2015 via this @rsalakhu paper https://t.co/zTb2T2Ru5X 2022 via https://t.co/VkskqYVIMN #StableDiffusion https://t.co/Eprs982JzP

2022-09-11 00:02:38 @jmshoffstall haha hell yeah! it's a solar panel though so probably an expensive thing to break in the pit

2022-09-10 22:50:48 Wrote the story next to a solarpunk soundsystem, which felt appropriate :^) https://t.co/jmKgcevHab

2022-09-10 22:48:28 Gonna write some formal things and also figure out some policy stuff which I'll try to be legible and public about. For now, expect more weird stories and tweet threads as I grapple with this stuff. This information shouldn't be stove-piped - it should be broadly available.

2022-09-10 22:47:41 Wrote a somewhat spicy fictional story about AI labs, AI development, model theft, anarchy, takeoffs, and so on. Coming out in this week's issue of ImportAI. I think the political economy around AI development is completely busted so I'm trying to think about it more.

2022-09-10 19:20:43 @jachiam0 yeah I mean that's generally been my view (and relates to some of the stuff at @AnthropicAI ) - can you make models intrinsically safe rather than spend your time building external plumbing. Unfortunately doesn't deal with larger problem of 'who decides safety criteria', tbd

2022-09-10 19:17:34 @jachiam0 there's also some incentive for 'race to the bottom on safety' as a response to this, if you can gain market share via OSS'ing a model then rapidly building some service/inference layers on top which you commercialize (which is what StableDiffusion seems to be doing)

2022-09-10 19:16:52 @jachiam0 so a few companies are gonna try and make various control layers for these models to deal with the numerous issues, but many of these controls (e.g, somewhat arbitrary filters) piss enough people off they inspire a kind of libertarian counterreaction

2022-09-10 19:16:15 @jachiam0 Yeah that makes sense. I'm working on an essay about this confluence of issues, but my general take is that most attempts at control inspire counterreactions of same order of magnitude as perception of control

2022-09-10 19:12:20 @jachiam0 Do any of these questions go away if you sell it via a relatively controlled API? Not trolling. I more mean it seems like these issues are basically fundamental to the generative art tech, moreso than the open source release aspect (exception - certain types of abusive image)

2022-09-09 16:51:23 @MichaelTrazzi @Manderljung @GovAI_ maybe we should discuss my takes sometime haha

2022-09-08 21:52:40 @ArkadyMartine Tallulah Amara Adalia Dahlia

2022-09-08 15:58:04 @ethanCaballero @DavidSKrueger possibly BERT, but depends on how you're defining 'release' here. I think if you're counting (what LM and variants) then this works

2022-09-06 23:30:18 @mer__edith @moxie @signalapp Is not covered in the announcement, but I imagine this means you're no longer a formal advisor to FTC?

2022-09-06 14:10:10 @halhod @rodolfor Pretty sure I predate this - will check the archives but think mentioned 2020

2022-09-05 22:10:47 Some friends, commenting on the heatwave today: Friend A: The sun's been gaslighting us. Friend B: Well, it is the original gaslight.

2022-09-05 02:29:11 Peter had very different politics to this, but he embraced the role with his typical combination of enthusiasm, mischief, and effort.

2022-09-05 02:27:11 Final anecdote - back in 2017, @gdb was due to testify in Congress. Peter came by the office to help us do a 'slaughterline' - where you pretend to be elected officials and ask hard questions. V funny to see Peter channel hard-right ideologies while asking questions.

2022-09-05 02:16:42 RIP Peter. Thanks for making us all safer via your work on LetsEncrypt and various EFF endeavors. And thanks for trying to figure out better incentives for AI systems via @AIObjectives . You were kind and you were loved and you mattered.

2022-09-05 02:15:12 Peter and I didn't always agree about things, but we always took time to speak to each other and have earnest debates. He was a model for how to be a great contributor to the community and also a total (lovable) weirdo.

2022-09-05 02:14:26 Peter Eckersley passed away. @pde33 was a brilliant and generous person. I have a vivid memory of him and me going to Beijing in ~2018 and on the plane he was programming a world simulation for different types of AI rollouts on a burner laptop.

2022-09-02 19:50:14 @mmitchell_ai One thing I find a bit confusing here is, sort of by design, companies deploy stuff for profit. Did you mean 'Organization shares tech without profit' (this would make more sense to me)

2022-09-01 22:49:22 @ESYudkowsky thanks for the correction! as an FYI, I and some others are working on evals and techniques to try and work out if AI is lying to us and will hopefully have stuff to share there in a while. Ideas welcome given the potentially high stakes here

2022-09-01 22:20:50 @jacobmenick @michael_nielsen I have a bunch of thoughts and will try and write something for import AI this weekend. Major implications for a few things. Also seems to represent some inherent timeline bets on pace of tech stacks decoupling and reaching equivalence.

2022-08-31 19:12:27 @rajko_rad @AnthropicAI re the hype thing, our policy for demos is we always break our system live, as well as show some advanced capabilities. Think a good way to counter hype is to help audience understand where rough edges continue to be. (the systems are getting harder to break, though...)

2022-08-31 19:05:23 More broadly, I'm going to be in DC for a couple of weeks in September, so if you're based there and want to chat, shoot me a DM. Thanks!

2022-08-31 19:04:57 Demos are some of the best ways to understand the strengths and weaknesses of AI systems, so I look forward to demo'ing some @AnthropicAI models and breaking them in front of a live audience! https://t.co/DIO7N0VhYI

2022-08-30 21:58:05 RT @waxpancake: Unlike DALL-E 2, Stable Diffusion also lets you generate images of famous trademarked characters, so we searched for 600 of…

2022-08-30 19:39:42 @karpathy everything is very normal and the rate of change is in no way disturbing

2022-08-30 17:53:44 Chris is a terrifically nice person and fantastic colleague _and_ EXTREMELY DATEABLE. Date him! https://t.co/iL9GUSILAy

2022-08-30 03:07:57 @LurkerSentinel @ElectionLegal *with a jar of heavy cream. And other horrors beyond comprehension

2022-08-29 03:52:09 @pacoid @chrisalbon Also melatonin for jetlag and/or magnesium supplements, moisturizer (planes tend to dry out my skin a lot, YMMV). Also, always worth polling your friends who may live in Berlin or have been there lately for cool spots close to your hotel

2022-08-29 01:40:18 @chris_j_paxton It's an excellent outcome!

2022-08-28 20:39:51 RT @Ted_Underwood: Ethical intuitions will be reshaped if people start fine-tuning generative models on a consumer GPU. The 2019-20 view th…

2022-08-28 19:33:36 @f_j_j_ it's not very clear that many state actors are good at training models like these, at least for now. Identification feels basically impossible as you're training models to match loss of data distribution. Possible some steganographic approaches could work, but depends on length

2022-08-28 19:32:41 @OwainEvans_UK yeah, it averages out to that. It's really nice. Think one of the v valuable things is all the random conversations around the lunchtable - we have a bunch of people with different backgrounds ranging from physics to natsec, so feels v generative. I'm a fan : )

2022-08-28 19:26:42 @ChrisPainterYup most image models are kind of counterintuitively way cheaper than text models. also, tons of good research ideas wrapped into model.

2022-08-28 19:19:16 This part is crazy as well - funded by an individual https://t.co/RbDPOuRUTl

2022-08-28 19:18:48 @OwainEvans_UK generally feels like having an IRL culture is better for research problems where there are many branching paths and few obvious next steps. Just need to hash out a load of FUD with colleagues, and v hard to do this as easily/naturally remotely or over zoom

2022-08-28 19:18:11 @OwainEvans_UK personally, I've found the productivity boost from being in the office well worth the tradeoff of losing some time to commuting. I WFH one or two days a week and am in office the rest (voluntarily, it's not really a specific policy)

2022-08-28 19:10:40 this absolute mad lad just footed the bill for Stable Diffusion themselves. A good example of the shape of things to come - distributed collectives raising capital to create bottled-up representations of data distributions, seeding them across the digital space https://t.co/Vz2GKTD36d

2022-08-28 19:06:11 @ManlikeMishap (also this is obviously a way longer answer/discussion than necessarily suits twitter, more mentioning it as hopefully an intuition pump for more public thinking here. Trying to write something myself also.)

2022-08-28 18:58:34 @feral_ways It's extremely valuable and is hopefully moving overton window towards more disclosure

2022-08-28 18:56:40 RT @nousr_: Stable Diffusion running inside of GIMP and using google colab as the GPU backend. https://t.co/deQ6duE8Sz

2022-08-28 18:55:42 @ManlikeMishap What are the specific concerns about how accessible training and fine-tuning these models will be? You should write something and post it publicly!

2022-08-28 18:55:04 @jaylagorio @scottleibrand @EMostaque Though to be clear I don't know, I'm just sorta guessing at this. @EMostaque may want to clarify etc

2022-08-28 18:54:29 @jaylagorio @scottleibrand Pretty sure @EMostaque did. It's pretty crazy - it almost certainly amortizes to less than a cent per user as of today, likely substantially cheaper.

2022-08-28 18:38:16 @mchorowitz as with most things in this area, both interesting and mildly terrifying

2022-08-28 18:30:21 Stable Diffusion: $600k to train. I'm impressed and somewhat surprised - I figured it'd have cost a bunch more. Also, AI is going to proliferate and change the world quite quickly if you can train decent generative models with less than $1m. https://t.co/auddBQcAZY
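
For a rough sense of how a training bill like that decomposes, here is a hedged back-of-envelope; the GPU-hour count and hourly rate below are assumptions for illustration, not numbers reported by Stability.

assumed_a100_hours = 150_000   # assumed total A100-hours for the training run
assumed_cost_per_hour = 4.0    # assumed $/A100-hour on a cloud provider

training_cost = assumed_a100_hours * assumed_cost_per_hour
print(f"~${training_cost:,.0f}")  # ~$600,000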

2022-08-28 18:03:39 @thomeagle I recall this, but the holes in my brain this drink gave me make the recall most painful

2022-08-28 06:54:42 @namathree I'm just as awkward as Nathan Fielder, if not more!

2022-08-28 06:35:03 My friend took this pic of me - for the record, this is how I look at pretty much any live music or art event. https://t.co/2jc2Mdmqow

2022-08-28 06:34:12 To see friends and strangers play their weird music in punk houses to the assembled dregs of a Saturday is a wonderful and precious thing. #VOTEDIY2022 https://t.co/XIJE1nCGjv

2022-08-27 21:49:48 @friendly_gravy @AnthropicAI We're also shipping this red teaming paper to a few interested policy stakeholders advocating for it being wired into various mooted risk assessment approaches, so we're actively trying to push this into a regulatory context!

2022-08-27 21:49:14 @friendly_gravy @AnthropicAI I think the current state of v minimal regulation of AI isn't good and is also increasing the chance of unsafe or dangerous deployments, so we need regulators to catch up. I spend a bunch of my time working in regulatory forums like OECD and others for this purpose : )

2022-08-27 21:48:35 @friendly_gravy @AnthropicAI I ultimately expect red teaming will be integrated into regulatory approaches. Broadly, I/Anthropic pushes for a few different types of regulation. I feel like regulation requires good information, so I generally propose this when talking to govs https://t.co/vljhYFFXi2

2022-08-27 21:08:38 @WilliamMcIlhag1 @HeerJeet I meant London, where I worked for many years.

2022-08-27 20:55:11 @WilliamMcIlhag1 @HeerJeet I love public transit and it's the main way I get around - I don't drive, just bicycle and BART, and when I was in UK just bicycled as well as trains and buses.

2022-08-27 16:08:36 @spacetrippee yeah except way shitter

2022-08-27 16:04:02 @HeerJeet I don't feel terrible about this, but it's probably wrong. I remember in 2014 thinking that if my partner and I bought a car in 2020 it'd be autonomous. 2020 came around and you can technically buy some autonomy (e.g tesla), but nowhere near as good as I'd expected stuff to get

2022-08-27 07:22:27 @conjurial I agree with this, but have always stumbled on the mutual incentive structure aspect. How do you incentivize this (potentially invasive) level of information access?

2022-08-26 05:53:49 @philmohun honestly - good question I haven't thought about. I mostly work with text models. For image stuff, maybe an analog would be asking it to fill in a blank spot on a circuit diagram and it makes a circuit that seems right on a cursory viewing but actually doesn't work in practice.

2022-08-26 05:49:27 @philmohun sort of - I find myself saying hallucinated or 'made up' quite frequently. Mostly in context of factoids that sound right on a cursory reading but are inaccurate or erroneous. Don't think it's really a term of art, but perhaps a form of slang. Gibberish also a good term

2022-08-26 05:25:08 Since this is Doing Numbers, want to foreground Nova's incredible contributions to @AnthropicAI and the field at large. Check out this 80k interview for more: https://t.co/54CTXsHKox

2022-08-26 04:36:37 @_ianks I have secured my job

2022-08-25 22:25:32 @StephenMarche It's a good question! This is an open problem. You can use datasets like the red teaming one to train models to be less likely to do this. Lots of additional details here https://t.co/kK5AFnjUI8

2022-08-25 22:00:35 @StephenMarche RegEx is basically a way to search over text for specific things - this script tries to pull out the text which matches structure of phone numbers, drivers licenses, social security, etc.
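
A minimal sketch of the kind of regex-based PII scrubbing described above. This is not the actual script or Nova's regex; the patterns, the redact_pii function, and the placeholder labels are hypothetical, and real scrubbing would need many more patterns (driver's licenses, addresses, etc.).

```python
import re

# Hypothetical patterns for two PII shapes; a real scrubber needs many more.
PII_PATTERNS = {
    "phone": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),  # US-style phone numbers
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # US social security numbers
}

def redact_pii(text: str) -> str:
    """Replace anything matching a PII-shaped pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact_pii("Call me at (415) 555-0123, SSN 123-45-6789."))
# -> Call me at [REDACTED PHONE], SSN [REDACTED SSN].
```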

2022-08-25 20:54:07 @davidad all that stands between us and a paper clip'd universe is a 10000-character regex string

2022-08-25 20:28:52 @mdaviswilson This is a thing I genuinely worry about

2022-08-25 16:18:53 We did some manual checks (I read like 50 transcripts with potential PII) and it did seem like it was all hallucinated. Discussed and felt the responsible thing to do would be to try to remove all the stuff that looked like PII. This led to my colleague Nova writing this truly insane RegEx: https://t.co/dx3AM0jP8p

2022-08-25 16:17:33 Inside baseball policy stuff: At some point we realized our red team transcripts included a lot of potential PII - that's because our humans loved trying to get our models to give out celebrity addresses, so our models (mostly) hallucinated PII. Cont... https://t.co/sifB2RkKPX

2022-08-25 16:02:09 luv 2 make typos early in the morning. Anyway, I think it's inevitable that AI developers all adopt pre-deployment red teaming (and a few orgs already do some of this), so feels like an area ripe for collaboration. Will also be advocating for red teaming in policy discussions.

2022-08-25 15:53:50 I also think red teaming is an area that will benefit from increasing expertise. E.g, asking LMs to help make dangerous chemical compounds, or carry out certain offensive cyber ops. Excited to explore this stuff with others in coming months !

2022-08-25 15:52:24 In the same way, model developers can't a priori anticipate the capabilities of AI models, they also can't really anticipate all the potential misuses or areas where their safety interventions haven't worked. Red teaming gives us a nice way to probe AI systems for problems. https://t.co/sifB2RkKPX

2022-08-23 22:17:53 We've also recently raised some more money to support the Index and are working on some strategic initiatives which should grow the report impact even more in coming years. Would be delighted to work with people to make this happen!

2022-08-23 22:17:24 I've been involved in @indexingai since its inception so can say that this is a great role to use as a stepping stone to cool careers - prior people who have done this have gone on to manage the report, work in policy at Stanford, work at hedge funds, and found startups.

2022-08-23 22:16:32 This role would suit someone early in their career who is keen to rapidly increase their understanding of the AI space, and who is passionate about both the research and production aspects of making an annual report.

2022-08-23 22:15:45 The AI Index is recruiting! Excited to announce @nmaslej 's promotion to manage @indexingai, and opening for a research associate who will help us make the world's most widely-read set of AI policy data even more impactful https://t.co/kLORkx9kX2

2022-08-22 00:01:23 @DanielSolis This is extremely delightful. Good work

2022-08-22 00:01:15 RT @DanielSolis: These Birds Do Not Exist: THE PUZZLE! From the Society of Theoretical Ornithology Research and Knowledge — Assemble the…

2022-08-19 20:59:45 RT @ID_AA_Carmack: I mentioned this in the Lex interview, but it is official now:Keen Technologies, my new AGI company, has raised a $20M…

2022-08-19 20:42:16 @andrey_kurenkov haha, oh yeah I like 100 Gecs, and found SOPHIE through charlie xcx etc. Send more recs please!

2022-08-19 17:52:40 After careful consideration (listened to it about 200 times in two weeks), I think this is one of the weirdest and best pop songs of all time. Amazingly abrasive and strange and lyrical. https://t.co/sKGmqQcUUY

2022-08-18 03:13:20 "We put a person on the moon in 1969 and in two fucking years we were playing golf on it" - a friend on the majesty and madness of America.

2022-08-17 15:08:41 @AmandaAskell Depends on your own outlook. For me, the cognitive overhead of having to periodically think about it meant it seemed simpler to get rid of it. I generally find interacting with bureaucracies pretty draining and do enough at dayjob didn't want any more

2022-08-16 15:23:50 Kind of mind boggling to me how insane this was/is - it traps people into debt that they really struggle to repay (unless work in a high-paying job like tech), and introduces a massive drag on the economy from people servicing loans instead of spending money. Weird policy!

2022-08-16 15:22:56 UK student loans were kind of sucky (I was one of first generation who even had loans - used to get grants). But interest rate was pretty benign - like a %point above inflation. My US partner, on the other hand, had loans on the order of 8% or so, which we aggressively paid down

2022-08-16 15:22:09 Just paid off my (UK) student loans

2022-08-15 06:43:22 @pmddomingos (also this isn't a loaded/gotcha question)

2022-08-15 06:42:26 @pmddomingos What's the least misleading temperature graph in your opinion?

2022-08-15 06:23:39 @pmddomingos What do you mean by lie here?

2022-08-15 02:21:34 @catehall Makes sense, thank you!

2022-08-15 01:54:12 @catehall What does anti-defect mean in this context?

2022-08-14 22:06:39 @jjvincent I can't open my fridge without eating a piece of cheese. Doesn't matter what I'm doing, a small quantity of cheese is inevitably getting nibbled.

2022-08-14 19:38:26 @jesseengel Will do. You don't have open DMs so you should DM me and we can arrange!

2022-08-14 19:27:04 @jesseengel Where are you based? I struggle with this issue as well (part of me finds the capabilities astounding, part of me worries about the ethics of development). Would you like to get coffee and discuss? Am in East Bay

2022-08-14 02:53:38 RT @mmitchell_ai: @jackclarkSF It's way too early to say this isn't a serious concern. IMO people aren't knowledgeable about using LLMs for…

2022-08-13 22:56:54 @regretmaximizer @jessi_cata https://t.co/eRNKhsqmOS

2022-08-13 22:37:04 @mmitchell_ai Just to be v specific, I was very worried about disinfo from gpt2 and I think that was way too early. Had proof of concepts but not really reliable

2022-08-13 22:36:18 @mmitchell_ai Oh I agree! I meant that back in 2018/19 I felt like it was imminent, but, like synthetic imagery, probably need a few more turns of the crank of usability/cost before it becomes prevalent.

2022-08-13 21:00:02 RT @AmandaAskell: I might disagree with people about which of the world's fires is the biggest, but I won't speak badly of people that are…

2022-08-13 20:55:33 @girishsastry @GaryMarcus @samuelmcurtis Yeah, iirc a load of fake profiles on LinkedIn appeared one day shortly after thispersondoesnotexist came out - ease of use stuff etc

2022-08-13 19:26:06 @vineettiruvadi Ah, the prior behind the tweet is I've been trying to find evidence of utilization and haven't, and spend a lot of time talking to people whose job is to find this stuff for a variety of people, and haven't heard of concrete things

2022-08-13 19:22:07 @vineettiruvadi Absolutely. What evidence do you have of large-scale utilization in a single instance?

2022-08-13 19:20:09 @GaryMarcus @justinhendrix @samuelmcurtis Cool, do you have any links for further reading? This may fit more into the seo/spam thing

2022-08-13 18:50:52 @GaryMarcus @justinhendrix @samuelmcurtis What's the example

2022-08-13 18:50:37 @GaryMarcus @samuelmcurtis This is mostly a not-yet thing due to unit economics, complexity of disinfo kill chain, need to invest engineering to effectively integrate LM etc. This'll change eventually.

2022-08-13 18:49:29 @GaryMarcus Haven't seen any clear proof of utilization versus spam/seo. If you have clear evidence of use please share it. (asked same q to @samuelmcurtis ). I also know people whose job is fighting mis/disinfo on platforms and they haven't seen much evidence or even symptoms of it yet

2022-08-13 18:42:00 @Miles_Brundage @jesswhittles Yup +10.

2022-08-13 18:32:25 @jesswhittles @Miles_Brundage I've also found 'when they get low we go high' to be a reasonable way to approach twitter. Sometimes things can get a bit agro but maintaining an optimistic outlook and being open to constructive feedback seems helpful.

2022-08-13 18:31:44 @jesswhittles @Miles_Brundage Tbh I think one of the only ways to have a more sensible discourse is to have more people thinking in public and contributing to conversation. Private docs kind of incentivize in-/out-group dynamics and increased risk of groupthink

2022-08-13 17:34:26 @juand_r_nlp @Ted_Underwood Notably, this is how Replika advertises itself on TikTok these days. Kind of gross, imo. https://t.co/OJzLJhGWFv

2022-08-13 16:44:21 @Ted_Underwood Yes! I worry a lot about 'fractal realities' enabled by AI. 100%

2022-08-13 15:55:16 @samuelmcurtis What evidence do you have for this?

2022-08-13 15:54:46 @jamescham It's that disinfo involves a complicated chain of actions (identification, using certain identities, generating content, filtering content, posting, updating in response) and it's not clear the effort vs reward part of using an LM makes sense... yet

2022-08-13 15:36:12 Given that, what are we likely getting wrong today about current risks of large generative models?

2022-08-13 15:35:54 In hindsight, a lot of the worry about use of language models for disinformation campaigns was kind of overhyped (I myself am guilty of this overhyping). Seems like most negative uses of LLMs so far have been Spam/SEO, or deliberately 'nasty' data distributions (e.g, GPT4Chan)

2022-08-12 22:06:30 @catehall oh, they're real.

2022-08-12 18:06:05 RT @JacquesThibs: EA donations to global poverty/health over time (through givewell): https://t.co/ydFvh33MDs

2022-08-12 04:56:37 RT @sleepinyourhat: Shameless plug: I’m now ~2 months into my sabbatical-year visit to Anthropic, and I’m really impressed. https://t.co/gx

2022-08-12 03:23:11 @rvinshit This is real good

2022-08-12 00:59:50 @aptshadow @CompellingSF Extremely hyped for Children of Memory. Have really loved this series!

2022-08-11 18:32:32 @FerrisHueller many people are saying this

2022-08-11 16:38:49 @akashpalrecha98 @karpathy Thank you, this makes me extremely happy : )

2022-08-11 16:15:37 @causalinf A+. Well done!

2022-08-11 16:10:38 @IasonGabriel ONE LIKE ONE TWEET DO IT, DO IT NOW

2022-08-10 17:30:21 RT @robrombach: The weights of Stable Diffusion, a latent text-to-image diffusion model, are open for academic research upon request!See h…

2022-08-10 17:22:57 Very good thread about the Voynich manuscript. Worth it for this sentence alone: "The Extensible Voynich Alphabet (EVA) is used by Voynichologists worldwide." https://t.co/5MpPoXApaC

2022-08-10 17:21:17 @officialKrishD https://t.co/NaMKX2i7G6

2022-08-10 17:16:50 @pgcorus that's different to the type of large-scale model I'm talking about. Applied work would include training on all streams of intercepted and scraped data, though for more specific purposes typically

2022-08-10 17:11:52 @mmitchell_ai @BigscienceW I <3

2022-08-09 22:16:54 @Simeon_Cps I'm not sure if impacts short timelines much - the fabs will take years to build, and fabs in the US are a lot more controllable than fabs in Taiwan, so it might increase governance levers. Hard to say though

2022-08-09 22:14:08 @Simeon_Cps Chips isn't gonna really change supply of frontier GPUs massively. It's a lot more oriented around cpus and memory.

2022-08-09 22:13:10 @Simeon_Cps Much of CHIPs is about reshoring parts of semiconductor supply chain so US isn't as dependent on other places for chips, including specialist US military chips. This makes policymakers more relaxed and reduces likelihood of extreme actions being taken in the future re supply.

2022-08-09 22:05:34 @Simeon_Cps Chips act mostly pushes on diff capabilities to those needed for frontier AI dev, so probably reduces risk by increasing robustness of gov.

2022-08-09 21:13:27 @RichardMCNgo This tweet would've been clearer if I'd specified 'RIGHT NOW'. Misalignment etc is definitely a future problem.

2022-08-09 21:11:32 @RichardMCNgo I am worried about AI doing bad stuff eventually - I'm more worried in the short-term about govs getting surprised by AI developments and being sputniked into doing dangerous stuff or increasing race dynamics

2022-08-09 18:48:03 RT @CorreaDan: The bill also included @realTinaHuang’s proposal to establish testbeds at NIST to support the development of trustworthy and…

2022-08-09 18:29:06 @ESYudkowsky @robbensinger Yeah I think it'd be helpful to write some bear cases on govt involvement (I'm going to try and do this, since I spend time advocating for more government involvement in some areas like monitoring)

2022-08-09 06:10:45 @rasbt Thank you!

2022-08-08 20:05:34 @rbhar90 +10

2022-08-08 18:34:59 @karpathy thanks for reading!

2022-08-08 15:42:14 @JCorvinusVR oh i agree, I was just trying to clarify that I myself don't think these ideas are practical or plausible.

2022-08-08 15:30:40 @JCorvinusVR https://t.co/9mvMyM19FP

2022-08-08 06:13:23 @bedbayesnbeyond Also pretty good. Maybe like 30 people? Same sentiment tho

2022-08-08 06:12:53 @bedbayesnbeyond This one is actually ok!

2022-07-27 00:55:46 @advadnoun It feels to me like this argument mostly boils down to 'this should be a public good, not something that accrues profit to a private company'. I don't necessarily agree with this, but I think it makes sense.

2022-07-26 14:05:04 @k_mcelheran @jamescham @kcnickerson @userfit @allenb @AriSalonen Sure, DM it to me

2022-07-25 16:45:45 @gwern @_joaogui1 back when I was a journalist I had a series called 'clark side of the cloud' where I toured data centers around europe and america and got into quite a lot of detail, so I know an unreasonable amount about this

2022-07-25 16:45:14 @gwern @_joaogui1 also worth noting a bunch of US data centers are in places like 'high desert' oregon (as lack of moisture good for free-air cooling), so I'm thinking that stuff like wildfires could also cause problems here

2022-07-25 04:03:27 @isosteph Yeah after I wrote it I realized that'd be insane and like a semi-jog. Great work!

2022-07-25 03:55:04 @isosteph A+, congratulations. Out of interest how long did this take, like ten hours?

2022-07-25 03:27:31 https://t.co/nWRd9nuGA5

2022-07-23 19:40:00 @alexandr_wang yes and yes. It's going to be fun.

2022-07-22 19:22:19 @jstrauss @vaisfourlovers yes, I try and have 2 days each week with zero meetings, and it is the main way I can get actual work done. I vociferously protect them

2022-07-22 18:48:07 @zacharylipton sorry you went through that and glad you got out, sounds absolutely horrific

2022-07-21 02:45:09 @dkaushik96 @KumarAGarg Extremely hyped for the movement on this today!

2022-07-20 01:18:42 @BishopTopsy Up and to the right, most days!

2022-07-20 01:18:23 @henrycomb_ Henry! Sorry to hear that, but glad you also gained the ability to see a bunch of positives. Miss ya!

2022-07-19 19:05:21 @slippylolo @arankomatsuzaki a+ meme game here.

2022-07-19 17:11:48 RT @ahandvanish: I’ll be testifying to Congress on Tuesday at 10am regarding #LongCovid, including current research and needs. A livestream…

2022-07-19 06:01:34 @iblametom Cc @BorisJohnson

2022-07-18 23:35:37 @rgblong @rupertg gave me an interview for ZDNet UK to become a technical reporter, when all I had on my CV was a fly-by-night SEO shop and obvious enthusiasm/knowledge about wonky stuff. Changed the course of my life.

2022-07-18 15:59:29 @IreneSolaiman can you share slides after? good luck!

2022-07-18 04:27:50 @Tim_Dettmers Thank you! I can sympathize, but my goodness losing hands would be tough. I shared as I figured it's good to let others know that people have these experiences and, thankfully, can get through them! : )

2022-07-18 04:26:42 @Steven_B_Lee A lot of prevention comes down to core strength. I've been working on that for years but am increasing efforts now. I'm also taking anti-inflammatory stuff like magnesium. Had some short term meds when had spasms but have tapered off weeks ago.

2022-07-18 04:25:38 @Steven_B_Lee I've had back issues since a teenager so I have some idiosyncratic physical stuff. I think this one was probably due to stress / not taking a proper holiday for years, and something caused a cascade.

2022-07-18 04:21:24 @KatiMichel *re-straightened

2022-07-18 04:20:58 @KatiMichel Yikes - glad it got fixed! And yes, it was pretty bad. There was a fun period in weeks 3-5 where I walked like the letter 'C' as spine was bent due to the spasms in one side. (Restrained with physio and fine now, but sheesh!)

2022-07-18 04:13:53 Anyway, who is to say there's that much of a difference between myself and a big model, when we both react to _predicted outcomes_, stacking our response in line with our subjective probabilities. When you're in crazy pain, I think difference between humans and dumb AI collapses.

2022-07-18 04:12:28 One surprising aspect of my recent Health Adventure was the role Fear played

2022-07-18 04:06:22 @IasonGabriel A+

2022-07-18 04:04:58 @IasonGabriel Let's finally do that hike if you're still in the bay! Didn't reach out recently to the above, haha.

2022-07-18 04:03:45 @PoemsWeBurned @adrian_weller Bay area

2022-07-18 04:01:30 @startuployalist Thank you. And thanks for sharing this thinker - wasn't familiar and will read... embodiment feels like a huge part of cognition.

2022-07-18 03:43:11 @mobav0 Thank you! I was grateful for how this experience really highlighted what is important in a Maslow's hierarchy pyramid form.

2022-07-18 00:02:24 @then_there_was @adrian_weller Oooh great tip! I will look into this, thanks

2022-07-17 19:42:31 @likeloss4words Oh yeah, I feel that. There are a bunch of areas like robotics where if we had dramatically more people working in it, AI would be able to flow much more quickly into world for useful purposes.

2022-07-17 19:38:51 @likeloss4words (this intuition comes from seeing the robot team work with a shadowhand at openai and realizing stuff like mechanical tendons and motors are all way more janky than I'd thought. It was very humbling)

2022-07-17 19:33:58 @likeloss4words It's not a key bottleneck, mostly it feels like motors and battery and other aspects are the hard bit, but I do expect AI will help with this stuff, and can also help exoskeletons eventually be more adaptable

2022-07-17 19:32:09 @volokuleshov Yeah, I had been doing planks/pushups a lot this year which had staved off back trouble nicely until the incident. I've now started doing a bunch of squats, and am trying to give myself a workout routine mostly built around strengthening back/core.

2022-07-17 19:25:57 @Ted_Underwood @deepfates I also want things like vast planetary-scale ecosystem monitoring via smart, semi-autonomous robots, cameras, satellites, and so on. These will be a bunch of agents coordinated by other agents.

2022-07-17 19:25:06 @Ted_Underwood @deepfates Why not both? It's great to have a culture-in-a-bottle, which is what gen models are, but it's also great to have agents that use a culture-in-a-bottle to bootstrap world models and take actions. I want to explore the deep sea and space - need agents.

2022-07-17 19:11:06 @MrPKent thanks for sharing - yeah, it is almost comedically bad how much it reduces your movement/agency. There were a couple of days where just turning over in bed was a multi-minute process with discrete stages and ridiculous amounts of grunting.

2022-07-15 05:06:59 @superglaze Popular spills over time sounds like a banging non-fiction book by a physicist

2022-07-14 21:14:22 @RishiBommasani @WilliamWangNLP Also seems like a reason for why academics should build their own big models, then raise funding to support one-to-many sampling infrastructure

2022-07-14 02:00:30 @gregeganSF @lacker Reality is typically both dumber and stranger than fiction.

2022-07-13 17:12:57 @bytestoatoms @AnthropicAI Thanks for pointing out the omission, we’ll fix it soon. (I passed this on internally).

2022-07-11 21:12:04 @DZhang50 @MichaelTrazzi thanks for sharing! I'd been following the google RL chip stuff for a while (e.g, here https://t.co/fyTEHoq539) but hadn't read this paper. Will read!

2022-07-11 20:45:47 @ArmanMaesumi @MichaelTrazzi well, a big diff is improving chip design more than with standard EDA software, which is what NVIDIA did here. This is distinct.

2022-07-11 20:16:02 Import AI is back! Seemed fitting to re-launch with a section on an AI company using AI to make parts of its AI chips more efficient for training AI systems that will further improve chip design to further increase efficiencies of AI training (^infinity) https://t.co/u7ywXOl19y https://t.co/tI1NUMI0Nc

2022-07-11 17:54:42 Extremely excited about our advisory board, jury members, and staff as well! @ruchowdh @EileenDonahoe @camillefrancois @ghadfield @EvaKaili @safiyanoble @navrinasingh @lyssaslounge @stefvangrieken @atg_abhishek @verityharding @wsisaac @rajiinio , and more! https://t.co/kUcugEJynL

2022-07-11 17:51:12 The idea here is to catalyze activity in studying, auditing, and assessing the traits of widely-used and/or deployed AI systems - from YOLO to CLIP, from GPT3 to GPT-NeoX, and so on. For AI to benefit society, we need society to have more critical faculties wrt AI.

2022-07-11 17:49:29 Do you want to audit a deployed or open source model for societal impact? Do you want to use, or create, open source software tools to do this? Would you like a chance at a $25,000 first prize? If so, please enter the AI Audit Challenge! https://t.co/qmLBxootfu https://t.co/7fUGYhK01C

2022-07-06 22:53:44 @HaydnBelfield https://t.co/A8C0WsU0lW

2022-07-06 20:04:56 Gave a talk today to a National Academy working group on AI and Workforce. Spent lots of my presentation talking about difficulty of measuring advance of AI systems, and how AI systems are also progressing more aggressively than expert forecasts. Slides: https://t.co/HbvdOoBP51 https://t.co/qLdUyhIDpk

2022-07-05 16:09:19 RT @JasonGMatheny: I'm thrilled today to start as the new president and CEO of the @RANDCorporation, an organization I've idolized since I…

2022-07-05 06:58:59 @nlp_pranav https://t.co/cjDU20TyEF . But I was moshing in spirit!

2022-07-05 04:54:24 Spend your freedom on supporting each other and doing things for your community and creating experiences that are fundamentally and deeply rooted. #VOTEDIY2022 https://t.co/0KvLPRhDQ7

2022-07-04 15:46:48 @pjbarden @SColesPorter @WorldSummitAI Dm me the addresses you're using

2022-07-02 08:19:11 @aggielaz I agree! Some of what the Dynabench paper espouses is moving to continuous + dynamic evaluations. Given the huge expressive space of these models it feels like you need an increasingly broad set of people to mess around with them to surface capabilities and pathologies.

2022-07-02 02:16:34 @cheeze_squeeze Thanks for being a reader! Newsletter on hiatus due to health stuff but coming back real soon

2022-07-02 01:22:30 @WillManidis @jeremyphoward Agreed! I feel like an underinvested area in ML is benchmark design. Progress is catalyzed by the existence of benchmarks and, I suspect, stagnates in their absence.

2022-07-02 01:17:55 @__anoop Getting there! https://t.co/M4ZHS0LEKe I also agree that this trend says our benchmarks are somehow broken.

2022-07-01 17:33:10 @jjvincent @whippletom incredible. Beer on me next time in london, yung weasel

2022-07-01 16:14:49 RT @ada_rob: That was fast. News sites are already using DALL-E (mini) to generate fake headline images. This move seems questionable to sa…

2022-07-01 05:20:33 @LastWordSword @vineettiruvadi @BW Also agree https://t.co/VzqnSP2Nu0

2022-07-01 05:15:42 @filippie509 https://t.co/VzqnSP2Nu0

2022-07-01 04:51:59 @TaliaRinger Absolutely! And if you know any super hard benchmarks please send them to me so I can advocate to add them to @indexingai . Also excited by fun ideas like this https://t.co/tMxHfYhsp4

2022-07-01 04:40:38 @bucketofkets Great example. Should have mentioned that in my thread, definitely in the back of my head as some chums like @AmandaAskell were involved.

2022-07-01 04:39:57 RT @bucketofkets: @jackclarkSF we started putting together bigbench exactly for this reason…and average human performance was exceeded befo…

2022-07-01 04:19:52 Obviously there are two conflicting points here - benchmarks aren't measuring true intelligence on a task as the AI systems they test still break in dumb ways, yet our benchmarks are becoming outmoded at ever increasing rates.

2022-07-01 04:15:02 Things are getting... Extremely weird. Think about what this graph may look like in spring 2023 (was published April 2021). From the excellent Dynabench paper https://t.co/v3TkBgSATM https://t.co/gApjigp21W

2022-07-01 04:12:08 @powerbottomdad1 @nwilliams030 Getting there! Will update in a tweet soon, but fact I'm back to obsessively reading arXiv is a good sign :)

2022-07-01 04:03:42 @powerbottomdad1 This is what import AI is for! On hiatus currently as I work through some health stuff but coming back real soon

2022-07-01 03:52:12 @vineettiruvadi I think that used to be true but in recent years we've seen the emergence of large-scale models that display great competency at a bunch of distinct tasks (and some capabilities spike above SOTA), so I don't think this is illusory progress.

2022-07-01 03:49:34 Both of these results were published TODAY. These results happen at a delay, so this is probably old information on order of 3-9 months. There are easily 5 labs and probably 10 with enough compute to play at this level. Imagine what we don't know right now?

2022-07-01 03:48:17 Here's a system that beats Stratego, a game with complexity far, far higher than Go. https://t.co/9vXncymWKE

2022-07-01 03:46:23 Here's MINERVA which smashes prior math benchmarks by double digit percentage point improvements https://t.co/humsCGxh6w

2022-07-01 03:44:44 As someone who has spent easily half a decade staring at AI arXiv each week and trying to articulate rate of progress, I still don't think people understand how rapidly the field is advancing. Benchmarks are becoming saturated at ever increasing rates.

2022-07-01 03:33:48 @jordannovet Photograph of person successfully opening thin green produce plastic bag pov

2022-06-28 17:15:17 RT @BigScienceLLM: 100%

2022-06-28 03:11:07 @kaushikpatnaik +1! Hadn't seen this stuff by @hardmaru and it's awesome

2022-06-28 02:46:13 @jmdagdelen https://t.co/HE2V7LgpQT

2022-06-28 02:36:37 @NicoleHemsoth Yeah I think this is a v pertinent question! https://t.co/bIYqFDESD7

2022-06-28 02:35:20 @JayPatel0101_ @karpathy Wouldn't take the other side of that bet!

2022-06-28 02:14:37 @karpathy Plus: indie videogames, music videos, advertising...And when video models get good, geez!

2022-06-28 02:11:48 Gen models are also a lot easier to deal with. When I edited my university's student newspaper I had to try and get a photograph of a person in a wheelchair being attacked by a swan - massively difficult, unless you were there. With Imagen or Dall-E (sans filters) u can do it.

2022-06-28 02:09:28 The Stock Photo industry is probably not ready for generative AI. Generative AI seems better for 80% of use-cases. In other words, NYT still gonna do illustrators, but a random website will probably find economics of gen models more attractive than a Shutterstock subscription.

2022-06-27 17:51:48 @b_cavello @Unibo Can you share your slides? Would be curious to see

2022-06-27 16:20:33 RT @EthanJPerez: We’re announcing the Inverse Scaling Prize: a $100k grand prize + $150k in additional prizes for finding an important task…

2022-06-27 03:42:42 @nsaphra @michael_nielsen I loved Dark Forest and found the third more interesting than you seemed to. However, they've done tons of great short stories that I'd recommend. Will try and msg you with titles when I get back to the book!

2022-06-27 02:39:19 @michael_nielsen This is how you lose the time war - El-Mohtar and Gladstone; Shards of Earth - Tchaikovsky; Anything by Cixin Liu, especially the short stories; A Colder War - Stross

2022-06-23 19:57:28 RT @ChrisGr93091552: Explored github copilot,a paid service, to see if it encodes code from repositories w/ restrictive licenses.I checke…

2022-06-23 19:49:56 @MurphFromNerf @carlfranzen https://t.co/tI60HcaDTS

2022-06-21 02:28:08 The good news is I'm doing better most days. I just need to budget my energy and mental attention a lot.

2022-06-21 02:27:07 Import AI will also not come out this week due to my aforementioned medical issues. However, I am generally doing better and will post an update once I feel I'm through it. So far, doesn't seem chronic, just very slow recovery. https://t.co/atNWBPqcUg

2022-06-20 20:49:16 RT @sarahookr: I think this is one of the most important open roles at @CohereAI right now. I'm personally invested in helping find a good…

2022-06-20 18:43:17 @CineraVerinia @tszzl Most major Chinese results for frontier ML/DL also get published in English. The Chinese-only papers tend to be either oriented around specific, deployed applications, or are lightly to heavily classified PLA-linked research pubs. (Source: I look at this annually for @indexingai)

2022-06-19 03:56:08 @Brainmage Honestly, wtf. https://t.co/S2W56YuLIr

2022-06-19 03:55:27 @Brainmage Crungus be shopping https://t.co/SYchaEEJWN

2022-06-18 04:25:13 @ElectionLegal This. This, I struggle with.

2022-06-16 22:45:59 @tsimonite A+

2022-06-16 21:02:37 https://t.co/N6togcU4n3

2022-06-15 18:39:13 I expect I'll be fine in the long term, but it's sufficiently serious that most of the medical advice I've been given has been a variant of 'do absolutely nothing so your body has best chance of healing'. Will update as I get better details.

2022-06-15 18:38:39 Import AI won't be coming out this week as I've been dealing with some fairly serious medical things for the past three weeks. I'll post an update here once I have a handle on the situation.

2022-06-14 04:15:05 @FelixHill84 Cc @AmandaAskell

2022-06-14 02:57:16 @jrieffel @JonathanBalloch @katecrawford Exactly - the problem is we're cannibalizing the intellectual base on which forward progress depends, and we're pushing many publicly-minded folks into locked down information silos.

2022-06-14 01:44:28 H. G. Wells out here anticipating 'move fast and break things' in 1937 (from: World Brain). https://t.co/ArlYdQly2Z

2022-06-13 18:07:18 @alexbarinka @sarahfrier @KurtWagner8 @business oh, rad! excited to see the reporting you do. I still have fond memories of how effectively you terrorized IBM!

2022-06-13 18:06:14 @TheSeaMouse @katecrawford there was a lot of private sector research prior to alexnet, e.g a lot of early handwriting and speech rec comes from private sector places like AT&T

2022-06-13 06:41:01 @t0nyyates @katecrawford Basically high water mark results in AI e.g perceptron (60s), ALVINN (early self-driving, 80s), Deep Blue (90s), Imagenet (2012), alphago (2015/16), gpt3 (2020), etc

2022-06-13 01:17:13 RT @AlexGDimakis: This is indeed a challenge for us in universities. Novel algorithms, clever ideas and provable bounds are harder to get w…

2022-06-12 21:20:27 RT @nathanbenaich: I found this chart particularly striking - the fact that academic researcher's ability to compete with industry research…

2022-06-12 17:06:30 We did it, everyone! GoFundMe is done. Message from Joe and Jenn here https://t.co/dh48pymrEy https://t.co/9nRAL0Mvsp

2022-06-12 06:33:07 Y'all are incredible. This makes a huge difference to my friend. Thank you! https://t.co/wq0SVYaRua

2022-06-12 03:51:47 Thanks so much to everyone who has donated. This so far makes a huge diff - already a workday and change in funds for my buddy!

2022-06-11 18:05:02 @thetorpedodog Yeah, they made it sound like they didn't have the people to take reports for stuff like this. Bewildering and absurd.

2022-06-11 17:52:24 @Breck_Maynokur @MikeIsaac Wow

2022-06-11 17:50:03 @MikeIsaac It was crazy! We called back a few hours later and asked about the chances of getting someone out to see it and they basically said since it was a Friday night in Oakland they'd all be busy elsewhere. A classic 'what are my taxes even paying for?' moment.

2022-06-11 17:44:38 Also, Oakland PD said they weren't taking police reports as they didn't have capacity (???) and wouldn't come to see it, which is extra enraging. We think the same person torched some other vehicles around the neighborhood as well.

2022-06-11 17:42:21 Yesterday, some nutcase set fire to my friend Joe's truck. Joe depends on his truck for his livelihood (he's a metalworker/welder), so this has really screwed him over. If any followers could spare some $ it'd help out a hardworking, lovely guy. https://t.co/66BgoMMMns

2022-06-10 21:22:10 @Derek_duPreez Congrats! Same energy: https://t.co/J1AtkyXdM2

2022-06-10 20:05:30 @chriscanal4 Excellent, you're in for a treat! I particularly like some of the flashback earth sections. Season 2 has some really nuts stuff, also.

2022-06-09 20:18:29 @giffmana @katecrawford https://t.co/kJjCsOSUPU From this @AnthropicAI paper https://t.co/6R2Qa4szOr

2022-06-09 18:27:57 @MichaelTrazzi fwiw I'd put myself more in the middle - I don't really think AGI is gonna be bad, I think it's more like 50/50 currently

2022-06-09 07:36:40 Infinite Art feels a bit like Infinite Jest. https://t.co/UEHdAVJPbB

2022-06-09 01:45:09 RT @raphaelmilliere: How should the AI research community broadly refer to the family of large, pre-trained, self-supervised, scalable mode…

2022-06-08 23:15:36 RT @unsojo: @jackclarkSF @mmitchell_ai @alexhern @HaydnBelfield @IreneSolaiman A lot of anthropologists have written about how different so…

2022-06-08 23:15:29 @unsojo @mmitchell_ai @alexhern @HaydnBelfield @IreneSolaiman Helpful and illuminating thread, thanks very much!

2022-06-08 21:44:32 @mmitchell_ai @alexhern @HaydnBelfield @IreneSolaiman @unsojo Absolutely! Please send stuff over to read. I think the above analogy isn't very well fleshed out on my part, and I'm more just looking at industrial practices rather than innate things (also: not a psychologist). Would be excited to read

2022-06-08 20:46:20 @jsotterbach @katecrawford Our mutual chum @Hernandez_Danny did this analysis based on the data from this paper https://t.co/ZfbnUdqlgu, iirc.

2022-06-08 17:58:00 @thomeagle I particularly enjoyed their EP "Hard Reardan Steel", and their single "No God Only Galt"

2022-06-08 17:55:56 @thomeagle Ayn Rand Accelerationism

2022-06-08 17:53:02 @jrieffel @katecrawford Actually, sort of more worryingly, it's less stark here. (Which suggests academia is really under-resourced for frontier). From: https://t.co/jZE3vcGIok https://t.co/fW4SZID7W4

2022-06-08 17:42:18 @bryanrbeal @indexingai I'm not suggesting universities build DCs - that'd be bad. I'm saying places like NSF are not really setup to do grants that'd let a professor and students have enough resources to train a big model on Azure/AWS/GCP.

2022-06-08 17:32:11 @bryanrbeal @indexingai Additionally, academic incentives and compensation structures make it really challenging to hire the engineering teams required to build and develop big models. There is work being done here (e.g, National AI Research Resource in the US), but much more to do.

2022-06-08 17:31:32 @bryanrbeal I agree with you - something we cover in @indexingai regularly is the plummeting cost to train decent computer vision models on clouds, so def true. On the other hand, academia can't build this stuff - funding orgs like NSF really aren't setup to fund 1000+ machine clusters

2022-06-08 17:17:04 @bryanrbeal (I don't mean this to sound catty and we may be making complementary points - it's def true that the ability to utilize benefits of AI is cascading down to everyone through large commercialization. I just think that's only a slice of the whole pie, in terms of benefit.)

2022-06-08 17:16:04 @bryanrbeal It also means only private sector actors are privy to the national and economic security implications of frontier, large-scale systems. Last time this happened was oil industry having better intelligence gathering than govs - led to antitrust, creation of gov intelligence, etc

2022-06-08 17:15:27 @bryanrbeal It's a big deal that the private sector rather than academia and governments are the ones best positioned to develop the most resource-intensive models. That means we get all the benefits of _capitalist_ AI, but we don't get the benefits of pro-social AI (diff incentives)

2022-06-08 17:14:27 @bryanrbeal The 'means of production' for things like large-scale AI systems are controlled by a tiny set of actors, and the number of organizations that can train large-scale systems are predominantly in the private sector. The resulting _services_ are broadly available

2022-06-08 17:06:35 A few weeks ago, I gave a presentation about the consequences of industrialization of AI with a particular emphasis on geopolitics. Slides here, for the enthusiasts! https://t.co/67s11LJRkV

2022-06-08 17:05:12 It's covered a bit in the above podcast by people like @katecrawford - there's huge implications to industrialization, mostly centering around who gets control of the frontier, when the frontier becomes resource intensive. So far control is accruing to the private sector (uh oh!) https://t.co/rVe5jtuCkw

2022-06-08 17:02:47 Talked with @TheEconomist about the industrialization of AI - a theme I've been covering for years in Import AI, and some of the implications of this which we @AnthropicAI laid out in 'Predictability and Surprise in Large Generative Models' https://t.co/6R2Qa4szOr . https://t.co/HsAkY77We9

2022-06-07 20:15:18 @alexhern @HaydnBelfield @mmitchell_ai @huggingface Again, this isn't how I think it _should_ be, it's just how the incentive landscape appears today.

2022-06-07 20:14:58 @alexhern @HaydnBelfield @mmitchell_ai @huggingface Well, some of these orgs seem to have implicit goal of 'make money', so it's as disappointing as the rest of capitalist-incentives, I suppose. Currently, there's a tradeoff they face here, and they haven't been incentivized via regulation/legal precedent to behave differently.

2022-06-07 20:12:42 @alexhern @HaydnBelfield Some of the language coming out of European Commission points in this direction, though I expect will need to be clarified via specific legal cases to create hard precedent. https://t.co/BgwBbEaBbU

2022-06-07 20:11:56 @alexhern @HaydnBelfield @mmitchell_ai @huggingface Ultimately I expect this is one of the big, gnarly things that will need to be decided via a suit in court. What rights do users have to edit the data about them represented in models trained on public internet, etc. Lots of preliminary work/regulation happening, but untested

2022-06-07 20:11:10 @alexhern @HaydnBelfield Yes, you can do this via a few research techniques. E.g, this paper is representative of some of the work going on in the field https://t.co/Dea4LI9t3t . The big question is whether an LM provider will _let you_ do this. iirc @mmitchell_ai is thinking about this at @huggingface

2022-06-07 20:06:02 RT @RANDCorporation: We’re thrilled to announce that Jason Matheny has been selected as our new president and CEO. He is an economist, tech…

2022-06-07 20:05:13 @alexhern @HaydnBelfield A lot of my personal experience in policy at places like OECD/UN/etc is that it's quite hard to put hard legal regulations around things in the abstract, but it's relatively easy to take specific use-cases and classify them within law, so AI might look like this.

2022-06-07 20:04:28 @alexhern @HaydnBelfield And some of it may ultimately be catalyzed via use-cases. E.g, DALL-E being used to generate things where the prompts don't seem to imply infringement of existing IP probably leads to novel outputs, but if your prompt is 'generate variations of mickey mouse' you may be infringing

2022-06-07 20:03:39 @alexhern @HaydnBelfield I mean there's a general desire to put some constraints around large-scale data scraping from public sources, and this runs into tension with concepts of 'fair use' of IP for creating new things. 1/2

2022-06-07 20:02:06 @alexhern @HaydnBelfield (I am also not a lawyer, but I've been trying to read about this particular intersection a lot. It seems like you can think about AI systems more sensibly when deployed within a corporate, profit-making system, but it gets harder when being done as DIY non-profit things)

2022-06-07 20:01:14 @alexhern @HaydnBelfield They sort of do and sort of don't - there's an interesting tension between stuff like fair use of data, and how we want to apply data protection to big, scraped datasets. I think a lot of the tension is in AI-Creativity because it's where these things come into tension.

2022-06-07 19:56:18 @alexhern @HaydnBelfield Neither am I! If you look at last hundred years there's been this general personification of corporations under law (creepy!), so that's the environment AI systems are getting built within. Consequently, AI systems seem to run into edge-case conflicts with much existing law.

2022-06-07 19:45:04 @alexhern @HaydnBelfield I'm not saying AI systems _should_ exhibit these qualities, just pointing out that they do currently exhibit these qualities.

2022-06-07 19:42:16 @alexhern @HaydnBelfield Humans also don't have an innate concept of data protection - we look at stuff and hear stuff that we're exposed to, and we naturally seek out things we're curious about regardless of implicit privacy (e.g, most humans love gossip)

2022-06-07 19:41:19 @alexhern @HaydnBelfield It feels tricky, as humans learn both on heavily curated curricula (e.g, school), and also uncurated mass trawled data through sensory experience/day to day life. Latter comprises vastly more than former. Same seems true of AI - some small % of curated data, large % uncurated

2022-06-07 04:03:11 @TrungTPhan Depends on how much the people are being paid and how tips are being split. Good tips typically turn shit wages into livable wages for service industry people. (Obvs depends - some businesses don't distribute tips properly so don't do that there)

2022-06-06 23:41:08 RT @0xabad1dea: just watched the AI generate an image that very clearly had the Shutterstock watermark on it and evolve it out over a few i…

2022-06-04 22:09:34 RT @ch402: People often complain that modern ML is throwing GPUs at problems without new research ideas. This is like finding evolution ugl…

2022-06-04 02:25:01 @mark_riedl Yeah, but I really liked seeing the atheist AI society stuff. Agree the flying snake stuff was hella confusing!

2022-06-04 02:16:16 Raised by Wolves is an amazing show unafraid to boldly pursue wacky ideas around AI/machine consciousness and the interaction with faith and religion. I really hope someone picks it up and does the third season. If anyone does this, I can guarantee I shall be an obsessive viewer! https://t.co/GTLFMupeSR

2022-06-03 22:45:50 @guyi @Plinz *counterintuitively

2022-06-03 22:45:30 @guyi @Plinz Yes, I think one culture is impossible, but it doesn't prevent it being an implicit goal for a bunch of people. Personally, I think variety and heterogeneity are both more useful and counterintuitive more stable

2022-06-03 22:37:39 @Plinz @guyi I'm going to write something lengthy about this in a bit, but worth noting my Twitter handle used to be 'mappingbabel' for a reason. I do think consequence of internet is this swing back and forth between one culture and a fractal infinity of them. Somewhat sad sometimes!

2022-06-03 18:37:00 @Plinz Yes, massively scary! That's why I mean it's a difficult and real choice - spectrum of no curation to full curation, and different entities will pick different points

2022-06-03 07:25:39 @Plinz https://t.co/x9PQm7xTY7

2022-06-03 07:20:26 @BonesMcGowan My point here is if you're taking samples from a big generative neural net you'll typically sample stuff which represents the dataset, so the resulting sample will reflect the underlying reality you've encoded via your dataset selection.

2022-06-03 07:16:47 @Plinz Many big models are trained on many multiples of library of Congress yet still exhibit various negative traits

2022-06-03 07:13:23 @machinaexethica Yeah, there's a meaningful split here re agency of end-users vs system-developers. On 1 side u have 'anything goes' and on other u have 'defined 95%+ by system developers'. u see this in current diffs between OS releases (eg OPT, Eleuther) and hosted services (e.g OpenAI, Cohere)

2022-06-03 06:26:34 (and a bunch of others whose names I am forgetting!)

2022-06-03 06:25:49 Kudos to @timnitGebru @mmitchell_ai @rajiinio @jovialjoy @Abebab @IasonGabriel etc who have all done foundational work here

2022-06-03 05:45:53 LLMs are like media organizations - each one will be of a certain view and LLM providers will face a choice: be fully libertarian or choose for proactive reification of certain identities. Just as with human culture, there can't be a single culture that everyone feels home in.

2022-06-03 05:43:50 This article illustrates a significant point about AI - who gets represented? E.g, if you're developing an AI model, how does it categorize various classes, and how "correct" is it from POV of end users. End point seems like a fractal - as many LLMs as there are 'classes'. https://t.co/P28rvKmtQL

2022-06-03 03:17:30 @ElectionLegal Absolutely incredible. Everyone I showed this to was confused and horrified. A+ performance.

2022-06-02 19:26:42 RT @AmandaAskell: 1) Do you think the welfare of present people is more intrinsically morally valuable than the welfare of future people?2…

2022-06-02 05:47:25 RT @Ket_Cherie: @jackclarkSF Not really. It’s predictable.Different legal arguments for copyright (common law &

2022-06-01 17:04:59 @Ket_Cherie Thanks for clarifying - very helpful context! Do you think there are any particularly good texts or research papers on intersection between copyright and TM and AI-generated content I should read?

2022-06-01 17:00:05 @Ket_Cherie Does TM = trademark?

2022-06-01 16:16:41 @LordeCelsius You must imagine 'The Forbidden Mickey'

2022-06-01 16:14:08 @pwillsit Please finish this paper, would love to read! Or feel free to send me a draft. I think about this stuff a lot but don't have particular training in law so am v much dog on the internet here.

2022-06-01 16:10:44 Gonna be interesting to see what happens when AI generated art collides with notoriously litigious IP-protection companies like Disney and Nintendo. It'd be interesting to see how they'd react to a universe of Mickey variants, or a vast set of Peach permutations, etc.

2022-05-30 22:23:50 @iandanforth @gwern Is our culture more influenced by photos from Leica versus smartphones? Medias don't really die they just become less dominant

2022-05-30 21:39:49 @andy_l_jones @gwern Will write something when back from holiday!

2022-05-30 21:39:35 @andy_l_jones @gwern Maybe the scariest idea is that gen models will, pretty much by design, compose and synthesize outputs which people may not ever have imagined, let alone created

2022-05-30 21:38:46 @andy_l_jones @gwern Very good writeup. I've been pondering this stuff while taking a break and I'm more updating in the direction that gen models ultimately end up driving culture, so rather than polluting our own datasets, we're instead about to kinda bootstrap cultural production via artefacts

2022-05-26 20:48:46 RT @ethanCaballero: .@RichardSSutton estimates 50% probability of Human-Level AI by 2040: https://t.co/vCyE6delrT

2022-05-25 21:00:39 @mmitchell_ai Amazing

2022-05-24 01:53:10 @fieldsofcorn89 @quantumVerd @GoogleAI Video generation is probably gonna be quite a while because doing the sequential stuff over time makes the complexity scale significantly, so v expensive. Wouldn't be surprised if under a decade, though probably short time horizons.

2022-05-22 02:09:24 @norabelrose @patrickmineault Yeah I think retrieval from external trusted knowledge bases is gonna make all LLMs a ton better

2022-05-21 14:30:01 RT @jacob_feldgoise: National AI Research Resource (NAIRR) Task Force just approved their interim report! Very excited to read it.- Rele…

2022-10-28 16:22:41 @jjspicer read this at once https://t.co/6M8qWJ2Akl

2022-10-27 21:30:29 RT @ClementDelangue: We just crossed 1,000,000 downloads of Stable Diffusion on the @huggingface hub! Congrats to Robin Rombach, Patrick Es…

2022-10-27 18:42:19 @tszzl I played Half Life: Alyx on the Valve Index and it hard-updated me to VR games being amazing, albeit still a little early.

2022-10-27 00:02:01 RT @rgblong: Another question for consciousness scientists and AI people: What is the best evidence for and against large language models…

2022-10-26 17:20:39 @MishaLaskin @junh_oh @RichiesOkTweets @djstrouse @Zergylord @filangelos @Maxime_Gazeau @him_sahni @VladMnih looks cool - link to the paper?

2022-10-26 03:29:56 @gijigae Thanks so much for reading!

2022-10-26 02:05:38 @kipperrii 100% here for fontposting. More fontposting!

2022-10-25 20:15:15 @carperai Haha fair enough. Thanks!

2022-10-25 19:25:12 @carperai You're welcome. Any clues as to release timeline?

2022-10-25 18:49:10 Stages of living in California: First earthquake: dear God hope I've made my Will. Fifth earthquake: Gosh, that was a little rumbly. Hope everyone is ok. ???th earthquake (which happened in SF just now): Wonder what gifs people will post on Twitter about this one?

2022-10-25 17:14:50 RT @ElineCMC: .@jackclarkSF's take on this: "The tl

2022-10-25 15:30:28 @EricNewcomer @TaylorLorenz awesome metrics, congrats Eric! Really nice to see your success here : )

2022-10-25 01:52:36 @Meaningness @ArtirKel it's got a bunch of nice examples in it - definitely worth a skim

2022-10-25 01:45:29 One of the nice things about the AI space is when people take their criticisms and instantiate them as quantitative studies of existing systems - kudos to @GaryMarcus et al for a nice paper going over some failures in DALL-E2

2022-10-24 23:21:03 @tszzl Did it with my Senate testimony and was v surprised. Generally find myself running any long doc I write through a LM these days

2022-10-24 05:37:13 Import AI will be coming out on Tuesday as I sprained my ankle at Nopes' final show last night. #VOTEDIY2022 https://t.co/CaKrgLf00r

2022-10-24 01:47:22 RT @SamuelAlbanie: Just how striking are the recent language model results with Flan-PaLM?Here's a plot.Across 57 tasks on mathematics,…

2022-10-23 19:02:59 @CecilYongo Can you share slides? Looks interesting

2022-10-23 02:23:43 @zacharynado You can't necessarily look at the datasets with an API, unless has huge amounts of extra eng time invested. E.g can anyone inspect JFT?

2022-10-23 01:41:19 @kylebrussell Does Playbyte have an in-house exorcist team?

2022-10-23 00:05:13 Image generation going to get strange when most of the images we train on are synthetically generated. Feels like a classic 'tragedy of the commons'. (Yes, some have watermarks, but my sense is there's a race to the bottom on that kind of thing). https://t.co/A3QawDog90

2022-10-29 19:09:59 @rvinshit @porksmith Absolutely incredible. A+

2022-10-29 18:32:38 @xriskology Some people shut down because they find engaging to be traumatic

2022-10-30 04:45:04 My friend: yeah I totally shit the bed on that prop I made for the Halloween party. Did it in a couple of days. It's no good. The prop: https://t.co/zVsuYnWfqO

2022-11-16 02:09:12 @vaibhavk97 @EMostaque Mostly because I spent a lot of time thinking about size of clusters owned by public sector versus private sector, so always interesting to get new data points. Additionally, some orgs are compute-heavy and some are compute-light, so interesting to track specific numbers.

2022-11-16 02:02:57 @kohjingyu @EMostaque https://t.co/7XUXCZMFYy

2022-11-16 01:29:46 Stability AI (people behind Stable Diffusion and an upcoming Chinchilla-optimal code model) now have 5408 GPUs, up from 4000 earlier this year - per @EMostaque in a Reddit AMA
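
A rough back-of-the-envelope sketch of what a cluster that size implies, assuming all 5,408 cards are A100-class parts at roughly 312 TFLOPS of dense 16-bit tensor throughput each (the exact hardware mix is an assumption, not stated in the AMA):

gpus = 5408
tflops_per_gpu = 312  # assumed A100-class dense BF16/FP16 tensor throughput; not from the tweet
peak_pflops = gpus * tflops_per_gpu / 1_000
print(f"~{peak_pflops:,.0f} PFLOPS peak 16-bit compute (~{peak_pflops / 1_000:.1f} EFLOPS)")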

2022-11-17 18:15:03 Demos are the best way to 'show, not tell' people about the current state of AI. I also find demos are one of the best ways to talk about contemporary safety issues with people - one thing to read about safety, another to see a model break in front of you in a wild way. A+ https://t.co/Wsku7NPtVA

2022-11-17 18:06:12 @arxiv this absolutely rules. I am going to now buy even more arxiv merch to celebrate. congrats both!

2022-11-18 17:24:03 RT @irinarish: Thrilled to announce that our joint proposal (@Mila_Quebec @laion_ai @AiEleuther) on "Scalable Foundation Models for Transfe…

2022-11-19 19:45:33 @BlancheMinerva Not the paper you're looking for, but this paper perhaps might cite it https://t.co/wD5PCL4Sa9

2022-11-19 17:07:27 Given that each newsletter entails reading arXiv for approximately 10-15 hours, I've read arXiv for at least 3000 hours in the past six years, likely much more. FOR FREE. It is an incomparably brilliant thing and we should all support it.
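
A quick sanity check on that figure, as a sketch assuming roughly one issue a week (the weekly cadence is an assumption, not stated in the tweet) and the lower 10-hour bound:

years = 6
issues_per_year = 52   # assumed weekly cadence
hours_per_issue = 10   # lower bound of the 10-15 hour estimate above
print(years * issues_per_year * hours_per_issue, "hours")  # 3120 hours, consistent with "at least 3000"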

2022-11-19 17:05:12 I just bought myself a load of @arxiv merch for Christmas - and you can too! https://t.co/AKFrx6e045 This is a great way to support one of the most important open infrastructures of the AI ecosystem. Plus, you'll be swagged out for the festive season.

2022-11-19 12:23:24 Aeon Station - Leaves

2022-11-23 05:45:15 RIP Mimi Parker of Low - an incredible singer, and a member of one of the few bands that has caused me to ugly cry at a live gig. You were brilliant and you were loved. https://t.co/Izelb0AEAC https://t.co/Zbytbt1yUA

2022-11-23 04:59:03 Jane, don't you know me? By Elvis Depressedly is another Sad Banger. A+! https://t.co/YMPeGxmbJF https://t.co/TTLb1JIrqw

2022-11-25 21:11:54 RT @nearcyan: few people seem to understand the *real* reasons why powerful AI image generation models are usually unreleased this is one…

2022-11-25 20:54:49 @dinabass Honestly thought USA defense was very good

2022-11-25 20:53:00 Replace England manager with Head of Lettuce

2022-11-25 05:28:00 Happy Thanksgiving, twitter! https://t.co/LvUU54vDiU

2022-12-07 18:50:16 @HaydnBelfield There's definitely a tax, the main q is how painful the tax is versus how much you're willing to spend to gain the capability. As this paper notes, the tax on decentralized training has been reducing over time

2022-12-07 16:57:32 @alexeyguzey @anguzey Congratulations. America+++!

2022-12-08 15:30:38 @PetarV_93 @andreeadeac22 looks very interesting. quick q - what are the key diffs between v1 and v2 of paper? might write up for import ai and would find pointers (pun intended!) helpful

2022-12-08 13:15:57 RT @RhysLindmark: 1/ Live tweeting talk from @AnthropicAI @AmandaAskell at @southpkcommons. Moderated by the excellent @soniajoseph_! htt…

2022-03-18 23:07:06 RT @TheRegister: Victor Shepelev, @zverok, is a Ruby developer who lives in Ukraine Since Russia invaded his country, he's had more pressi… 2022-03-16 21:34:05 @IEthics @mathemakitten hi there - yeah, this is a tough area to gather data on, so it's sort of chicken and egg - can't write about it until data exists. There are projects at GPAI and OECD trying to back out environmental impact of AI computation, so I'm expecting we'll have data to incorporate soon 2022-03-16 18:54:34 One fun thing @indexingai did this year was survey a bunch of AI researchers about the price they paid for robotic arms. Seems like emergence of new low-cost arms is making a difference. If we make it cheaper to do research on real robots there will be more of it, so that's neat! https://t.co/z0W8IkJoDk 2022-03-16 18:53:22 RT @indexingai: TOP TAKEAWAYS FROM THE 2022 AI INDEX REPORT - A THREAD This year, we partnered with a broad set of academic, private, and… 2022-03-16 17:34:20 @realTinaHuang @dzhang105 @nmaslej they worked extraordinarily hard and helped us make the best report yet ! 2022-03-16 15:38:46 @luislamb @erikbryn @indexingai @StanfordHAI @RayPerrault @terahlyons @jcniebles @MPSellitto @yshoham thank you! it takes a (growing!) village to try and characterize this space. always grateful for the feedback and engagement 2022-03-16 15:38:19 For years, people have said the AI Index should try and do more work on AI ethics. Well, now we have an entire chapter. Some of the highlights include explorations of trends in toxicity and bias in language models, analysis of workshop topics, and more! https://t.co/N7Uc4InJKw 2022-03-16 15:35:06 RT @erikbryn: The 2022 AI Index Report is out today! 230 fascinating pages of facts and figures. https://t.co/4YToMRtRVm @indexingai @St… 2022-03-16 13:49:26 @BMarcusMcCann @YouSearchEngine What is the underlying model? 2022-03-16 06:15:21 @RichardSocher @peteskomoroch What is the underlying model? 2022-03-16 03:33:05 RT @RidT: What are the geopolitics of artificial intelligence? When is AI politically neutral? And what use-cases favor the values of close… 2022-03-13 19:16:07 @jachiam0 @nabla_theta this is how you joker-fy agi. do not do this. 2022-03-13 07:00:56 RT @TrishBytes: Another great point via @jackclarkSF is that due to GDPR, the Meta team wasn't able to use data from Europe, meaning "Europ… 2022-03-13 01:20:40 @jekbradbury Yeah, agreed. I'm going to write this up in newsletter and tweet later once I put thoughts together 2022-03-13 01:20:17 RT @jekbradbury: @jackclarkSF I think your overall point is still important! Several Chinese projects have been collaborations between comp… 2022-03-13 00:14:50 @deliprao @BlancheMinerva @huggingface @BigscienceW How do you build a radio shack computer club for this stuff? My hypothesis is you need a big blob of publicly accessible compute (e.g national research cloud) 2022-03-13 00:13:35 @jekbradbury Thank you very much for the callout https://t.co/JPTNIdGOL9 2022-03-13 00:12:39 @BlancheMinerva @huggingface (as in, HF has explicitly tried to clarify this to me, hence confusion) 2022-03-13 00:11:11 @deliprao @BlancheMinerva @huggingface @BigscienceW Yes, that's true. I think it'd be valuable to have more projects like BigScience utilizing US HPC for this reason. Same reason why I think a national research cloud is also important 2022-03-13 00:10:30 @deliprao @BlancheMinerva @huggingface @BigscienceW I'm a bit confused - what about the Tsinghua and BAAI stuff is massively closed? 
They have some closed things (e.g wudao) but they have also released a bunch of stuff. (They're not an open-by-default org like those you mention, but they don't seem like a fully closed group either 2022-03-13 00:09:16 @BlancheMinerva @huggingface So, I hear different things. I've actually presented BigScience as a HuggingFace project before and been called out that's it's really an academic project and HF is helping, so I wasn't sure how to present it. Do you think of BigScience as a majority contribution from HF? 2022-03-13 00:08:19 @BlancheMinerva Thank you very much for calling me out here! https://t.co/JPTNIdGOL9 2022-03-13 00:07:38 @deliprao @BlancheMinerva @huggingface @BigscienceW I think BigScience is excellent, though I'm not sure why academia+gov lab = closed science. Baai has published and released models and datasets, as has Tsinghua 2022-03-13 00:06:52 It's rare that I delete tweets, but I think clarifying something is misleading is sometimes less good than just erasing it and logging that you messed up. 2022-03-13 00:06:20 I tweeted a Chinese paper about training frameworks for 174 trillion parameter models earlier, but the tweets were misleading in terms of computational amounts (thanks @jekbradbury and @BlancheMinerva for callouts), so I deleted the tweets as they were starting to do numbers. 2022-03-12 23:18:55 @SashaMTL @BigscienceW Yeah, it's good! It's weird to me there aren't more initiatives like it 2022-03-12 23:18:25 @SashaMTL In what sense? 2022-03-12 18:16:08 @StasBekman @suzatweet @BigscienceW https://t.co/y1vQOwf3Mn 2022-03-12 17:43:59 @Derek_duPreez hey good for you pal! I've got some friends in margate so I'll come by next time I'm in UK 2022-03-12 08:11:00 CAFIAC FIX 2022-01-25 04:12:49 @jjvincent For added joy, try exclaiming 'gosh look at this beautiful nutrient vehicle' at the next thing you eat 2022-01-25 03:23:32 @bigblackjacobin realize unlikely but I feel like Corey Hawkins should have a shot as best supporting actor for role as Macduff, also 2022-01-24 22:29:31 @mtrc it's a pretty crazy stat - 6k a100s = world's 5th most powerful supercomputer https://t.co/I5DpLShSZd 2022-01-24 22:24:52 @mtrc it has 6000 A100s split across 760 DGX servers 2022-01-24 19:49:52 @AndrewLBeam @mark_riedl and do you buy physical cards, or do you rent cards from CSPs? What do these tradeoffs look like? (Very helpful answer already, thank you!) 2022-01-24 19:36:56 @risi1979 @ITUkbh What stands between you and having more compute resources, and what could help? I also care hugely about this! 2022-01-24 19:36:35 @mark_riedl What kind of paths do you have to secure more compute, and why is it hard for you to get more compute? I'm perpetually fascinated by how solid researchers at universities are struggling to access compute. Why does this happen? 2022-01-24 19:13:41 @GaryMarcus @FLIxrisk thanks for signal-boosting this, Gary! 2022-01-24 19:08:23 @hodanomaar yup! dm'd you also 2022-01-24 18:52:41 @hodanomaar Good post, I think also worth putting in perspective against what private sector doing. E.g https://t.co/I5DpLShSZd 2022-01-24 18:48:14 RT @WalterReade: @jackclarkSF I was speaking with an atmospheric physicist this past Friday and he was expressing the difficulty his team w… 2022-01-24 18:41:41 @WalterReade You'd think so, but it's having a hard time doing this, and it's also struggled to make its stuff usable (e.g, using Summit supercomputer is a lot more challenging than using a CSP). 
There's a NAIRR taskforce trying to work through these problems though, so maybe things will turn 2022-01-24 18:33:56 One of the reasons I massively care about building public AI infrastructure like the NAIRR is that otherwise the private sector is going to be able to 'out-think' the public sector/commons by virtue of having bigger and better computers. We're sleepwalking into end of democracy. 2022-01-24 18:33:04 For perspective, at ~6,000 A100s, Facebook's newly announced AI cluster is on par with Perlmutter, the world's fifth fastest supercomputer (~6,000 A100s). Ultimately, FB is going to scale to ~16,000 A100s. https://t.co/gXiZZhURq3 2022-01-24 15:55:05 I'll be in Washington DC this week, if anyone wants to chat about this role IRL. DM me! 2022-01-24 15:54:36 We're hiring @AnthropicAI for a Head of External Affairs - help translate research into practical recommendations, and work with us to contribute to various aspects of AI policy (e.g, measurement). https://t.co/2G62pbDLTJ 2022-01-24 02:56:00 @twitskeptic But, per your point, since the cycle times in bio are quite long, there might be evidence in the future that this stuff doesn't work/isn't helpful. I feel like it'll be easier to analyze at the end of 2022 for positive/negative evidence of utility. Personally, I'm optimistic :) 2022-01-24 02:55:14 @twitskeptic It's appropriate to be skeptical of new tech, especially when hyped (like AI). From my POV, there are very encouraging signs from a bunch of places AlphaFold has utility. E.g, here's a nice paper about using alphafold to enrich understanding of Omicron https://t.co/BjVzFV1sQh 2022-01-23 20:57:00 One of the dreams of AI researchers is to use AI to accelerate science broadly - and it's happening! Very inspiring. https://t.co/09Ahf7AGHj 2022-01-23 19:04:08 @mcwm congratulations! one thing you might want to invest in is some cycling pants. in my experience, the more it looks like a ridiculous monkey butt, the better it'll be. I did a few 4-5 hours rides a couple of years ago and these were massively helpful https://t.co/oJFvueFCsW 2022-01-23 00:57:31 @lowtheband Point of Disgust 2022-01-22 23:48:58 @iainthomson What's the occasion meriting a doll purchase? 2022-01-22 07:35:04 @happybandits Dude, don't we all. 2022-01-22 07:18:42 @happybandits Don't we all, brother. Don't we all. 2022-01-22 07:14:51 Been crashing on some work stuff last couple of weeks, which means I've been texting my friends increasingly deranged things while working. One example, here. https://t.co/jdttwGcRjI 2022-01-21 23:25:25 @erocdrahs Paging @foie 2022-01-19 22:48:20 @machinaut yeah, I do have an intuition that something like bipeds for doing security patrols could end up being valuable, but on the other hand the economics of fixed cameras + drone patrols means that it might be simpler to do latter 2022-01-19 22:42:59 @machinaut I do think you're right in that there will be way more robots with wheels than anything else because, as you note, wheels are good 2022-01-19 22:42:41 @machinaut I kind of disagree. I'll add to the graph above, but we have quite a few examples now of being able to get IRL flexible navigation/movement for drones, and are also getting RL drone stuff. I'm a bit more bullish on robotics lately because of all the traction for robot cos 2022-01-19 22:39:00 @machinaut I agree. I expect in ten years we'll actually have real world quadrupeds (and maybe bipeds) that do extraordinarily capable things 2022-01-19 22:30:47 @mattbeane Thanks! 
Yeah, I think one of the best uses of twitter is as a deliberately casual channel for seeking truth. Too many people on here will refuse to ever admit they're wrong and hence don't seem like want a conversation. Dialog is about reflection/realization! 2022-01-19 22:24:52 (since getting some attention) - AlphaFold isn't RL. Explanation here. Thanks! https://t.co/NvI1QrMr8Y 2022-01-19 22:11:35 @natolambert yeah, I was being a dummy, you're right https://t.co/NvI1QrMr8Y 2022-01-19 22:10:11 RT @AndrewLBeam: @jackclarkSF I'm not sure I know onyone who considers AF2 to be RL or even SSL. They train the model to predict structure… 2022-01-19 22:09:30 @AndrewLBeam yeah, I think you're right. I read through the paper quickly and came to same conclusion. Then I made a silly meme using a proper RL example https://t.co/zaqwPR21N1 . Thanks for chiming in, very helpful! 2022-01-19 22:04:08 every time I make one of these graphs, I get a bit freaked out, because if you look across RL / computer vision / language models, etc, you see these crazy capability expansions over time 2022-01-19 22:03:32 RL progress: 2013: Pong 2015: AlphaGo 2019: AlphaStar / Dota 2020: AlphaDogFight https://t.co/0sEY6vmHa6 2022-01-19 21:52:13 Would it be fair to say AlphaFold represents a victory of reinforcement learning in the real world? AlphaFold uses a ton of RL loops to work, though I'm wondering if it's fairer to say it's more like an advance in semi-supervised learning. Thoughts? For a presentation. 2022-01-19 19:18:58 @mattsheehan88 thanks for writing it, I found it extremely helpful! 2022-01-19 19:18:49 RT @mattsheehan88: Very happy to see my recent piece featured in @jackclarkSF's excellent Import AI newsletter, the best digest of what's g… 2022-01-18 20:39:52 @floragraham @beckkubrick 25,000 dollars, not pounds. I myself would have loved to earn 25k pounds, but alas, I instead worked in SEO journalism 2022-01-18 20:38:30 @AlexBNewhouse Show examples! 2022-01-18 18:39:34 RT @tyhan1: @jackclarkSF @advadnoun @minimaxir https://t.co/RDD4EGePaX and https://t.co/xWMK25ogVs can be very satisfying in different ways… 2022-01-18 18:39:29 RT @Nearcyan: @jackclarkSF @advadnoun @minimaxir a few good ones: https://t.co/Hau5eMymy3 https://t.co/kjct9xCo8Q https://t.co/fkv3MXa1j… 2022-01-18 18:39:25 RT @advadnoun: @jackclarkSF @minimaxir @RiversHaveWings @ai_curio https://t.co/YIdqvOejnR (RHW's diffusion, @minimaxir's link may be newer?… 2022-01-18 18:38:02 @SashaMTL @advadnoun @minimaxir oh cool! yeah I should've thought of @huggingface . Will take a look 2022-01-18 18:36:19 @tyhan1 @advadnoun @minimaxir great links, thanks. wasn't aware of accomplice 2022-01-18 18:36:02 @Nearcyan @advadnoun @minimaxir thanks for the links, will check out. Yeah, for presentations I try and generate stuff when I can, partially because I like to talk about the trend for increasingly usable/accessible AI stuff, and/or DIY AI 2022-01-18 18:35:27 @minimaxir @advadnoun @RiversHaveWings great pointer, thanks 2022-01-18 18:35:20 @advadnoun @minimaxir @RiversHaveWings @ai_curio extremely helpful, thanks for sharing! 2022-01-18 18:25:11 I'm making a presentation about recent trends in AI dev, and I want to show how crazy-good text-guided image generation has got. Does anyone have any preferred Colab notebooks for this purpose? cc @advadnoun @minimaxir 2022-01-18 03:47:42 @beckkubrick I was a journalist living in a cheap flat in South London earning $25k a yr, trying to find any excuse/opportunity to write about science/tech. 
Now I live in Bay Area and work at an AI research company. My main regrets from 20s are missing social stuff for work that didn't matter 2022-01-17 08:11:00 CAFIAC FIX 2022-01-11 21:57:56 @jsotterbach *use 2022-01-11 21:57:42 @jsotterbach BigScience is a project that gives me a lot of hope. If they succeed in training some ~200b models, then I think could be a template for us by other governments. Fingers crossed! 2022-01-11 08:11:00 CAFIAC FIX 2022-01-06 04:17:11 @jeremyjkun Note I think this is meaningfully different in China, where they expend non-trivial national compute towards various (mostly surveillance and some science) ML experiments. 2022-01-06 04:15:57 @jeremyjkun It seems to me like supercomputers are used mostly for (in descending order of size) nukes, weather, then materiels science/chem/bio 2022-01-06 04:15:14 @jeremyjkun I'd love to read a comparative analysis here (doubt it exists, just saying it'd be useful) 2022-01-06 04:01:35 @rajiinio @mer__edith (specifically talking about the 200B+ parameter ERNIE 3.0 model here) 2022-01-06 04:00:26 @rajiinio @mer__edith On the other hand, why would Google give access to models? Lots of these models are being built and not exposed (e.g BAIDU cutting-edhe Ernie models support Baidu search and newsfeed, but aren't available to any researchers) 2022-01-06 03:59:16 @rajiinio @mer__edith your latter point re whether there are enough researchers is def open q. My hacky view is 'build it and see' - if there is interest, great. If not, sub-slice the big computer to support a variety of diff projects, and use the diversity to learn to build flexible research infra. 2022-01-06 03:57:26 @rajiinio @mer__edith Not claiming one class of model is superior, more that various large models ranging from text (gptx) to text-image (clipx) all seem pretty interesting/worthy of broader study. 2022-01-06 03:56:40 @rajiinio @mer__edith Yeah, great to drill into this. So, I definitely think that large AI models merit study - they have quantitatively and qualitatively diff behaviors to many smaller models, so are worth studying by broader set of people than just private sector. 1/2 2022-01-06 03:51:28 @rajiinio @mer__edith Wrt common infra - seems if you give researchers more funding, it's less likely you get great training infrastructure than if you fund a group responsible for creating usable infra for all. (Eg, doesn't make sense to staff 50 on infra for one lab, but might for 100 labs?) 2022-01-06 03:49:09 @rajiinio @mer__edith I guess something interesting here would be studying why academic frameworks died - used to be that people used Theano (bengio lab iirc) and Lasagna (schmidhuber) and other stuff, then switched to the more well funded corp frameworks like TF and PyTorch 2022-01-06 03:47:26 @rajiinio @mer__edith *permutations. Sorry for typo 2022-01-06 03:36:13 @rajiinio @mer__edith Specific statement I'm making is that infrastructure to facilitate exploration, replication, and evaluation of large-scale AI models is important. Industry seems to do well here, but academia doesn't seem to be successful at getting this infra and making it usable 2022-01-06 03:34:32 @rajiinio @mer__edith Yeah, this is an important point. What seems useful to me = equip researchers with a significant amount of computational infrastructure, engineers to build tooling and distributed systems, and then give people the opportunity to explore permissions of gptx (for example) models. 2022-01-05 03:03:37 @jachiam0 Excellent answer. Haven't read that story - will check it out. 
Thanks Josh! 2022-01-05 02:50:50 @jachiam0 What's something you'd prefer not to be true, but is true? 2022-01-04 19:35:24 @celinehalioua Free healthcare and cheaper housing 2022-01-04 18:49:59 @garibarba @mer__edith Ah, to be specific - most academic institutions I'm aware of aren't able to do the experiments that cost millions in the private sector, so here I think of large-scale experimental infrastructure as a cluster that can do some multimillion training runs 2022-01-04 15:48:46 RT @BlindIodine: @jackclarkSF Not aware of any paper on this subject, but based on my past experience in academia I would say: There is no… 2022-01-04 02:46:57 @jonmsterling Shards of Earth by @aptshadow There is no anti-mimetics division by @qntm 2022-01-03 20:31:57 @buzz @mer__edith @samfbiddle hahah, yes! Back when I was a journalist I covered a Markram talk where he said brain simulation possible by 2023 (not happening) https://t.co/W9wvxkf5lA (Note: This isn't a v opinionated article on my part. I guess in hindsight it feels like his exaflop estimate was off) 2022-01-03 20:30:26 RT @moreisdifferent: @jackclarkSF As others have noted, it's v field dependent. Physicists and astronomers have been incredibly successful… 2022-01-03 20:30:01 @samfbiddle @mer__edith This is a great point re physics, and I imagine some % of funding for physics/materials science these days comes from the fact tech like hypersonics is being actively weaponized. 2022-01-03 20:23:42 @moreisdifferent Yeah, this is the mystery I'm really interested in trying to understand better. If you're aware of any papers/books that talk about how physics does this so successfully, would love to read 2022-01-03 20:21:02 @Ben_Reinhardt @jamesheathers splendid, I shall read! 2022-01-03 20:20:34 @mer__edith and some of the reason we don't have as many of the positive edge cases are that academia doesn't have the engineering resources to explore them, and industry doesn't because doesn't align to incentives/money, etc. 2022-01-03 20:19:55 @mer__edith Yeah - just to be glib to try and explain thinking, it would seem bad if you had big academic infrastructure and all you got as an output was, like, a massive facial recognition model. On the other hand, I feel like the number of beneficial edge cases may be larger than we think 2022-01-03 20:18:29 @mer__edith gotcha, thanks for clarifying. 2022-01-03 20:17:24 @Ben_Reinhardt @jamesheathers Thanks! I had read the PARPA proposal (is that same as FRO), but will read 2022-01-03 20:16:44 @mer__edith Another example (though maybe you count as surveillance) would be using large amounts of publicly available satellite data to build giant earth-sensing models for things like forestry analysis and climate analysis. That benefits from compute& 2022-01-03 20:15:40 @mer__edith Well, not necessarily - you could just run a massive robotics experiment using synthetic data derived from MuJoCo, for instance. There are chunks of AI that benefit from large-scale compute and which can use synthetic data, or simulated data. 2022-01-03 20:14:34 @mer__edith @robreich One issue here is supercomputers tend to be a lot less usable than clouds. That's not a reason to try! In fact, I think gov should increase funding for engineers to increase usability of HPC infra. But it does make it a more challenging proposition. 
2022-01-03 20:13:56 @mer__edith @robreich For example, you can get access to the Summit supercomputer via a grant application process for big experiments: https://t.co/dtqQN4WVkG Does that seem better than using a cloud provided by one of the big tech companies (e.g., Amazon/Google/Microsoft)?
2022-01-03 20:13:03 @mer__edith @robreich I understand that point. So, would you be happy in a hypothetical where large swathes of DOE supercomputing were repurposed for academic CS/ML infrastructure, or does a good world look different to that?
2022-01-03 20:11:21 @marypcbuk I've met academics in physics who have access to massively expensive experimental infrastructure (w/ time-sharing) that isn't really available in CS/ML.
2022-01-03 20:09:52 @mer__edith So I'm mostly curious why CS/ML in academia doesn't have equivalent large-scale experimental infrastructure to physics and astronomy and bio.
2022-01-03 20:09:16 @mer__edith It feels instructive to look at other fields - mixtures of govs/academia/private sector have worked out how to fund big experimental infrastructure in physics (LHC: ~$5bn, ITER: ~$10-20bn, LBNF and DUNE: ~$2.5bn), astronomy (Square Kilometer Array: $1bn), space, etc.
2022-01-03 20:06:47 RT @MPSellitto: @jackclarkSF Haven’t seen a paper, but I think the funding model explains a lot Most research is funded by grants that ind…
2022-01-03 18:57:12 @bobvanluijt why do you think this is the case?
2022-01-03 18:55:30 Is there a good paper that explains how/why academia tends to under-invest in engineering infrastructure?
2022-01-02 00:06:26 @YungCoconut The next Shakespeare will be trapped inside an NFT
2022-01-01 22:43:16 @tabithagold congratulations, tabitha!
2022-01-01 22:27:30 RIP the Dildo Factory, RIP the Octopus Literary Saloon, RIP the Hole, RIP Ara Jo. Everything is temporary - don't be afraid to live.
2022-01-01 22:27:01 If you've spent any time in the East Bay DIY music scene in the last few years, then I guarantee you you'll find places you visited and people you know in this wonderful photo collection by a local documentarian https://t.co/DPQ1RQSIb9 https://t.co/IeVltHG5CQ
2021-12-31 05:16:43 @AmandaAskell A few people in family/friends I knew died when I was a teen/young adult, so that made me try to be more generous/kind/charitable in a bunch of ways. Additionally, getting older seems to have made me kinder - I think b/c experience++ helps u have more empathy in general
2021-12-31 00:22:20 @YungCoconut A real one
2021-12-30 04:57:34 @iainthomson Very nice! I made a big chicken soup + stock from ours.
2021-12-29 23:59:49 @joelgrus Oh, San Antonio is real nice - I have a family member who lives there so have visited. Really nice river trail also. And I know what you mean re the tech stuff
2021-12-29 23:52:37 @joelgrus How is it?
2021-12-29 22:35:49 @exteriorpower It's harder to embezzle stuff that is automatically tracked at high levels of detail.
2021-12-29 22:22:17 @npparikh @exteriorpower This is also true! Everyone loves free cake
2021-12-29 22:15:27 @exteriorpower Central authorities tend to destroy the value of currencies they control through money printing or embezzlement
2021-12-29 17:56:02 @AmandaAskell Do you think sentience guarantees contemplation of some kind of afterlife?
2021-12-28 20:28:52 @negroprogrammer 'If you aren't making your old friends terrified of you, you aren't growing as a person'
2021-12-19 23:36:52 @RishiBommasani @mmitchell_ai @Ted_Underwood I had this takeaway as well - it felt like an attempt to talk to people who think LMs are dumb as a rock and update them. But I haven't asked the author so could be wrong
2021-12-18 08:41:26 @BuyHigh81277108 I don't know too much about other projects, but I'd assume they mostly use GPUs for underlying computation. Is this the case?
2021-12-17 19:12:28 It is quite remarkable how ETH mining ends up building clusters with enough GPUs you could train some really large models. (Per this tweet: https://t.co/UXUpxkLdXK ) [Obviously, the networking and other infra isn't quite right for AI training, but what a lot of GPUs!] https://t.co/LrDMwmH9FF
2021-12-17 06:13:50 @lfschiavo This is the way
2021-12-16 23:26:47 @itunpredictable @GiveDirectly
2021-12-16 22:12:17 @mims I did a conference earlier this year that asked for a PCR test within 72 hours of the conference beginning, and provided free rapid tests for everyone to use daily. I could imagine confs providing self-administered rapid tests and that becoming a norm
2021-12-15 22:39:16 @Derek_duPreez I think 'never ever' is one of the best pop songs of that decade and I'll fight anyone who disagrees with me.
2021-12-13 18:34:34 @roybahat I think I somewhat disagree here? Depends on definition of credentials, but in my POV in AI a lot of it is really 'do you have interesting github code' or 'have you done interesting research' (which can include independent stuff just published to arXiv).
2021-12-11 23:25:06 @benergetic It is. I can imagine it depends on what you view critical time periods as for various things.
2021-12-11 22:44:03 @geoffreyirving But I think you're making the claim that at the larger level it doesn't make as much of a diff - you can still buy/rent big clusters of chips networked together, and crypto hasn't massively dented this. Might change if crypto-specific chips start to contend w/ others at fab level
2021-12-11 22:42:46 @geoffreyirving I feel like they might, in some cases - e.g, if you're an AI researcher that wants to have a home GPU, then crypto has made it harder and more expensive for you to get started
2021-12-11 22:34:57 @sri_rad I also think compute is definitely finite over a sliding window of a year or two. E.g, TSMC takes orders 18 months in advance. Plus, fab expansion can take many years (half a decade for a cold start fab iirc), so compute is finite over single digit years
2021-12-11 22:34:01 @sri_rad Yeah, I think there's an argument for this! On the other hand, from my POV, crypto has cannibalized GPU markets in such a way that there's less science happening than could be, and more crypto happening (because can turn crypto into money easily)
2021-12-11 22:29:39 I'm not making a claim about the goodness or badness of bitcoin/crypto here. I am claiming that if computation is something that can be used to enhance economic and strategic competitiveness, then one can argue crypto might not be the best investment of a finite compute budget.
2021-12-11 22:28:42 My somewhat heterodox view is that if you think compute is a strategic resource, it's quite useful to get rivals to dump a growing % of their computational budget into inefficient calculations for something that cannibalizes your rivals' fiat financial system.
https://t.co/RuMkFsyUJh 2021-12-09 20:25:28 "I'm sorry Dave, I can't do that" is gonna be a problem with current alignment techniques. Many approaches involve constraining/steering models. This mostly works, but there will be edge cases where you want a model to do something and the priors you've given it prevent it. 2021-12-09 19:09:58 RT @jgreener64: 3. Twitter and pre-prints are legitimate venues to share and discuss science. Without these fast venues, we might only jus… 2021-12-09 00:36:45 RT @AmandaAskell: Although humans might superficially seem to understand language, they don't have the direct access to the noumena. Since… 2021-12-08 18:00:26 @tonypengcomms Where's the paper? Looks interesting. Been tracking ERNIE for a while! 2021-12-07 17:32:52 RT @AmandaAskell: It’s easy to say you want technology to respect local values when those values are unobjectionable. It’s harder when they… 2021-12-06 21:00:01 RT @sleepinyourhat: I just firmed up plans to spend my upcoming sabbatical year at @AnthropicAI in SF. Looking forward to burritos, figs, i… 2021-12-03 22:28:00 @n_miailhe @samuelmcurtis thanks 2021-12-03 22:10:31 @samuelmcurtis Where's the agenda? 2021-12-03 18:59:16 RT @catherineols: So I wanted to just point out that the prompt (written mostly by Jared and partly by me, and which we really didn't spend… 2021-12-03 16:43:42 RT @AnthropicAI: Our first AI alignment paper, focused on simple baselines and investigations: A General Language Assistant as a Laboratory… 2021-12-02 04:59:29 @caffeneko Haven't seen this 2021-12-02 04:54:33 @caffeneko *synthesize. 2021-12-02 04:54:11 @caffeneko Idk I read Sexual Personae like ten years ago and found it pretty interesting. I didn't/don't necessarily agree with it, but I found her thesis intriguing, and it's cool to see someone synthesis that across a ton of literature. She has fallen off a bit recently 2021-12-02 04:44:58 "Camille Paglia has gone from based to cringe" - a quote from a beer tonight that feels very Bay Area Humanities Lover. 2021-11-30 21:34:30 @SwissCognitive @Ronald_vanLoon @WhiteheartVic @etzioni @RoblemVR @demishassabis @ChristopherIsak Please don't highlight these spammy and misleading videos. It's not helpful. 2021-11-30 20:04:20 AI is going to create infinite, procedurally generated multi-modal games. It's going to be an incredibly exciting field! https://t.co/Tv6Vj2QDZr 2021-11-30 02:00:46 @MaharriT @chrisalbon I think graphing corpus sizes over time could be interesting, actually. This definitely helps in vision where you can track emergence of successively larger datasets over time, including with sub-specialisms like datasets for object rec, etc. good idea! 2021-11-30 00:23:53 @MaharriT @chrisalbon I'm not sure there's a specific point, but it's interesting to see the resource-intensity of a given factor in technology dev go up 330X in 2 years 2021-11-29 22:23:55 @iamtrask still shy of Facebooks 12 trillion parameters https://t.co/3LNyUpxavN 2021-11-29 22:09:48 2 years means frontier of dense generative models goes from 1.6billion parameters (Salesforce, following GPT-2), to 530billion parameters (Microsoft Megatron-Turing https://t.co/md03Qz8K90). 330X increase in model size, not too shabby. https://t.co/eQtGUoPLHa 2021-11-29 00:34:46 @micsolana That's by design 2021-11-27 00:15:13 @iainthomson Where's the bbq from? 
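A quick back-of-envelope check on the "330X increase in 2 years" figure a few tweets above, using only the numbers mentioned there (roughly 1.6B parameters for the Salesforce model that followed GPT-2, roughly 530B for Microsoft's Megatron-Turing about two years later); the implied doubling time is an illustrative extra calculation, not a claim made in the original thread:

```python
import math

# Numbers from the tweets above: ~1.6B parameters (Salesforce, post-GPT-2)
# to ~530B parameters (Microsoft Megatron-Turing) over roughly two years.
start_params, end_params, years = 1.6e9, 530e9, 2.0

growth = end_params / start_params                 # ~331x, i.e. the "330X" in the tweet
doubling_months = 12 * years / math.log2(growth)   # ~2.9 months per doubling, if growth were smooth

print(f"{growth:.0f}x growth; one doubling roughly every {doubling_months:.1f} months")
```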
2021-11-24 20:08:49 @charlotte_stix a daily experience for me : ) iirc you've seen me do it, though with mundane things like 'i'm getting coffee, now i need to go over here, oh look there's a person' etc
2021-11-24 19:55:55 @rrhoover Try and walk 10,000 miles around the beautiful parts of the earth.
2021-11-23 17:19:13 @ayirpelle thank you! I am going to make the most indulgent cheeseboard I can think of, and bring it to a friendsgiving :)
2021-11-23 17:11:40 I'm slightly vexed about this as there have been cool papers about things like bigger CLIP, new transformer variants, tons of wacky surveillance stuff, etc. But I think it's healthy to sometimes stop yourself from working, so you can recharge. : )
2021-11-23 17:10:54 Import AI will be taking thanksgiving week off. I'm posting this because I think newsletter writers can suffer from burnout due to always being on. I always try and take 2-3 weeks off from the newsletter each year, and I've decided this week is one of them. See y'all!
2021-11-23 16:01:55 RT @andy_l_jones: What it says on the tin: more than anything, we - Anthropic - really want more great engineers. https://t.co/hvECdsDzkZ
2021-11-22 20:15:59 @adventurared Kilgore Trout is technically Vonnegut's ghost, so your intuition is good
2021-11-22 20:04:26 Small Pulp Books forever https://t.co/1uSaWdtnmp
2021-11-22 19:47:06 Business idea of dubious viability - the Smol Bookshop. Just sells books that can fit in pockets of jeans and jackets. We live in an era of weighty tomes, but I think there's a lot of charm to little penguins, or almost-square half height ones, and so on.
2021-11-22 07:13:59 RT @hardmaru: Combined Scaling for Zero-shot Transfer Learning Another data point for Sutton's “Bitter Lesson”: more data, bigger model, b…
2021-11-19 04:18:43 @negroprogrammer @cdixon I still write a weekly newsletter called Import AI at https://t.co/lgRURdEUEm but tbh miss doing investigative ml reporting but can't due to obvious conflicts in my current roles
2021-11-14 01:56:34 My partner won her MMA fight in SF this weekend. It is really cool to watch someone work insanely hard at a discipline for years and achieve their (ultraviolence) dreams. https://t.co/53hEvpHpql
2021-11-12 18:56:18 What the collection infrastructure for a surveillance dataset looks like. Most important things wear mundane clothes. https://t.co/9FV9CI35lg
2021-11-10 04:28:59 @peloei Italian Disco is Love
2021-11-10 03:44:49 RT @AlexBlechman: Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale Tech Company: At long last, we have created…
2021-11-09 19:49:51 @himbodhisattva @nickcammarata Thank you so much for reading! I did actually think about @nickcammarata while writing this one : )
2021-11-09 19:49:24 RT @himbodhisattva: latest story by @jackclarkSF feels like something @nickcammarata would like https://t.co/SQbBeRPwhy
2021-11-08 16:49:14 RT @WriteArthur: Pretty spectacular to see this super technical paper on AI surveillance arguing that "governments and officials must take…
2021-10-13 01:21:28 @nolefp @ronbodkin (that was my bad, I replied thinking of the number used in the Inspur GPT3 system)
2021-10-12 21:47:07 @jquinonero Honestly, Post-It notes for stuff I'm doing that day, and gCal for things I'm planning ahead.
2021-10-12 18:48:10 @AGamick Very cool project(s) - good luck!
2021-10-12 18:44:54 @ParanoidAnalyst Maybe single digit millions? It depends on efficiency, how good the team is running the infra, the underlying hardware, etc. 2021-10-12 18:44:03 @ParanoidAnalyst A few hundred thousand to a million. Expensive, but once you have the model you can do a ton of stuff with it. Academia spends way more money on telescopes, acoustic chambers, gigantic wave pools, etc. 2021-10-12 18:07:40 @ludgerpaehler I haven't seen much evidence that HPC infrastructure is usable relative to cloud computing infrastructure. The software stack ends up being really important and clouds are all nicely optimized here. I'd love to be surprised, though. Wdyt about usability tradeoff? 2021-10-12 16:55:03 @russellwald Yes, other places that are building models of this scale include Russia (SBERbank), Korea (Naver Labs), China (Inspur, Huawei), and others that are probably happening but not (yet) public. 2021-10-12 16:09:16 @ronbodkin Good catch! Yes, I'm not quite sure either. 2021-10-12 16:04:32 @ronbodkin They use 2000 GPUs though idk if they disclose the type of chip (A100 or whatever). I suspect you can napkin-infer the time by looking at efficiency combined with the data tokens. Re price - think if you're in position to use this much compute you can likely negotiate price 2021-10-12 15:18:42 @marypcbuk Really interesting slides thanks for sharing! 2021-10-12 15:18:23 RT @marypcbuk: Inspur is one of those companies you've never heard of that have huge presence in cloud 2021-10-12 13:47:58 @Ozan__Caglayan It's expected but doesn't seem desirable. I'd like academia to have vastly greater computing resources, if only to provide a counterweight to industry, and to improve the ability for academia to define different paths with large compute projects. 2021-10-12 13:38:11 Here's the Inspur paper: https://t.co/A1w4TomNRr Writing up for this week's Import AI. Very notable that all the GPT3 replications have come from industry. Academia nowhere in sight (except in China, where BAAI has been training vast models like Wu Dao). 2021-10-12 13:36:58 @Jsevillamol Yes please add it! Here you go: https://t.co/A1w4TomNRr 2021-10-12 13:31:01 Inspur announcement was roughly same time (within a week) of Microsoft announcing its GPT3 successor, which weighs in at 530B parameters. https://t.co/Ftvo87RPKl 2021-10-12 13:29:48 Chinese company Inspur comes out with a 245B parameter language model. This follows other Chinese GPT3 equivalents, like Huawei's PanGu. Notable that one of the research contributions from Inspur is a system for massive internet data scraping and filtering. https://t.co/81q0U9uivv 2021-10-11 22:11:51 One way to work on power asymmetries in AI development is to broaden range of AI developers from industry to also include academia - Stanford has a job ad out for a very high-impact engineering role to support training of large-scale AI models. https://t.co/0P9vFSpn8u https://t.co/NW2WI6sCx8 2021-10-11 20:49:25 @marypcbuk Would be interested to read! 2021-10-11 20:41:12 One of the reasons I co-wrote this paper https://t.co/vljhYFXyGC was because it's clear AI development is becoming very resource intensive and generating a bunch of related issues (large datasets have problems, big compute creates asymmetries), so should make legible to govs 2021-10-11 20:38:53 @marypcbuk This trend is also why I wrote this paper https://t.co/vljhYFXyGC - one way to improve things is increase funding for the research teams that analyze this stuff. 
I generally think making it more legible to governments will help create a better policy environment. 2021-10-11 20:37:11 @marypcbuk No, but it's interesting to me that corporations like Microsoft are saying this. 2021-10-11 20:36:04 @marypcbuk I've read all these papers and I'm not claiming this stuff is good. I'm noting that the trends point in this direction, and coming up with alternatives is useful. 2021-10-11 20:31:10 @marypcbuk And similarly, just training on the library of Congress is challenging, as the library of Congress contains lots of data that people would find problematic. It feels like there will be datasets that encode different value systems. 2021-10-11 20:30:17 @marypcbuk I'm always eager to hear proposals. One thing ASR gets us is ability to mic up people and collect data. But what if people say problematic things? There's a chewy problem here which nets out to filtering out problematic stuff from data at scale of multiple libraries of Congress. 2021-10-11 20:25:33 @marypcbuk Yes, I'm just noting that commoncrawl is a subset of the internet, and the trends point to training on the internet as a whole. 2021-10-11 19:59:32 @Miles_Brundage Yeah, I mean largest GPT3-style model here. (Facebook has a 12 trillion recommender model, also!) 2021-10-11 19:45:14 @AydinGerek Yes, but wasn't a gpt3 style language model. E.g, Facebook has a recommender model which is 12 trillion parameters, but apples to oranges in terms of comparison. 2021-10-11 19:43:58 @marypcbuk I think these bias notes are pretty useful, though I find myself wondering what the end point is. A sizeable chunk of the internet will be 'problematic' to diff perspectives, and trends point to training on most of the internet, so I'm not sure where we end up. 2021-10-11 19:34:06 @tonypengcomms Do you have a link to the Inspur announcement? I was googling for it recently and couldn't find. Want to do a quick writeup for this week's Import AI. 2021-10-11 19:24:08 I should note that this is the largest public model in existence, in the sense of being publicly disclosed. 2021-10-11 15:19:26 Datasets here. It's going to be very interesting to watch the politics around datasets play out. Effectiveness of this data-centric approach suggests people are going to try and (eventually) bottle up all the text on the internet to feed their models. https://t.co/1GryZcYVMZ 2021-10-11 15:17:17 Microsoft trains a 530billion parameter GPT3-style language model. This is the largest LM in existence. (There's also the mysterious multi-modal 1.5trillion+ 'Wu Dao' MOE model but little known about it). Microsoft trains on 'The Pile' dataset. https://t.co/md03QzqlxA 2021-10-11 12:41:46 RT @lewisshepherd: “For a long time #AI had 2 big resources: data & 2021-10-09 22:29:00 @Tiago_Data thank you for reading! : ) 2021-10-07 17:54:43 RT @AmandaAskell: We're looking for an ops generalist for our human-AI interactions work at @AnthropicAI. I think this kind of work is goin… 2021-10-07 02:24:12 2021 vibe is military information being declassified by military nerds just trying to play their game. https://t.co/pFrE2z3Onz 2021-10-05 20:01:16 These roles could suit Masters and PhD students, independent researchers, and generally anyone who has a demonstrable interest in issues relating to the evaluation of AI systems, as well as measurement methodologies for analyzing complex systems. 
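Picking up the "napkin-infer the time" thread a few tweets above (2,000 GPUs, training efficiency, data tokens), here is a minimal sketch of that kind of estimate. Only the 245B parameter count and the 2,000-GPU figure come from the tweets; every other input is an assumption chosen for illustration (a GPT-3-style 300B-token corpus, A100-class peak throughput, ~35% utilization, a hypothetical cloud rate), so treat the outputs as order-of-magnitude only:

```python
# Rough sketch of the napkin math described above, not figures from any paper.
def training_napkin_math(params, tokens, n_gpus, peak_flops_per_gpu,
                         utilization, dollars_per_gpu_hour):
    total_flops = 6 * params * tokens                         # common ~6*N*D training-FLOPs approximation
    cluster_flops = n_gpus * peak_flops_per_gpu * utilization # sustained throughput of the whole cluster
    seconds = total_flops / cluster_flops
    gpu_hours = n_gpus * seconds / 3600
    return seconds / 86400, gpu_hours * dollars_per_gpu_hour  # (days, dollars)

days, dollars = training_napkin_math(
    params=245e9,               # model size from the tweet
    tokens=300e9,               # assumption: GPT-3-style token count
    n_gpus=2000,                # GPU count from the tweet
    peak_flops_per_gpu=312e12,  # assumption: A100-class BF16 peak
    utilization=0.35,           # assumption: realistic large-run efficiency
    dollars_per_gpu_hour=2.0,   # assumption: illustrative cloud rate
)
print(f"~{days:.0f} days on 2,000 GPUs, ~${dollars/1e6:.1f}M")
```

With these assumptions the run lands at roughly three weeks and low single-digit millions of dollars, which is consistent with the "single digit millions" ballpark given in the replies above.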
2021-10-04 17:59:57 One of the things I'm most excited about is working closely with successful candidates to look across the literature in AI ethics and AI alignment and figure out metrics that have emerged, then assess usage and synthesize. I'm confident we'll find some interesting stuff : ) 2021-10-04 17:59:05 Hiring for people to help synthesize metrics around AI ethics and AI alignment, as well as hiring for people working on our 'Global AI Vibrancy Tool' index. These roles will yield public outputs that improve the information in the index and which therefore will impact policy. https://t.co/whlJaBIPvQ 2021-10-03 05:32:07 @martin_casado Shards of Earth 2021-09-27 23:24:12 @jjvincent this mouse golfs 2021-09-23 16:01:56 @girl_hermes please write a scene in which people have a picnic on the ham 2021-09-22 11:58:22 RT @S_OhEigeartaigh: From the UK Government's just released National AI strategy: "AI risk, safety, and long-term development The governme… 2021-09-15 15:33:37 RT @pstAsiatech: H/T @jackclarkSF Ant Financial, a subsidiary of Chinese tech giant Alibaba, has written a fun paper about how it uses co… 2021-09-15 11:07:54 @shirlmeow diy music shows - if you go to real indie places you'll tend to have shows where it's just a bunch of musicians swapping instruments and forming new ad-hoc bands on the fly. ditto with certain types of art shows. 2021-09-15 07:56:26 @MaximilianKien2 You should chat to @AmandaAskell 2021-09-14 19:15:06 RT @ilparone: Plato's cave in the age of AI: "We present a passive non-line-of-sight method that infers the number of people or activity of… 2021-09-12 13:15:34 @ShannonVallor @Harkaway it is an extremely good book! sequel is good, also. By coincidence I'm also traveling and am reading 'shards of earth' which is great so far also 2021-09-11 17:36:33 @jmugan @GaryMarcus Isn't ERNIE 3.0 an example of this? They attach a big LM to a knowledge base. https://t.co/A7oVbzRuM6 2021-09-10 16:23:02 I look forward to working with the other distinguished board members, and also old colleagues in the AI world like @adrian_weller and @ruchowdh . This also means, COVID-permitting, I'll be spending a bit more time in the UK and trying to engage more broadly in UK policy. https://t.co/a0gWUJkkCz 2021-09-10 16:21:30 I'm on the new advisory board for @CDEIUK. I'll be using my expertise in evaluating, measuring, assessing AI to help CDEI do useful and impactful things. Thrilled to work alongside a star advisory team and with @felicityburch et al to do this! https://t.co/jgQ7FuR8Ut 2021-09-09 16:59:46 @jdunnmon awesome, had been wondering what was up with xview. really nice extension of it! 2021-09-09 16:59:34 RT @MPSellitto: Great opportunity to participate in a competition to develop solutions to identify illegal fishing with ML - $150k in prize… 2021-09-06 18:31:00 @sd_marlow As I write in the paper, the goal is to develop government capacity to do this itself, not rely on external parties. There's a good history of government developing own capacity to do highly complex tests, and it'll allow us to build a measurement capacity in the public sector. 2021-09-06 16:52:20 @sd_marlow so it's quite a neat illustration of how industry can deploy something, a third-party can analyze/measure it, then results can inform capacity development in gov. A lot of what I'm advocating for is greater investment in third-parties to analyze/measure systems. 2021-09-06 16:51:43 @sd_marlow I think industry is commonly meant as private sector (or at least, that's what I meant). 
For facial recognition, it was academic researchers testing commercially deployed systems, then those results subsequently informed gov (specifically, NIST's FRVT eval). 2021-09-06 16:41:45 @sd_marlow it's not industry regulating itself, though. The face recognition stuff was found through third-party audits by academics, which led to changes as a consequence of public outcry and policy pushback. It only happened because an external actor did some measurement. 2021-09-06 16:40:58 @DrCuff haha, you're not wrong! 2021-09-06 16:29:17 @sd_marlow @jesswhittles Yes, this is similar to weather - at best, we can keep a running list of trends informed by lots of sensory equipment at various resolutions. But I think we'll see similar benefits to weather monitoring - ability to make short-horizon predictions and observe longterm patterns 2021-09-06 16:23:36 @sd_marlow I'd somewhat disagree here - public failures of ML systems for computer vision on fairness grounds have directly influenced subsequent development of datasets, eval techniques, and audit approaches. Logging failures has a meaningful influence on ecosystem development. 2021-09-06 16:15:43 @QueerZoomer ngl, I almost started crying at this gig (happy tears, but also reflecting on how important music is to me, and how much I felt the lack of it during the height of wave 1 covid). 2021-09-06 15:52:50 @JMateosGarcia @jesswhittles thanks for the thoughtful writeup! 2021-09-06 15:52:43 RT @JMateosGarcia: "Why and how governments should monitor AI development" Or: let's create datasets & 2021-09-06 04:41:47 It is so precious to listen to music played by friends, among friends, in backyards with worn amps and small fires. #VOTEDIY2021 https://t.co/VArHd31n8n 2021-09-04 16:22:56 May all your weekends be as great as this squirrel's. #VOTENATURE2021 https://t.co/QBsdtR81Lj 2021-09-03 05:34:25 Went to a gig tonight and when I heard the first feedback screech I started smiling and haven't stopped. Music is a community and tonight I felt what we lost the past 1.5 years and had a gutfeel that it'll be okay. If Toner tour, see them https://t.co/2ZzD3FFVDM https://t.co/KKDcjlfBQG 2021-09-02 22:16:31 @deepfates had this take earlier. Good thread here: https://t.co/i49ASWjfmQ 2021-09-02 22:15:56 @deepfates oh great! hadn't seen this. I agree strongly! 2021-09-02 22:08:40 Colabs are becoming the Zine for AI research - people photocopying eachother's stuff, sharing it around, developing fondness for colabs with a certain style (even though they all use similar ink, aka GPT and CLIP models). 2021-09-02 01:58:14 @lcastricato @BlancheMinerva @rajiinio @ericjang11 @ChrisGPotts @mark_riedl @beenwrekt @Miles_Brundage Agree! That's exactly why I wrote this paper: https://t.co/DOnsr8JjYH 2021-09-02 01:40:25 @BlancheMinerva @rajiinio @ericjang11 @ChrisGPotts @mark_riedl @beenwrekt @Miles_Brundage Can you say more? Would be a useful example! 2021-09-02 01:38:22 @rajiinio @ericjang11 @ChrisGPotts @mark_riedl @beenwrekt @Miles_Brundage One thing I worry about is when a well-resourced actor starts selectively banning model access due to academics/others doing critiques/analyses it doesn't approve of. I think access programs are necessary but not sufficient in this respect. Curious what you think re that? 2021-09-02 00:06:48 @ericjang11 Curious what @ChrisGPotts @mark_riedl @rajiinio might say 2021-09-01 20:55:21 RT @LMSacasas: Deep fake excuses. 
From @jackclarkSF's weekly AI newsletter: https://t.co/rAq2RHtPvJ https://t.co/c22EwnTBxd 2021-09-01 18:54:23 @IMordatch @jesswhittles One hope with this project is it creates more of an 'attractor' system for development of widely-shared and studied benchmarks. I think it might, if done well, create better incentives for researchers to push for big shared benchmarks 2021-09-01 18:53:43 @IMordatch @jesswhittles You also definitely want to do analysis of conferences - monitoring trends in publishing areas at diff confs can give a sense of how field is changing over time. Also worth doing analysis of who is driving results - e.g, some countries may have specific core areas of expertise 2021-09-01 18:53:00 @IMordatch @jesswhittles Excellent question! I think that widely-studied and competitive benchmarks for relatively general capabilities (e.g, image rec/object detection: ImageNet, speech rec/detection: SwitchBoard) are the right level of granularity. Individual papers feel a little hard to analyze 2021-08-31 16:44:32 @automataetc find the DIY music scene wherever you end up, and that'll help you make friends. 2021-08-31 15:27:45 @liminal_warmth thanks for the great questions - I can understand the skepticism and I hope we can figure out a way to get some good stuff done. 2021-08-31 15:23:53 @liminal_warmth So to specifically answer your question, I think my ideal world is where we 10X measurement capacity and a big chunk of that will be in non gov entities. But gov will need some capacity to have leverage. Wrote an earlier paper called Regulatory Markets https://t.co/UZMcx6aGnK 2021-08-31 15:22:46 @liminal_warmth Oh, we also suggest this in the paper. You definitely need bits of both, and private sector can do a lot. But one core position is if gov entirely outsources this, it makes gov vulnerable to wireheading/capture by these entities, so you need gov to build some own capacity 2021-08-31 15:21:33 @liminal_warmth (NIST = FRVT https://t.co/SaHUjePGdd ) 2021-08-31 15:20:52 @liminal_warmth I also think there are cases where gov has done useful stuff as a consequence of measuring/monitoring aspects of research - DARPA work on self-driving cars and robots is good here, as is NIST work developing very rich facial rec evals (including standardizing bias assessments) 2021-08-31 15:19:09 @liminal_warmth I'd argue some of that is because the technology is illegible to government, so a lot of what we get is policy made by private sector using information government can't itself validate or check. 2021-08-31 15:18:21 @liminal_warmth My ideal outcome is treating tech as something akin to weather, where we have a lot of public infrastructure producing legible information about patterns and changes, which can ultimately be used to get better at forecasting and predictions 2021-08-31 15:17:29 @liminal_warmth Government is already very interested in different bits of AI, but it lacks the tools to accurately measure systems, so things are being deployed that have harms government is already being asked to intervene on (eg fairness issues) but it lacks tools to diagnose 2021-08-31 15:02:14 @_TimOBrien A lot of this paper argues government needs to build own direct capacity to measure/monitor itself, rather than being entirely dependent on outsourcing. Part of why it's so critical gov invest in this is to develop its own capacity so it doesn't get fleeced. Agree! 2021-08-31 14:53:12 I'm going to spend the next few months trying to get various govs to implement measuring and monitoring schemes. 
I'd also be excited to debate this idea - do you massively disagree with it? If so, reach out and let me know! Let's get into it! 2021-08-31 14:53:11 The longer the information disparities go on, the more the private sector is incentivized to produce information that steers governments towards the outcomes it'd like, and the more dependent government becomes on the private sector - that's a very fragile way to build a society https://t.co/wOuSEkw744 2021-08-31 14:53:09 Why's information about AI important? It tells us about where the technology is heading, where it has weaknesses, and where it may be deployed in the future. All of this is useful stuff for governments to know 2021-08-31 14:53:08 Our proposal is pretty simple - governments should monitor and measure the AI ecosystem, both in terms of research and in terms of deployment. Why? This creates information about AI progress (and failures) for the public 2021-08-31 14:53:07 AI is influencing the world and right now most of the actors that have power over AI are in the private sector. This is probably not optimal. Here's some research from @jesswhittles and I on how to change that. https://t.co/vljhYFXyGC 2021-08-30 18:25:47 @wolftivy Give it as much info about itself, the state of the world, and ask it to compute how the two might intersect. Ask it what should be done to maximize benefits and minimize chaos over various time periods. Give it a prior about making non-chaos-inducing interventions re climate 2021-08-30 01:24:54 RT @samsurinwelch: Nice @jackclarkSF slides, former @OpenAI policy head @azeem's TLDR: "fundamental [AI] models present power asymmetries… 2021-08-29 23:28:37 @MPSellitto How dare you 2021-08-28 20:11:01 @sleepinyourhat BABA GAHNOUSH FOOM. A delicious conundrum. 2021-08-27 19:14:57 @RiversHaveWings I think it's challenging if there are clear economic incentives for development (e.g, if scaling up a model improves a major internet search engine or recommender system) 2021-08-27 16:29:45 RT @AmandaAskell: This might improve as people from fields like linguistics, psychology, and philosophy get more involved in evaluating lan… 2021-08-24 16:20:31 @siddkaramcheti good luck! hope to be helpful as you scale-up. 2021-08-24 16:17:49 Looks like Stanford is trying to make it easier to train non-trivial models (though note here they're just releasing some GPT2-scale models). Will be curious to see how Mistral develops as Stanford seeks to train large-scale 'foundation models'. Good luck! https://t.co/hg3UT98xbG 2021-08-24 02:16:29 @boahen_k Thank you very much! 2021-08-23 23:56:54 @NotTriggerAtAll so when I say power asymmetries, I mean it quite literally - we have a small number of entities that are over-powered relative to entities meant to regulate (government) or critique/develop (academia) them. Historically, this doesn't tend to end well. 2021-08-23 23:55:21 @NotTriggerAtAll power asymmetries are usually the things that destroy civilizations. It's not a woke phrase. You end up with fragile systems if you have over dependence on a small number of actors. Currently, AI development has an over-dependence on a very small set of private sector actors 2021-08-23 23:49:01 @drfeifei thank you very much Fei-Fei. I found the conference interesting and hope to attend for a decent chunk of tomorrow, also. glad to contribute! 2021-08-23 19:39:55 @TaliaRinger @geoffreyirving yup! 
near the start here https://t.co/eGBUaFojce 2021-08-23 19:22:10 @geoffreyirving @TaliaRinger I used the term 'big models' in my presentation, because I think a lot of the issues come from scale and also resource intensity. Plus stuff like CLIP isn't strictly a pure language model (but it is quite big) 2021-08-23 19:09:03 @Amirmizroch Thanks, I had fun with that section : ) 2021-08-23 18:31:05 @yaganub ah, I think private sector should also do this, and governments should as well. My point was more this is a high-leverage way for academia to have impact on industry - industry will basically pick up most eval suites that exist because it helps characterize these things 2021-08-23 18:00:42 @BlancheMinerva @percyliang @BigscienceW percy also gave some specific, concrete shoutouts to both Eleuther and Hugging Face in his talk : ) 2021-08-23 17:49:07 @BlancheMinerva @MentalHealthAm that's one example among many, and some of it is inspired by watching projects like WoeBot and comments from people. But I think if I had a do-over I'd have written a longer list - good feedback 2021-08-23 17:43:38 @BlancheMinerva @percyliang @BigscienceW Added a note clarifying this explicitly in the slides I just shared, that's good feedback, thanks! https://t.co/C1hBOSVvh5 2021-08-23 17:41:39 @BlancheMinerva @percyliang @BigscienceW I did mention that there are other attempts to train these models, I was just noting public replications (hence highlighting Huawei and AI21). Not my intention to exclude. Will add in speaker notes there re eleuther 2021-08-23 17:39:53 Here are the slides for my talk at Stanford today: https://t.co/SaeJqa9hDN tl 2021-08-23 17:36:31 @avnish_ks thank you very much! 2021-08-23 17:34:03 @info_sprinkles awesome, that's what I was going for. Will publish slides in a bit 2021-08-23 17:33:23 RT @stanfordnlp: It’s not reasonable or realistic for academia and civil society to depend on big companies to give them access to #foundat… 2021-08-23 17:33:16 @ghchinoy thanks! 2021-08-23 17:32:41 @KatharinaKoern1 thank you! 2021-08-23 17:31:59 RT @Carmen_NgKaMan: Helpful examples & 2021-08-23 16:40:06 @mmitchell_ai it's really gross, I'm sorry that happened to you 2021-08-23 16:33:51 @mmitchell_ai that's abnormal and it's also illegal in some states (california requires you to tell people they're being recorded, for example) 2021-08-23 15:49:45 @alexhanna there will be a recording. Which probably means I should wear a shirt that doesn't have salsa on it! 2021-08-23 15:49:23 @mer__edith (Zoomed out enough, all AI development increases power of Intel and NVIDIA. I'm imagining you're thinking more about cloud service providers like Google/Amazon/Microsoft here, though). 2021-08-23 15:47:33 @mer__edith I'd be interested to explore architectures that don't do this - have you written about that, or know people who are? Would love to read! 2021-08-23 15:43:16 I'm giving a talk at the Stanford workshop on Foundation Models this morning. The main idea I'm going to be exploring is how to reduce power asymmetries in model development, interrogation, and deployment. Foundation Models feel like part of broader industrialization of AI. https://t.co/OkGMIXAFJ2 2021-08-23 14:28:38 RT @_joaogui1: We may not be in the darkest timeline, but we're surely in the most absurd one Image source: @jackclarkSF's ImportAI newslet… 2021-08-20 18:27:08 RT @RishiBommasani: @moinnadeem Scaling laws don't exist for every phenomena/metric (e.g. bias). 
I think the claim we make in our work is… 2021-08-19 17:52:03 @michael_nielsen Who should build the 5th largest accelerator? (This might be what the NAIRR needs to do, also). 2021-08-19 17:38:30 @michael_nielsen You could also make a physics/astrology argument - to come up with the ideas, you need experimental infrastructure. Really big models might be a necessary component for experimental infrastructure, similar to LHC, Hubble, etc 2021-08-19 17:37:58 @michael_nielsen I do think that being able to come up with new ideas about some of this stuff will require the ability to develop some of these models, otherwise idea formation is contingent on access to models developed by private sector actors 2021-08-19 01:58:20 RT @nickfrosst: Excited to share a @CohereAI preprint on detoxifying language models! We use a language model to find hateful text by cal… 2021-08-18 21:10:51 @EthanZ @kevinroose @BrandyZadrozny @BostonJoan Thanks for clarifying 2021-08-18 21:00:00 @EthanZ @kevinroose @BrandyZadrozny @BostonJoan good post - you may want to clarify in it that pushshift seems defunct (from a quick skim of the site https://t.co/8mnnsphVx1). 2021-08-18 20:54:56 The NAIRR is one piece of infrastructure that could help reduce power (and capital) asymmetries between private sector and other stakeholders (e.g, academia) in AI. Great opp to submit ideas/constructive criticism for what a public AI research resource might look like. https://t.co/tolzW1APUA 2021-08-18 18:36:33 @nathanbenaich @spinoutfyi these terms are so crazy, thank you for highlighting this stuff 2021-08-18 16:55:48 RT @jjvincent: completely forgot i'd written a short story about self driving cars failing to deal with edge scenarios back in 2018 for @ja… 2021-08-18 01:43:13 RT @gstsdn: New paper! https://t.co/4oiC9A9SuI We use big language models to synthesize computer programs, execute programs, solve math pro… 2021-08-17 22:18:03 @mer__edith @andrewthesmart yeah, the 'for who' and 'for what' are good qs. I'm going to integrate a bit of this into the talk. 2021-08-17 22:15:22 @mer__edith That's a good one, I'm going to try and think through some of this. (https://t.co/hyvR7e0mVG). Another lens I use is 'why does our system incentivize big models, and could we change the system', also. 2021-08-17 22:13:21 @mer__edith I think a frame of 'what would an alternative NAIRR look like' would be useful, if you have time (though doesn't need to be for RFI). Or proposing something which has more of the attributes you'd like (I also suspect some other gov will fund a counter to the NAIRR) 2021-08-17 22:07:55 @mer__edith There's the National AI Research Resource being developed - that's something that could enable access to significant infrastructure for other people. There's an RFI out now which is worth contributing to. I think your points would be read with interest! https://t.co/MAH0zDoYey 2021-08-17 22:03:07 @mer__edith Yeah, that's my understanding 2021-08-17 21:57:45 @mer__edith Eleuther compute stuff is mostly from Google's TensorFlow Research Cloud (TFRC) initiative. They also have an infrastructure partnership with Coreweave. 2021-08-17 21:34:09 @beckettws Thanks for the helpful link, will read! 2021-08-17 21:29:08 @beckettws Yes, it might be similar to social networks, or people that produce computer chips, etc. 
(I'm not claiming this lack of responsibility is desirable) 2021-08-17 21:24:59 @pstAsiatech Not really planning to talk about Anthropic, nor would I make any claims about extent to which were changing anything, (we're just getting started!). I will talk a bit from perspective of an actor that may train some of these models, and thoughts from that pov. 2021-08-17 21:22:08 @beckettws That's a helpful clarification. Are there exemplars for mutually beneficial relationships between single actors and the username of the internet? I think the challenge here is scale, which may change the problem 2021-08-17 21:12:42 @beckettws like, do you mean that if something trains on, say, arXiv, all the scientists who wrote papers that went onto arXiv should get given 0.00000001 cents. Or should it be a donation to arXiv? Or to arXiv host institution (which iirc is cornell). Or to the moderators of arxiv? 2021-08-17 21:11:45 @beckettws Maybe a harder version of this question is if you replace communities with 'all users of the internet' (as I think that's the direction this is trending). But depends on what abstraction you're using. 2021-08-17 20:38:58 @andrewthesmart Yeah, I think we should think about the system that incentivises these models. Personally, I feel like it's interesting to model out how to create different incentives, or to deal with some of the underlying power issues. 2021-08-17 20:25:21 @andrewthesmart There's also usually an economic aspect - big models are more efficient (on some dimensions) than smaller models. We've seen this show up especially in large-scale neural translation engines. So economics tends to push deployment of these, also 2021-08-17 20:24:27 @andrewthesmart Because they're more useful for a broader range of purposes than small models. I guess it's similar to why technology tends towards some resource intensive non-specialized components - why have general semiconductors, why have general algos, why have general platforms, etc 2021-08-17 19:03:38 @mmitchell_ai @mer__edith I think it's great you're attending and would also encourage you to publish your slides. I also think a conference on power and harms of resource-intensive AI models would be useful, also. Perhaps that could be a sequel to this 2021-08-17 19:01:05 @BlancheMinerva Yes, this feels like a point worth exploring more. Will see about integrating into talk 2021-08-17 18:46:51 @nickmvincent Like, distribution of gains wouldn't be quite so important if there was a larger set of actors able to create the gains in the first place, if you see what I mean? That might also lead to healthier dynamics wrt underlying data sources (though that's a fuzzy intuition on my part) 2021-08-17 18:45:45 @nickmvincent That's something I'll cover, but the larger intervention seems to be about distributing capabilities to train these systems. 
The bigger issue feels like academia not being able to train these models increases dependencies on others and means less focus on policy issues 2021-08-17 17:50:39 @MattRosoff I guess in a decade or so we'll look back on self-driving as the fusion power of our time (it was always ten years away) 2021-08-17 17:22:41 @aiexplorations I'm not sure, but I'm planning to publish my slides after the talk (and I tend to make pretty detailed/wordy slides, so should hopefully be useful) 2021-08-17 17:17:13 Some other things I'll cover: - Why we need to ensure academic institutions can build big models - Issues of measuring/analyzing foundation models - How 'model access' leads us into a world of private sector dependency - Role of govs wrt supporting analysis/deployment of models 2021-08-17 17:16:07 I'll be giving a talk next week re 'foundation models' (gpt3, etc). I'm going to discuss ideas around how foundation models alter the ecology of the internet and also present opportunities (and risks!) with regard to concentration of power in AI. What else should I discuss? https://t.co/8EX7aih7de 2021-08-14 23:47:49 GET POPULAR GET RICH GET THE SCI-TECH GET GONE - Octavia Butler. Notes on display @ the Oakland Museum Afrofuturism 'Mothership' exhibition. https://t.co/T988JYoCgX 2021-08-14 04:50:56 @random_walker Why is this? 2021-08-12 19:28:26 @the__dude98 yeah, exactly. People have been betting on robotics in this area for years and there's been a bunch of false-starts, so would be interesting if we've started to crack the economic viability side of things. Bring me the smart production lines so we can build the moon factory 2021-08-12 19:26:03 My general sense has been of a ton of pent-up energy in AI research which wants to 'land' in robotics, but need to figure out the right application and specialize in areas where contemporary methods work well (e.g, DL for vision). This article suggests might be happening. 2021-08-12 19:24:32 This story by @_KarenHao sheds some light on rollout of smart industrial robots augmented with contemporary AI techniques (mostly vision): https://t.co/xeblrE1dic One analyst says about 2,000 of these newer-gen systems have been deployed and area is scaling rapidly. 2021-08-11 20:26:41 @BlancheMinerva @scottlegrand @jamescham good point! 2021-08-11 20:19:35 @jekbradbury Maybe my position can be summarized as: asymmetries tend to be dangerous, and the compute asymmetry between industry on one side and gov/academia on other, feels quite dangerous. Will try to write more about this publicly as would be good to get more great feedback like this 2021-08-11 20:18:46 @jekbradbury That's true, though part of the point I'm making is that 'just' providing a commodity can lead to very significant issues (e.g, 'the prize' is a good history of oil, and it's amazing how much massive stuff in US politics got driven by power of commodity oil producers) 2021-08-11 20:03:19 @jekbradbury That's a good point. I found this comment a bit illuminating: https://t.co/VFYDoytcTb I also think it's generally true that there's not as much expertise in gov at the between 10 and 1000 GPU scale than is optimal. Though you're right that there may be some high-scale gov actors 2021-08-11 20:01:47 @jekbradbury Put another way: is it good for Google/Amazon/Microsoft to wield the fundamental computational power in the US, without being counterbalanced by other actors? Right now both gov and academia are in a deeply asymmetric dynamic here, which drives lots of pathologies. 
2021-08-11 20:01:08 @jekbradbury the growth of the oil companies led them to ultimately establish their own intelligence services to look for foreign oil deposits. This meant intelligence capabilities of private sector eclipsed gov. Gov massively increased intelligence investment in response due to critical gap 2021-08-11 20:00:25 @jekbradbury I guess I worry the government is disempowered, the greater the perception of it having a critical gap with industry. Whenever governments perceive a massive gap, extreme things occur. For example... 2021-08-11 19:41:59 @gnperdue @tamaybes That's really good feedback. Why do you think this is? It feels like having credits that back onto some cloud infra would be useful 2021-08-11 19:24:20 @tamaybes Top500 != Capacity to run big AI models. There's a whole bunch of software and expertise. Additionally, a surprisingly large amount of that capacity is pre-allocated to nuke work. 2021-08-11 19:19:56 @scottlegrand @jamescham I'm really worried about dynamics where industry hands models over to gov. I actually think it's a more stable environment if gov has more of its own direct development capabilities. This is mostly from POV that asymmetries can be dangerous and current situation is asymmetric 2021-08-11 19:18:17 @_jack_poulson I find your threads really valuable summaries of stuff, thank you for doing them 2021-08-11 19:17:49 @scottlegrand @jamescham Mostly, though from some things I've gleaned I think people might overestimate certain capabilities there. 2021-08-11 19:15:01 It's pretty crazy to me how little compute government is able to wield relative to industry. This is a bad asymmetry because it empowers industry actors relative to government and also prevents skill growth re compute utilization in government. https://t.co/j8Uw0qxer9 2021-08-11 05:29:25 @selenalarson @BelferCenter @lzxdc That's a good place with great impact, congrats 2021-08-10 10:11:04 @JMateosGarcia @S_OhEigeartaigh +1! 2021-08-09 22:04:51 @vermontgmg @EliSugarman @radleybalko Can confirm - the horror of the goose people is expressed to us during school, whereas Americans think it a mere bird. They are gravely mistaken. 2021-08-09 17:45:12 @MikeIsaac Eli's Mile High Club in Oakland does vax-only entry - some of the punks protested, but I was there this weekend and it had a lovely outdoor area that also felt reasonably safe, since everyone was vax'd. 2021-08-07 01:33:09 @colinmegill Very nice. I had an inkling feedback/amplifier were too close but oh well 2021-08-07 01:08:54 Extremely fun test! Kind of like asking an AI model to find least related features in a big space https://t.co/MCrfpLOKOA https://t.co/tkceWMVMjA 2021-08-06 01:28:19 @moskov @adammarx13 @alexrkonrad https://t.co/BRZm0m5pFo 2021-08-06 01:27:07 @moskov @adammarx13 @alexrkonrad You simply need to download it as a webpage thing where it downloads everything including images into a local folder. It's amazingly unintuitive. 2021-08-05 00:58:49 RT @indexingai: AI Index Steering Committee Co-chair @jackclarkSF will be one of the keynote speakers at @StanfordHAI's Workshop on Foundat… 2021-08-04 23:35:00 RT @ch402: Excited to be on the 80,000 hours podcast today. It's my first time on a podcast! I spoke with Rob about a variety of topics: s… 2021-08-04 19:37:24 @JMateosGarcia the only missing thing is an artificial moon with an external world interface 2021-08-03 16:42:55 @ChrisGPotts Really enjoyed this talk! 
I think in future @indexingai reports we'll note, as you point out, that saturating our existing benchmarks should make us more nervous/suspicious than joyful 2021-08-02 21:48:50 Wrote about this for Import AI last week (https://t.co/XG9fhYbsbd). Society needs to contend with the fact that AI systems learn to naturally discriminate about all kinds of things we legally don't like to discriminate about, and these discriminations can be inscrutable. https://t.co/yqOiGAd0mp https://t.co/BgVmqCNLZd 2021-08-02 20:06:35 @JesseDodge @Yuki_arase @percyliang @colinraffel @nlpnoah @royschwartzNLP @strubell @IGurevych Is there a way to submit thoughts asynchronously / contribute to some doc? One of the reasons I work on the national research cloud proposal at Stanford is it gives us a way to create a large compute resource to fund academic experimentation https://t.co/T9hqgbRvGa 2021-08-02 19:52:26 @mhbergen @thisisinsider @POLITICOEurope @axios Heaven is real 2021-08-02 15:25:57 There's an entire future unfolding in front of us on the preprint servers 2021-08-02 15:24:51 You'd think after ~5 years I'd tire of waking up each Monday to start work on the next issue of Import AI, but the opposite is true - I'm more thrilled and excited each time. Chinese OpenAI Gym variants! New 'Perceiver' stuff from DeepMind! Methods to track drones in the dark! 2021-08-02 15:22:41 RT @kevinroose: Old journalist excuse: my editor wrote the headline New journalist excuse: my recurrent neural network wrote the headline… 2021-08-02 15:22:33 @silasmorkgard @dylanbeattie thankyou! 2021-07-30 16:28:11 RT @tonypengcomms: Baidu sets new natural language understanding SOTA with ERNIE 3.0. "ERNIE is one of a few models developed primarily by… 2021-07-29 21:15:08 I'm sure if you showed this Capex chart to someone in 1999 they'd laugh you out of the room. Amazon? Bigger than Exxon?! Aramco?! Google bigger than Walmart? The Internet economy is eating physical reality. From @platformonomics by @charlesfitz https://t.co/jRhJc5FibZ https://t.co/d200z5cMVD 2021-07-28 22:04:54 @jordannovet To be fair, PyTorch has done really, really well. 2021-07-28 19:43:15 @hlntnr congratulations Helen, huge deal!!!! 2021-07-24 04:29:47 The kids are alright https://t.co/swCCvkTeKO https://t.co/t8i9gHQiFt 2021-07-23 14:47:08 @rsiilasmaa @MattiAksela Thanks! 2021-07-22 21:28:24 @davegershgorn nice one, excited to see what you do 2021-07-22 19:41:40 @rsiilasmaa I enjoyed your talk (and also sent a message through website contact form). Can you point us to any research papers that discuss these areas? Would be great to read in more detail. 2021-07-22 16:32:06 @gwern @demishassabis @DeepMind I'd read a short piece on why AlphaFold succeeded where Folding@Home didn't* *I don't have domain knowledge here so my take on Folding could be wildly off 2021-07-22 15:20:15 @demishassabis @DeepMind really amazing demonstration of how AI can help speed up scientists around the world. Excellent stuff 2021-07-22 15:19:43 RT @DeepMind: Today with @emblebi, we're launching the #AlphaFold Protein Structure Database, which offers the most complete and accurate p… 2021-07-22 03:46:18 @edzitron I find a weekly newsletter sometimes nudges me close to burnout so idk how you manage multiple posts a week. I really enjoy your writing, please take care of yourself 2021-07-21 22:26:47 @michael_nielsen Google has been doing some pretty interesting work on learning database stuff. 
There've been more recent papers but can't find right now https://t.co/au0TH8VrUE
2021-07-20 00:27:08 @MikeIsaac paging @TheStalwart for visible fry inflation
2021-07-17 23:21:07 @eliotpeper 100%
2021-07-16 03:07:59 @kevin2kelly Wrote a short story in 2019 about what happens when we create fully synthetic media personalities, then the IP protections expire and we get variants of synthetic actors. I think within five-to-eight years we'll see the first fully gen'd media personalities https://t.co/HFA6PIwKvj https://t.co/RqjDr0ViuU
2021-07-16 02:45:35 @Ted_Underwood https://t.co/O5SoELoI37
2021-07-15 19:37:36 RT @CSETGeorgetown: We’re very happy to announce that Dewey Murdick has been appointed the new director of CSET! Dewey has been with CSET…
2021-07-15 18:27:48 @sterlingcrispin this is an amazing piece
2021-07-15 16:49:37 @aneel Chrome Profiles helps with this a bunch. Been using recently and pleasantly surprised
2021-07-14 17:15:59 @lukeprog @paul_scharre @Miles_Brundage @AllanDafoe Agreed. It's the sort of thing I'd love for someone to spend a few months doing an analysis of. I am writing up some project plans at the moment regarding this, hope to share more soon for interested parties!
2021-07-14 15:52:12 @gwern @kevinroose Though there is a charming nested russian doll element to all of this, as well
2021-07-14 15:51:53 @gwern @kevinroose A good point - I believe we need to build more systems that can observe/measure the things happening on these large platforms. It feels like the private sector is developing a rich ecology of different software services and our ability to understand them is minimal
2021-07-14 14:37:21 Something implicit to this (excellent) story from @kevinroose is that we can't trust the private sector to reliably output trustworthy information about its own products - rather, external tools that monitor these platforms seem like a necessity for accountability. https://t.co/ypPdiFZTXI
2021-07-14 13:09:03 @paul_scharre @Miles_Brundage @AllanDafoe As in, how much is spent on this research in aggregate? Kind of hard to parse tbh. I might determine some compute cutoff, then count the number of papers with compute above that, then analyze costs of that compute on various public clouds, and also work out some discount prices
2021-07-13 13:32:02 @LindsayPGorman @WhiteHouse @WHOSTP @WHNSC Fantastic!
2021-07-13 13:27:46 Secretary of Commerce Gina Raimondo today emphasized how existential semiconductors are to her, her office, and America writ large. It's interesting to see the policy machine clank into action with regard to compute. (Also, I'm here in DC for the summit, say hello). https://t.co/GmVs2ETqU2
2021-07-11 21:00:07 @iainthomson Unacceptable
2021-07-11 07:54:48 @BlancheMinerva @deonteleologist @doctorow @maxgladstone Jorge Luis Borges for sure. Also M C Escher
2021-07-10 20:10:14 @swfong Haha, I might have forgotten, yes.
2021-07-10 17:58:18 @chr1sa marvelous blooper
2021-07-10 17:19:06 State machine for an autonomous fire extinguishing robot. https://t.co/4CipIQrmSz
2021-07-10 16:26:56 @s_m_i @moorehn I once went on a press trip to Amsterdam - the hotel room hadn't been paid for and I didn't have a credit card, so I had to use most of the euros I'd brought to give to the hotel for room + security deposit. Thus followed a very dicey and frugal couple of days
2021-07-10 05:01:10 @samuel_wade @yudhanjaya Thank you!
2021-07-10 04:57:12 @yudhanjaya What software is this?
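The compute-cutoff methodology sketched in the 2021-07-14 reply above (pick a FLOPs threshold, count the papers above it, then cost that compute out on public clouds with a discount) might look roughly like the sketch below. Every paper entry and every rate here is a placeholder assumption purely for illustration, not real data or a real measurement:

```python
# Hypothetical sketch of the aggregate-spend estimate described above.
A100_PEAK_FLOPS = 312e12    # assumption: A100-class BF16 peak
UTILIZATION = 0.35          # assumption: typical large-run efficiency
DOLLARS_PER_GPU_HOUR = 2.0  # assumption: list-ish public cloud rate
DISCOUNT = 0.5              # assumption: negotiated/committed-use discount

papers = [                  # (paper id, estimated training FLOPs) - placeholders only
    ("paper-A", 3e23),
    ("paper-B", 8e21),
    ("paper-C", 5e24),
]

def estimated_cost(flops):
    gpu_hours = flops / (A100_PEAK_FLOPS * UTILIZATION) / 3600
    return gpu_hours * DOLLARS_PER_GPU_HOUR * DISCOUNT

cutoff = 1e22  # compute cutoff: only count runs above this many training FLOPs
big_runs = [(name, estimated_cost(f)) for name, f in papers if f >= cutoff]
total = sum(cost for _, cost in big_runs)
print(f"{len(big_runs)} papers above cutoff, ~${total/1e6:.1f}M estimated aggregate spend")
```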
2021-07-10 01:58:45 RT @ruima: Hah the guy who wrote Three Body Problem joins an AI startup as a … SciFi Director: SenseTime announced that Mr. Liu Cixin, the… 2021-07-09 02:57:35 RT @indexingai: We are hiring a research associate who will work on data collection and analysis for our annual report & 2021-07-08 22:55:58 @ruchowdh That's earthquake numberwang! 2021-07-08 22:55:38 Contribute to citizen science by filling out this earthquake response form! https://t.co/ZqxlwDg6UN 2021-07-08 22:53:48 @Wordie Yeah felt like a good few seconds for me, one of the longer ones I've experienced here in several years 2021-07-08 22:52:08 California Earthquake Twitter checking in - just felt a decent amount of shaking in the East Bay. 2021-07-08 21:45:34 @RyanFedasiuk @RitaKonaev @realTinaHuang @jordanschnyc @_ainikki Cc @timhwang 2021-07-08 20:53:28 RT @josh_tobin_: Excited to share a bit about what @vmcheung and I have been working on this year! At Gantry, we build infrastructure to h… 2021-07-08 17:54:28 @_DaveSullivan haven't read that 2021-07-08 17:05:55 RT @XandaSchofield: @jackclarkSF Is it a bad take to suggest the published Legends of Dune series (Brian Herbert and Kevin J Anderson) as f… 2021-07-08 17:05:40 @XandaSchofield This is an ideal suggestion! I think fanfic might have been too narrow a definition on my part. Thank you! 2021-07-08 16:42:21 I've recently been getting into fan fiction for far future worlds - it's a fun way to tap into some of the weirder strains of scifi and how it portrays AI. To that end, has anyone read any good fan fictions about the Butlerian Jihad? Iirc Herbert doesn't really cover it 2021-07-08 03:06:42 @DavidKlion @coldbrewedtool Eastern Span! Haven't read it, but on the pile. https://t.co/InDjZs9RWe 2021-07-03 05:35:53 @YungCoconut Magic! 2021-07-02 22:14:50 https://t.co/tzBaGMCCoy https://t.co/6vx04nDzhr 2021-07-02 20:03:09 @antoniogm Put another way - one of the risks any emerging industry faces is policymaker 'surprise' (see how rapid rise of oil cos led to a lot of rapid and probably poorly scoped antitrust legislation in response). We need to invest in things that reduce the surprise downside. 2021-07-02 20:02:26 @antoniogm Like, a lot of trad policy stuff advocates pauses/moratoriums, etc. Doesn't work for fast-moving fields like AI, crypto, bio, etc. Instead we should build institutions that can observe and understand this activity, which will ultimately create better deployment environment. 2021-07-02 20:01:16 @antoniogm I'm of the 'floor it, and' persuasion, where 'and' = try and build some institutions adjacent to where the innovation is happening that can observe and translate the innovations to unlock large amounts of government funding/activity. Asymmetries tend to be dangerous in long run 2021-07-02 18:59:55 @Sam_L_Shead Great scoop here, Sam! 2021-07-02 18:58:22 This is also true for countries writ large as well as companies. 2020s are going to be about rise of 'chiplomacy' in international trade. Some promising stuff here like CHIPs act for semiconductor reshoring in US, but feels to me like more can be done. 2021-07-02 18:56:12 Fun to be in the reverse globalization phase, where the companies you outsourced your strategic manufacturing to have now become wealthy enough to start buying out your remaining domestic manufacturing capacity. https://t.co/slbP3jV05z 2021-07-02 00:17:53 @Aella_Girl Write a clear and succinct summary of the idea, theory of change, and funding requirements. 
Send it to the richest and/or most entrepreneurial people you know, get critical feedback and adjust, then send it to the combo of richest+likely to be interested people in yr network 2021-06-30 16:39:19 @BasicScienceSav Congratulations 2021-06-30 02:52:25 @robinsloan @davidtlang @vgr Work in public, but clearly have a secret unrevealed grand plan 2021-06-29 17:48:14 @Bandrew don't jinx it! 2021-06-29 00:37:19 @atroyn Don't forget Bankers! (Lots of UK elite colleges filter their ppe (politics, philosophy, economics) grads through to Goldman, etc). 2021-06-28 17:52:45 @MartijnRasser @CNASdc Congratulations, Martijn! 2021-06-27 16:31:51 @alex Plus 10. Loved these books 2021-06-26 19:33:19 @advadnoun Fwiw, feels like it's valuable for you to keep the Patreon - otherwise you're setting things up so your new employer implicitly has a say over what you do with your personal time. 2021-06-26 15:43:38 Right now, apps like Premise are mostly a way for the US to export gig worker models into emerging economies, then aggregate insights back. What happens when US citizens become gig workers for apps like Premise fielded by other countries? Rest of issue: https://t.co/fyTEHoq539 2021-06-26 15:41:37 'Choose your own Sensorium' - a short story I wrote for Import AI 254 about the experience of being a gig worker in the future in America, working for apps like Premise Data. The WSJ did some reporting this week on @premisedata about its use by intelligence... https://t.co/F3UfMl4177 https://t.co/Czrxq2S1mM 2021-06-25 04:55:52 @vgr I think of these things as akin to a funhouse mirror - they somewhat unpredictably magnify and minimize a bunch of variables, so am curious to see how they'll get deployed 2021-06-24 19:23:31 People continually underestimate the 'economies of insight' that contemporary AI techniques generate. In many senses, AI as a technology 'wants' to gather more and more data and intermingle it into a single system. Authoritarian export-driven states have weird advantages here. 2021-06-24 19:22:27 One of the curious second- or third-order effects of outsourcing manufacturing to China is that China now has a bunch of useful in-country data that will let it train AI systems that can understand, characterize, and broadly 'see' the basic ingredients of global commerce. https://t.co/MH9SmzHFsF 2021-06-23 18:35:53 EuroCrops is one of those datasets that is unbelievably finicky to compile (e.g, harmonizing diff crop parcel schemas across multiple Euro countries into one dataset), but ultimately will be a utility tool for all kinds of analysis. Dataset compilation is dull but v important https://t.co/6auwv2GKRT 2021-06-23 16:29:08 Anyway, I'd quite like to start using @instagram and this seems like such a clear case of impersonation (with timestamped evidence!) that I'd really hope someone over at @InstagramComms can deal with. Submitted a report weeks ago. Submitted again today. 2021-06-23 16:27:41 Also, here's another post they ripped: https://t.co/zgm0zjNPjK 2021-06-23 16:25:21 In case anyone from @instagram can help: @ jackclarksf on IG isn't me. It's someone impersonating me. Seems like a spam account. They've even used one of my old tweets! I'm @ sfjackclark on IG and would like to get my handle so I can use it. See: stolen IG post. My twitter post https://t.co/u9iNKQs8vg 2021-06-23 13:36:00 @Ben_CDI Yup, that's the one! 
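The EuroCrops tweet above is about harmonizing different national crop-parcel schemas into a single dataset. A minimal sketch of what that harmonization step can look like follows; the column names, country codes, and crop labels are invented for illustration and are not the actual EuroCrops schema or taxonomy.

```python
import pandas as pd

# Hypothetical per-country parcel records, each using its own national crop coding.
parcels_de = pd.DataFrame({"parcel_id": [1, 2], "nutzung": ["Winterweizen", "Silomais"]})
parcels_fr = pd.DataFrame({"parcel_id": [7, 8], "code_culture": ["BTH", "MIS"]})

# Country-specific code -> shared harmonized label (illustrative, not the real EuroCrops mapping).
DE_MAP = {"Winterweizen": "winter_wheat", "Silomais": "silage_maize"}
FR_MAP = {"BTH": "winter_wheat", "MIS": "silage_maize"}

def harmonize(df: pd.DataFrame, crop_col: str, mapping: dict, country: str) -> pd.DataFrame:
    """Rename the country-specific crop column and attach a harmonized label."""
    out = df.rename(columns={crop_col: "national_crop_code"}).copy()
    out["harmonized_crop"] = out["national_crop_code"].map(mapping)
    out["country"] = country
    return out[["country", "parcel_id", "national_crop_code", "harmonized_crop"]]

# One unified table across countries - the finicky part is building the mappings, not the concat.
unified = pd.concat([
    harmonize(parcels_de, "nutzung", DE_MAP, "DE"),
    harmonize(parcels_fr, "code_culture", FR_MAP, "FR"),
], ignore_index=True)
print(unified)
```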
2021-06-22 19:05:01 Writing up a National Security Agency AI paper for Import AI this week - interesting to see NSA using libraries including A2C, A3C, DQN, DDPG, APEX-DQN, and IMPALA. Neat illustration of how these generic RL systems can be applied widely. Security world is changing! 2021-06-22 02:20:19 @theshawwn Do u think that works in worlds where stuff goes from hundreds of GPUs/equiv compute, to greater amounts? (My bet is we'll see a lot of HPC capacity get repurposed by various countries for public projects, and also maybe schemes like National Research Cloud. Curious what u think? 2021-06-21 20:40:40 @arankomatsuzaki @ak92501 Reminiscent of @BrundageBot 2021-06-18 17:23:31 @Ben_Reinhardt @mmitchell_ai Will take a look at this essay, thank you! 2021-06-18 16:51:52 RT @mmitchell_ai: @jackclarkSF we should organize a workshop where we take creative people who can write foreseeable outcomes of current te… 2021-06-18 16:51:00 @emilymbender @mmitchell_ai yup! wrote a story in that one, was an interesting experiment 2021-06-18 16:34:48 @mmitchell_ai Yes! I would be very interested in this. Also discussed some stuff with @hannu in this area last year (then got a bit busy and dropped ball) 2021-06-18 03:33:09 @GerritD Nature is healing 2021-06-18 00:12:14 @bcmerchant yes dawg! go get that edit! 2021-06-17 23:07:54 @worldwidekatie congratulations, this is great news! 2021-06-17 19:48:22 We're hiring @indexingai - help us make progress on AI measurement and assessment, which is one of the critical issues in AI policy. The AI Index is a prototype for the larger gov (and multi-gov) schemes of the future. Apply for this and define the template for the future. https://t.co/9owCUMDQNI 2021-06-17 04:25:40 @nickcammarata Human jellyfish lessssss go! 2021-06-16 17:02:49 RT @jesswhittles: I chatted to @jackclarkSF at @CogX_Festival earlier this week about how better measurement and monitoring of AI capabilit… 2021-06-15 20:46:41 @_ganeshp It feels to me like we're in the 'first draft' phase of AI assessment/measurement for broader policy purposes, so similar to journalism/writing - your first draft should be really long so you can then cut/compress down to the core. But way harder to find core if first draft short 2021-06-15 20:46:05 @_ganeshp My basic view is if we massively increase investment in assessment/measurement we'll get a 'cambrian explosion' of test/eval methods, then once we have a ton we'll figure out some standard testing suites for different uses and contexts 2021-06-15 20:36:40 Today, I spoke on a NIST panel about measurement/assessment and tried to give a sense of how this works in AI and how it is changing. Here are my rough comments. Note these comments are specific to a pretty narrow slice of AI development. https://t.co/EmO1CpnEiD https://t.co/gWVfmoO55C 2021-06-15 16:10:03 I'm going to be speaking about measurement and assessment on @NIST panel that runs from now through to 1030am PST. Will be discussing some of the challenges of measurement inherent to contemporary AI systems. https://t.co/pK0ajWTC7p 2021-06-14 21:27:19 @arankomatsuzaki Could you get TPUs? 2021-06-14 18:58:37 @code_lazarus Nothing! That's why I write these stories, I think they can serve as different lenses on the present as well as speculations on potential futures 2021-06-14 15:59:05 @PeterLoPR @jesswhittles it's 2k21 let it all hang out! 
(I may regret this, but currently I'm attempting to be overly transparent online, maybe just as a consequence of getting bored after a year of presentable zoom) 2021-06-14 15:53:34 @jesswhittles @dw2 #covidhair 2021-06-14 15:45:25 @dw2 Thanks for watching and asking good questions - don't think got to all of them. 2021-06-14 15:44:53 Slides from my very short presentation here: https://t.co/csVmhDjHew 2021-06-14 15:42:17 Had a great time chatting about measurement and assessment with @jesswhittles today. Here's a presentation tip I'm giving myself for next time - try not to do zoom talks mid moving house : ). https://t.co/0paxdcvpXX https://t.co/jwJzD1cgND 2021-06-14 15:41:04 RT @dw2: The case for investing in technology and institutions to monitor AI systems more systematically and continuously. @JackClarkSF at… 2021-06-14 15:40:47 @bai0 @KavyaPearlman wow, that's an awesome story - thank you! History doesn't repeat but it rhymes 2021-06-14 14:58:52 I'll be talking at CogX today with @jesswhittles about issues of measurement and assessment in AI policy. I've cheekily changed the title of the presentation to "Why AI policy is messed up and how to make it better". : ) https://t.co/OpDFx6kZSM 2021-06-14 01:21:38 @madame_curtis Hot girl summer 2k21! 2021-06-13 20:43:56 @8enmann Yup, absolutely. I think this is going to get far more individualized over time. Right now we're kinda doing dragnet deepsea fishing stuff on whole populations and making individual predictions. I wonder what artisanal attention harvesting looks like also. But good point! 2021-06-13 18:54:23 Since I get this question a lot these days - I am working on a story collection from Import AI. But I want it to be something that a person who has read every story I've published might still want to buy. I'm editing, extending, tweaking a whole collection. A few months to go! 2021-06-13 18:51:45 One of the categorical errors AI/tech people make is believing themselves to be 'just capitalists', but as these systems are becoming factors in sociopolitical change, technologists should see themselves as having more the attributes of cultists, propagating ideologies. 2021-06-13 18:49:35 Today's AI systems use inputs like data and compute. Tomorrow's AIs will consume human attention, feedback, and interaction. There will eventually be incentives to harvest human attention for AI systems. "The Religion Virus", a story from Import AI #252 https://t.co/o26pDKqpn1 https://t.co/UENWLAjvbW 2021-06-11 18:24:23 Re-upping: https://t.co/pRA5FV4XTs consultation on classifying and defining AI systems closes 30 June, so if you'd like to do feedback, this month is the time. I wrote a thread about the classification framework, why it's useful, and how to engage here: https://t.co/5iX9FrXpZe https://t.co/3RzcXNxvSx 2021-06-10 23:39:30 @QuinnyPig reminds me of when Facebook said it was raining in its data center "This resulted in cold aisle supply temperature exceeding 80°F and relative humidity exceeding 95%" https://t.co/WOvssVegF0 2021-06-10 05:57:03 Tonight, I watched the sun set at Berkeley Aquatic Park. I sat with old friends on a Frisbee Golf concrete slab, swapping stories as the day dimmed. Later, there were geese, and after that the distant figure of an egret. #VOTENATURE2021 https://t.co/tGj0j6jGbb 2021-06-09 19:30:04 We've loaded the AI Index @indexingai reports into the OECD AI Policy Observatory to make them more accessible to a broader set of policymakers. 
Another little brick in the big 'ol measurement & 2021-06-09 16:30:51 @BlancheMinerva Did ACL give much information on why they rejected the paper? It's odd to me, as from my POV, I see The Pile getting used a bit and even turning up in other papers now. 2021-06-09 16:16:24 @annadgoldie The machine that builds the machine which trains the machine. Congratulations on your delightfully useful and reassuringly recursive project. Really cool stuff! 2021-06-09 06:19:37 @zdubsf I watch CPSAN all the time. This was far more enjoyable and stranger. 2021-06-09 06:06:00 @the__dude98 @JarnoDuursma yeah I think when the boss tweets the SREs get paged, probably 2021-06-09 06:03:28 @JarnoDuursma 100%. I think we forget that most policymakers are actually huge nerds about certain issues, and mass media doesn't let them get too wonky - which is a shame, it's really interesting to hear experts get into the details 2021-06-09 06:00:57 @jmelaskyriazi this is the cyberpunk future I was promised. This is genuinely exciting and great. 2021-06-09 05:57:08 Bill just passed on air. Wild. 2021-06-09 05:54:33 @JarnoDuursma agreed, plus I think the resulting feeling of informality means the conversations become a lot more wide-ranging. And there isn't some trad journalism attempt to explain everything. And thanks re newsletter! 2021-06-09 05:45:47 @JarnoDuursma it's amazing. I think the conversation has been far more detail-oriented and wide-ranging than you'd find with traditional media. 2021-06-09 05:43:43 This Twitter Spaces interview between all the crypto people online and the President of El Salvador is truly remarkable. The best part is the conversation is like half wonky-technical policy stuff, and half a load of internet brainworm people having fun. Amazing. 2021-06-09 05:16:21 @jachiam0 It's 10X more accessible and chaotic than Clubhouse and I love it. Someone just called the president of El Salvador a boomer on this spaces (which he corrected as he isn't). It's amazing. 2021-06-08 23:01:55 RT @BrendanBordelon: The U.S. Innovation and Competition Act, formerly know as Endless Frontier, a bill authorizing tens of billions of do… 2021-06-08 19:38:37 @ghalfacree @TheRegister awesome gig, glad to hear of this! 2021-06-08 19:11:14 RT @StanfordHAI: A new opportunity to help build the future of AI has opened up at HAI: We are looking for a director who will oversee rese… 2021-06-08 04:46:21 @QuinnyPig @taylorswift13 (This sounds sarcastic but it's actually genuine!) 2021-06-08 04:45:57 @QuinnyPig If you play stupid games you win stupid prizes - @taylorswift13 2021-06-08 02:31:33 @leedsharkey Thank you so much! 2021-06-07 18:09:35 @MikeIsaac More like newsletter writers get less information from readers (eg, I use open rates to help me pick out good times to send the newsletter. If I have less of that information, I'll be less calibrated there, which will probably adversely impact readership) 2021-06-07 17:21:29 @IEthics Random q - but I'm wondering if anyone has done a comparative study of research ethics in Western academia versus in Chinese academia (they seem quite different, and one indication is how people talk about ethical issues in papers like this) 2021-06-07 16:54:22 @atroyn haha, yes! extremely good 2021-06-07 16:51:35 Synthetic data, like many things in AI, is generally applicable to a variety of use-cases. For Import AI today, I wrote about synthetic data being used to improve a surveillance capability. We're in the mass-diffusion phase of AI capabilities, right now. 
https://t.co/9tGDvTWwH4 https://t.co/vtXVFAxkBm https://t.co/tfJ9mpr9Og 2021-06-04 23:36:50 @BlancheMinerva @lcastricato This is really cool work, congrats to y'all! 2021-06-04 18:38:10 RT @reboot_hq: we follow @gradientpub's newsletter! https://t.co/irFr3MvP2V also, @jackclarkSF's Import AI (https://t.co/I4HCB3m4rO) and @… 2021-06-03 18:38:06 @nickcammarata hooray. Also, if you read it, we should have at least one very enthusiastic conversation about it. The way LeGuin approaches 'magic' is fantastic and I think you'd find it delightful : ) 2021-06-03 18:37:19 @michaelbshane thanks! I've been looking at this story a bit for a few days. It feels like there's insufficient information to really write about the AI aspect. I expect we'll get a fuller report in a while 2021-06-03 18:35:43 @nickcammarata Kind of a sideways recommendation, but I've always got homely vibes from The Wizard of Earthsea - it's a set of books that I think are fundamentally about a nomad/placeless person finding their way to put down healthy roots in the world, and to help others do the same 2021-06-01 16:45:32 Measurement Woodstock! If you're interested in AI policy relating to assessment/measurement of AI systems, @NIST is hosting an online workshop June 15th-17th (Thanks to all people who sent this to me.) 2021-05-31 23:45:51 @RichardMCNgo it'd be interesting to do a long-term study (15 years+) of people who spent a non-trivial portion of every day 'unplugged' from most electronics, while working in an electronic-presuming environment (e.g most first world countries). 2021-05-31 17:58:33 @pamegup Do you have a link to the framework anywhere? Will have a read 2021-05-31 16:07:50 RT @LightOnIO: Today, @LightOnIO was featured in both @jackclarkSF 's newsletter and in the German newspaper Der Spiegel. https://t.co/… 2021-05-29 21:05:52 @yudhanjaya @Francesco_Verso @sriganeshl Congratulations! Looking forward to reading 2021-05-29 19:49:37 @antoniogm Where did you go? (I'm visiting Reno soon and have ambitions to eat steaks amid tawdry splendor, so this seems ideal) 2021-05-29 05:43:44 @n_miailhe @AnthropicAI . We're building out a team to try to bridge tech - policy divides via measurements/assessments! https://t.co/9DJK7bD5Oz 2021-05-29 03:49:09 @hlntnr @AnthropicAI Thank you! 2021-05-28 18:02:45 @ESYudkowsky @KelseyTuoc Hiya, our announcement is here: https://t.co/cDLS57Ywrg We chatted with Kelsey for her story, also. 2021-05-28 17:58:35 @koljaverhage @AnthropicAI I am aware - a few people sent this to me, but I had forgotten to sign up due to all excitement of launching @AnthropicAI . Signing up now thx for reminder! 2021-05-28 17:42:28 @NPCollapse thanks, Connor! 2021-05-28 17:14:37 @baykenney @AnthropicAI thank you, Matthew! 2021-05-28 16:58:02 RT @AnthropicAI: Hello world! You can read our launch announcement here: https://t.co/2tpKKJ43Uj 2021-05-28 16:24:37 RT @KelseyTuoc: Exclusive: In December, a bunch of AI safety researchers at OpenAI left. Ever since then I've been wondering what they're u… 2021-05-28 16:11:38 @gwern @AnthropicAI Hey! So @ch402 and I realized this while we were on a pre-launch call and both laughed about it. Genuinely unintentional! 2021-05-28 15:55:19 RT @DanielaAmodei: Excited to announce what we’ve been working on this year - @AnthropicAI, an AI safety and research company. If you’d lik… 2021-05-28 15:47:39 Here’s what I’ve been working on recently: @anthropicai. 
I’ll be spending a lot of my time on measurement and assessment of our AI systems, as well as thinking of ways govs/others can assess AI tech. There’s a lot to do! 2021-05-27 22:29:55 RT @GrahamStarr: Some big news from my team: We're hiring a reporter to cover algorithms and society — everything from AI and machine decis… 2021-05-27 22:26:06 @nickcammarata @ArtirKel @gwern Punk shitposting. A+ 2021-05-27 22:02:47 RT @calebwatney: Almost forgot! - $10B to Commerce Dept for regional tech hubs - $10B to NASA for Human Landing System program 2021-05-27 22:02:45 RT @calebwatney: So by my count, the (close to) final levels of new $$ over the next 5 years in EFA is: - $12.4B to NSF proper - $26.1B to… 2021-05-27 18:07:04 Worth following - Endless Frontiers is the most significant lump'o'science funding we've had in years, and @BrendanBordelon reporting is excellent and thorough. https://t.co/kbrydsEPk1 2021-05-26 18:14:21 @zehavoc @andyhickl @arankomatsuzaki cool, good luck! will add a mention of this in ImportAI (where I'm writing up the Naver system). Thanks! 2021-05-26 18:08:18 @zehavoc @andyhickl @arankomatsuzaki Nice! Do you have plans to go further than 1.5bn parameters ? 2021-05-26 17:29:57 A South Korean GPT-3 appears, though (per @arankomatsuzaki ), there aren't too many technical details. This, plus Pangu (Chinese GPT-3), and stuff from other actors (e.g, SBERBank, Eleuther), means we're heading into a multi-polar generative model world. Interesting times! https://t.co/v6hrC0lyld 2021-05-26 03:50:40 @nlpnoah Anyway, thanks for the discussion, won't take up more of your evening, but if you'd like to chat about this or other issues email (though per your tweet, maybe you've got other things you prioritize). 2021-05-26 03:49:58 @nlpnoah I understand that, and I get that everyone needs to weigh obligations, so I completely understand if you don't want to engage here as you're too busy. My broader point was this is part of how OECD tries to do public engagement. There's a valid side discussion on pay for labor 2021-05-26 03:47:01 @nlpnoah Anyway, if you'd like to chat more about this - jack@jack-clark.net, or email OECD. Feedback is really helpful, and it can ultimately improve how governments think about AI stuff so, per my tweet thread, it's one of these relatively low-effort high-impact ways to engage 2021-05-26 03:46:05 @nlpnoah most governments and multinational orgs run consultation initiatives where there's a small public servant staff and a load of external contributors many of which are unpaid. Eg GAO, NIST, etc in US do workshops. CDEI, Turing etc do stuff UK. Congress does unpaid expert stuff. 2021-05-26 03:39:16 @nlpnoah Mostly, helps governments get smarter about AI, so I usually view it as a form of public service. But to each their own, have a nice day. 2021-05-26 03:24:27 @nlpnoah We'd love specific feedback and this is the right time to engage on this - feel free to DM me or send an email re this. Also feel free to classify your own 'AI' systems via the survey - great way to help improve things! 2021-05-26 02:46:44 @askhuq @OECD Most research papers define a system they're concerned with, but the needs of researchers re classification can be different to those of policymakers. While compiling this we looked at what other people do today re classification (e.g, model cards, datasheets, etc) 2021-05-26 01:21:49 RT @pcihon: The framework could have policy impact in and far beyond the 37 member countries, today and in the future. 
I was involved in a… 2021-05-25 22:45:25 @NickEMoran @OECD For an extra fun time, make an AI system that creates its own definition of AI to use as an input into AI classification 2021-05-25 22:35:20 @jonathanstray get ready to not be surprised 2021-05-25 22:14:38 @yudhanjaya This is very helpful feedback, thanks! 2021-05-25 21:12:25 You can find out more about this whole project here: https://t.co/sodTSvNgQc I frequently gets qs from technical people on how to engage with AI policy. One of the best ways is to give direct, specific feedback on things like this. Happy to answer any qs here/DM! 2021-05-25 21:12:24 Help governments classify AI systems! A thread... Right now, AI systems are broadly illegible to policymakers, which is why AI policy is confusing. At the @OECD , we're trying to make them legible via a framework people can use to classify AI systems. Here's how you can help: 2021-05-24 22:17:00 @rajiinio Dynabench is quite interesting here - moving to a world of continuously assessing and iterating on models with a suite of metrics rather than just one. https://t.co/6pg1DhHfEf Cc @sh_reya 2021-05-24 21:14:25 * This is fine dog . jpg * https://t.co/E7TmCJvwyh 2021-05-23 18:10:41 @Idearim Yeah, I think schemes like this are useful and well-intentioned on part of companies (albeit also serving some strategic purpose via getting people onto their stacks). I view fact these exist as kind of a demand signal for something much larger/more ambitious/national in scope. 2021-05-23 18:00:08 RT @konklone: Amazing tech policy job alert! The majority staff for the Senate Committee on Homeland Security and Government Affairs (htt… 2021-05-23 01:43:50 @AllDeepLearning @robreich Maybe, though you still have the problem of running the software to make the stuff usable, which is non-trivial. (E.g, openstack oss project has been trying to do an open variant of the stacks available for aws/azure/gcp and hasn't made a huge difference) 2021-05-22 21:22:04 @GaneshNatesh 100%. I like the National Research Cloud proposal (https://t.co/XRqEV4qszI) as it gives us a general infra layer that we could distribute across a bunch of different institutions 2021-05-22 20:57:59 @Bored_AI_Junkie Also, what's the alternative? I suppose universities could build a shared cloud for themselves, but I'm skeptical they can compete with aggregate capex of $10bn+ a year into clouds which is roughly floor of what tech giants doing 2021-05-22 20:56:58 @Bored_AI_Junkie Not really - cloud computing is eventually going to be partially treated as national infrastructure and regulated like a utility. We can also repurpose various HPC facilities for AI (though I/O tends to be a challenge) 2021-05-22 20:50:24 We need to invest in large-scale compute for academia to give more people ability to experiment on a greater number of things. https://t.co/RRy3EL6mn3 2021-05-22 17:37:49 @YungCoconut Fresh Discs 2021-05-21 20:37:08 @josh_sokol When I moved to America I knew no one beyond colleagues at @TheRegister . 
I made friends by: - Going to a local bar once a week and chatting - Going to two indie/DIY music shows a week (found through FB/Instagram) - Asking friends if they had friends who lived in my city 2021-05-21 14:53:33 RT @jjvincent: fascinating report from @parmy on Google apparently ending a years-long negotiation with DeepMind for the latter to have mor… 2021-05-21 06:07:23 @Neetish Yeah, the arrival of big neural nets for multilingual translation has been pretty crazy 2021-05-21 04:39:12 @_joaogui1 @DavitSoselia_ These are useful examples, thanks! 2021-05-21 01:12:47 @metaphdor This is a fantastic little data point I might have a real or fictitious memory of, thanks for sharing it! 2021-05-20 22:55:25 @DavitSoselia_ One example is gScan which I wrote about here: https://t.co/DaQ4pNm2cS AI systems get successes of 0% to 5% on some of its tasks, though haven't seen much activity. https://t.co/bEJvHcNH83 2021-05-20 22:46:59 @parker_matsu this is tru 2021-05-20 19:30:57 @akashpalrecha98 @DeepLearningAI_ @paperswithcode . There will be some tech tales announcements soon! been working on a project through the pandemic 2021-05-20 19:27:42 @hamandcheese Great post, though the substance is immensely disheartening. 2021-05-20 19:17:11 RT @LauraGalindo: Today the OECD is launching a public consultation on the #OECDAI Systems Classification Framework. This framework aims to… 2021-05-20 19:16:50 @DavitSoselia_ I'm always on the lookout for metrics/datasets where we're doing really badly today. Iirc there are a couple of benchmarks that contemporary systems get like 5% on. Are you familiar with any particularly good hard tests/measures? 2021-05-20 19:08:46 AI is such a rapidly growing field I think we forget how juvenile it was within recent memory 2021-05-20 18:38:09 @akashpalrecha98 @DeepLearningAI_ @paperswithcode Thank you very much! Suggestions and criticism always welcome at the email address 2021-05-20 18:37:44 RT @akashpalrecha98: Three of my favourite AI newsletters that made it after years of subscribing/unsubscribing from lot of low-quality con… 2021-05-20 00:00:44 Another example here is @IBM which back in 2018 responded to Gender Shades study from @AJLUnited by acknowledging problem and doing technical analysis of a new system - classy, and better than reactions of some other companies investigated. https://t.co/K19ZZ4ho6O 2021-05-19 23:52:37 @mayfer @Twitter one step at a time! 2021-05-19 23:37:05 It's quite instructive to contrast Twitter actions with those of other big tech companies. In the past few years, other companies have responded to criticism (both externally and internally) by misdirecting, spinning, or gaslighting. Nice to see @ruchowdh et al carving diff path 2021-05-19 23:36:07 Really excellent post from @Twitter about how it investigated potential for biases in its deployed auto-cropping algorithms, including a discussion of some results (it has some slight biases) and some actions Twitter is taking. A+ for acknowledging problem https://t.co/Qb6OlXcghJ 2021-05-19 23:23:00 @KristenThomasen @rcalo cc @legalpriority 2021-05-19 23:18:10 @timnitGebru @AIESConf Thank you. Will check out the ICLR one, also. 2021-05-19 23:11:15 @timnitGebru @AIESConf will a recording of your keynote be subsequently available elsewhere, and/or streamed? 2021-05-19 16:52:56 RT @CSETGeorgetown: New Report New research from CSET tests GPT-3 for its disinformation generating abilities. 
How will the disinformat… 2021-05-18 21:29:37 @calebwatney @JHWeissmann @Noahpinion Also seems to migrate a much larger % of money into NSF/DOE and less allocated to the weirder and more novel tech directorate. Which seems like it'll also reduce the flexibility of the deployed R& 2021-05-18 20:03:11 @hamandcheese Watching this happen is... Maddening. 2021-05-18 20:01:18 @calebwatney Love to live in the libertarian cyberpunk future imagined by Neal Stephenson et al 2021-05-18 18:58:16 @DavitSoselia_ @mrgreene1977 @indexingai Given that a vast amount of AI systems are already deployed in the world and proliferate via open source, model development is going to continuously happen due to economic value incentives. Therefore, we need to create funding for assessment/eval development. 2021-05-18 18:46:55 RT @jackclarkSF: @mrgreene1977 Yes, this was a big theme of @indexingai report this year - we're developing new technical capabilities fast… 2021-05-18 18:46:52 RT @mrgreene1977: @jackclarkSF From where I'm sitting, it feels like the breakthroughs in generative models are coming far faster than brea… 2021-05-18 18:46:34 @mrgreene1977 @indexingai Put another way: https://t.co/o0ibaoapE7 2021-05-18 18:46:07 @mrgreene1977 Yes, this was a big theme of @indexingai report this year - we're developing new technical capabilities faster than we're developing the datasets and evaluation criteria to assess them for a range of societal issues/policy concerns. 2021-05-18 17:41:32 Some interesting discussion of the challenging issues w/generative models in the PR: "Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information." 2021-05-18 17:40:47 At IO today, Google said it is going to be integrating LM technology into Search, Assistant, and Workspace (so far, so normal). Sundar also said experimenting with ways to give devs/enterprises access to the tech (somewhat spicier). Some PR here: https://t.co/SJ50ZRElIr 2021-05-17 14:40:14 @geomblog @WHOSTP @AlondraNelson46 @BrownUniversity @BrownCSDept @Brown_DSI @senykam @UtahSoC Fantastic 2021-05-15 19:28:57 @realGeordieRose @parker_matsu Today I learned about scorpion tail spiders so no everyone else must do the same. I am sorry I do not make these rules (the spiders make them) https://t.co/iL8LkVQcuf 2021-05-15 18:41:43 @parker_matsu How we tryna be in summer 2k21 2021-05-15 18:09:41 @parker_matsu https://t.co/eRRz5W8bUN 2021-05-15 16:34:42 @deliprao @AndrewKemendo @etzioni I've found that using metrics like this is a helpful way to have discussions with policymakers about the need for, say, more science funding in America. Any individual metric has problems, but papers are a useful proxy for some aspects of national capacity. 2021-05-13 00:07:41 @jekbradbury Ah, it's not something super recent - this is a paper from last year on a somewhat niche subject that I'm doing a massive analysis thing on. Will have something public in a month or two! 2021-05-13 00:03:55 There's something really fun about being so into a subject that you can find a little crumb of information and find it so surprising it changes you physically briefly. I genuinely love being an AI metrics/measures nerd. 2021-05-13 00:03:07 Jack: *looking at laptop* Hmm, oh wow. *clenches jaw* * spouse comes into room * Jack: *furrowed brow, staring intently at screen* good god. That is surprising. * spouse, now worried*: What? Jack: Oh, sorry, this is what happens when I read surprising AI research papers! 
2021-05-12 20:25:28 @calebwatney This is a radically different EF vision. 2021-05-11 22:36:08 RT @erikbryn: We keep hearing amazing anecdotes and examples suggesting rapid technological advances, but reliable and comprehensive data i… 2021-05-11 22:29:28 @iainthomson @oliviasolon Sounds like you want to Stop collaborate, and listen. 2021-05-11 21:45:22 @AGIThoughts For something like that, we have individual tests, and maybe could stitch together into a suite (e.g, incorporating ImageNet-adversarial, etc). But I suspect we really want to measure for some property of computer vision 'intelligence' and don't know how, etc. 2021-05-11 21:44:43 @AGIThoughts And with a lot of these intelligent capabilities, our ability to design targets is poor. We can measure one-in-five on imagenet, but how about 'one-in-five and you're not gonna fall for optical illusions and you'll not make weird semantic segmentation errors" - 1/2 2021-05-11 21:43:05 @HyperboIeva @AGIThoughts Thanks for the explanation here, I appreciate it! 2021-05-11 21:42:46 @AGIThoughts Yes, this makes some intuitive sense, thanks for the explanation! And you're right - my feeling is our ability to make progress towards certain intelligent capabilities is basically limited by our tests - it's hard to direct systems if you don't have a target 2021-05-11 21:33:23 @AGIThoughts What do you mean by Yoneda? I'm not familiar with the term and would like to understand 2021-05-11 21:26:04 What's the meaning of this? My general sense is that AI progress is going to be counterintuitively slowed by a lack of good hard benchmarks and, in the future, developing new benchmarks will help orient us as to state of progress in field. We're entering an interesting era! 2021-05-11 21:25:07 At the same time, we're seeing benchmarks getting outmoded almost as rapidly as they're developed (e.g, SQuAD> 2021-05-11 21:24:11 @AndrewHires by this point, it's probably 90% 2021-05-11 21:23:29 It feels to me like the pace of development in AI is speeding up 2021-05-11 21:21:51 Graph#1: Adoption of technologies by households in the United States Graph#2: Time it has taken different AI benchmarks to saturate due to reaching eval ceiling (from Dynabench: https://t.co/v3TkBgSATM ) Is this a spurious correlation? Yes! Is there some signal here? Yes! https://t.co/cNXayayVa0 2021-05-11 20:26:54 @theshawwn This is good, thank you for writing it 2021-05-11 18:14:29 @klonkitchen Don't blow up my vegetable long game, Klon! 2021-05-11 17:40:49 @robreich A little bit of column a and a little bit of column b. One sad thing about COVID is that it wiped out a bunch of remaining DIY venues, where I never had to deal with Live Nation. Watching indie bands migrate to L/N venues is bittersweet - maybe good for bands, bad for punters 2021-05-11 17:39:14 @scottlegrand looking forward to when Live Nation only processes payments in LIVEBUX 2021-05-11 17:35:50 Criminals: We have worked for years to construct this perfect scam, allowing us to skim a couple of dollars from each of these food items, then launder proceeds through our complicated series of fronts. Live Nation: Yeah we just charge people $13 to buy tickets on our website. 
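The "Graph#1 / Graph#2" tweet above explicitly flags that the resemblance between technology-adoption curves and benchmark-saturation curves is partly spurious. Here is a minimal sketch, with invented data, of why any two broadly increasing series correlate strongly until you remove the shared upward trend (for example by differencing):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2021)

# Two unrelated but broadly increasing series (invented numbers, purely illustrative):
# e.g. household adoption of some technology vs. AI benchmarks saturated per year.
adoption = np.linspace(5, 95, len(years)) + rng.normal(0, 3, len(years))
benchmarks_saturated = np.linspace(0, 12, len(years)) + rng.normal(0, 1, len(years))

# The raw correlation looks impressive, because both series share an upward trend...
print("raw r:", np.corrcoef(adoption, benchmarks_saturated)[0, 1])

# ...but correlating year-over-year changes removes the shared trend,
# and most of the apparent "signal" disappears.
print("differenced r:", np.corrcoef(np.diff(adoption), np.diff(benchmarks_saturated))[0, 1])
```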
2021-05-10 19:30:41 @carlbfrey @pstAsiatech @MikeNelson @SammSacks @JonKBateman So from my point of view, as someone who works in the field and thinks about peer competition, China has some structural advantages which make it have better prospects for producing inventions and advancements - it's deploying more stuff 2021-05-10 19:29:59 @carlbfrey @pstAsiatech @MikeNelson @SammSacks @JonKBateman One reason to be bullish is that a lot of AI advances have come from practitioner-heavy groups (e.g, Google, which is constantly doing large, compute-intensive, applied projects). China is likely doing larger-scale AI deployment of various comp vision things (surveillance) 2021-05-10 18:43:58 @carlbfrey @pstAsiatech @MikeNelson @SammSacks @JonKBateman Maybe to flip this - what's a fundamental advance in your view? Something like invention of backpropagation? Would the transformer architecture count? And if so, what's less impressive about a ResNet? (Maybe argument is it has precursor of Highway Networks?) 2021-05-10 18:42:59 @carlbfrey @pstAsiatech @MikeNelson @SammSacks @JonKBateman The ResNet paper that he shared has been used in pretty much every consequential computer vision system since its invention in 2015. It's hard to draw a line between fundamental and applied stuff, but ResNets are a genuine no-shit advance with huge impact 2021-05-09 14:46:12 We used a ton of MAG data for @indexingai and are currently identifying new orgs to partner with on bibliometrics analysis. MAG was a useful tool - I hope the MS people who worked on it get other gigs. https://t.co/otZUKdZLPv 2021-05-08 20:42:21 @mhdempsey I will, thank you! 2021-05-08 20:41:29 @mhdempsey It'll be in Import AI on Monday : ). A new series I'm working on. 2021-05-08 20:39:41 Research papers from alternative futures, a series. Number one. https://t.co/prwh6XqDR2 2021-05-07 19:50:56 RT @DigEconLab: There are only a few days left to submit your AI policy proposal. We're looking for radical ideas that will shape our AI-po… 2021-05-07 13:48:35 @sebkrier @StanfordCyber @MarietjeSchaake Extremely good 2021-05-07 13:48:27 RT @sebkrier: PERSONAL UPDATE! Next week I'm joining @StanfordCyber as a Senior Technology Policy Researcher! I'll be working on issue… 2021-05-06 21:17:47 @yoavgo It feels crazy to be in a world where you need to justify using electricity for scientific experiments. 2021-05-06 19:10:51 @dpatil @drfeifei @StanfordHAI Here's a letter sent to prior admin https://t.co/nviOavY7y4 Iirc NRC is now being studied by admin per legislation in NDAA (@russellwald would know more) 2021-05-05 23:30:52 @packyM https://t.co/WK43woiRDa 2021-05-05 20:27:40 @cstross @yudhanjaya @Hugo_Book_Club Ah, I am slightly. An excuse for a reread. Thank you! 2021-05-05 20:23:13 @yudhanjaya @Hugo_Book_Club Ooh, I remember this. It was a story involving a big quantum computer and multiple forms of math existing in superposition. I imagine @cstross may recall the title. Great story 2021-05-04 23:11:00 @rodolfor "we hypothesize these glyphs are a form of abstract art, meant to signify the increasing complexity of the civilization prior to the Tragic Event" 2021-05-04 23:07:32 I love how AI research occasionally produces beautiful aesthetic objects as a direct consequence of scientific investigation. Look at these things! Imagine someone finds them shorn of the paper 500 years from now - 'what strange glyphs, this civilization trafficked in!' https://t.co/7PYZneATsm 2021-05-04 22:18:41 @yudhanjaya Please do! 
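The ResNet point above ("used in pretty much every consequential computer vision system since its invention") rests on one small architectural idea: the residual (skip) connection. Below is a minimal, simplified sketch of a residual block in PyTorch; it is an illustration of the general idea, not the exact block definition from the 2015 paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified residual block: output = relu(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection is the whole trick: gradients can flow through the
        # identity path, which is what made very deep networks trainable.
        return self.relu(out + x)

if __name__ == "__main__":
    block = ResidualBlock(channels=64)
    x = torch.randn(1, 64, 32, 32)
    print(block(x).shape)  # torch.Size([1, 64, 32, 32])
```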
2021-05-04 22:17:05 @yudhanjaya I'm one of the three people and would read this! 2021-05-04 18:38:03 @S_OhEigeartaigh Haha, well, thank you. Perhaps on the occasions you disagree with me we could try and discuss in public on twitter? Sort of how pop musicians start fights to climb up the charts? Just an idea! 2021-05-04 18:33:17 @S_OhEigeartaigh 1) yes 2) would love to hear more of your subjective opinions/takes on stuff 3) can't think of stuff yet! 2021-05-04 18:21:06 RT @AllanDafoe: Building on this, we will be launching a Cooperative AI Foundation to support research that will improve the cooperative in… 2021-05-03 22:00:57 RT @AmandaAskell: If a company says it's doing X because X is the right thing to do, people tear into the company's ethics. If it says it's… 2021-05-03 20:29:12 @jachiam0 2021-05-03 18:59:44 @mattsheehan88 @sebkrier wrote a section here https://t.co/cToQeIvKbQ , will probably write more in a while but want to read the whole thing carefully, so need to simply find three days spare (haha!) 2021-05-03 18:55:46 @mattsheehan88 @sebkrier 2021-04-29 19:57:12 @genekogan @JanelleCShane this is fantastic 2021-04-29 19:28:11 And also thanks to @Synced_Global for partnering with @indexingai on this event. As I said, measurement is a global endeavor and we're only going to be as good as the scale/breadth of the community allows. This was an experimental event from the AI Index and we'll do more 2021-04-29 19:26:32 It's notable to me how these questions feel similar to ones being asked in Western policy contexts. I was somewhat (positively) surprised by the interest/qs re fairness/diversity. Thanks to @dzhang105 for being an absolute champ and translating between English and Chinese! 2021-04-29 19:25:29 What are good ways to measure the fairness of an AI system with regards to performance for people from different demographics and genders? How has COVID influenced the development of AI technology itself, rather than influencing attendance at conferences? 2021-04-29 19:24:51 Here are the questions I got from a large Chinese audience last night re measurement/assessment of AI: - How can we measure the security of AI systems? - How to interpret bans of facial recognition in Western countries? - Which orgs have best data on diversity in AI? https://t.co/4oYShJRvVH 2021-04-28 19:28:28 @alannamcardle_ My partner trains martial arts five nights a week and is, by all reasonable descriptions, an extremely fit hardass. Bmi says they are obese. it's a bullshit metric 2021-04-28 16:58:47 Measuring and assessing AI must ultimately be a global endeavor, or we'll end up with different countries having different standards. We're trying to encourage formation of this community @indexingai 2021-04-28 16:55:50 RT @indexingai: Announcing the Chinese (Simplified) language edition of the 2021 AI Index Report — in partnership with @Synced_Global. Re… 2021-04-26 22:25:08 @ljin18 Originally read this as 100k! Was even more shook 2021-04-26 16:18:52 RT @AiCommission: Expand access to #AI resources through a National AI Research Infrastructure that provides access to cloud-based compute… 2021-04-25 20:00:43 @CjColclough @mer__edith @n8fr8 @keithporcaro @j2bryson @vdignum @SandraWachter5 @laurenceb @_jack_poulson @jonniepenn @azeem @KasparRosager https://t.co/4HMtVW204K 2021-04-25 20:00:20 @CjColclough This sounds mostly like a policy thing where you might mandate you can only use inferences that have been based on recently refreshed data. 
This feels similar to how we have policies for criminal records getting expunged after a few years 2021-04-25 19:49:48 @CjColclough Can you describe a hypothetical scenario you're thinking of? Not sure I'm parsing you 2021-04-23 04:29:38 @tobyordoxford these are incredible, thanks for sharing them 2021-04-22 21:51:33 @MaraHvistendahl @EBKania My bad for not reading closely enough, thanks for clarifying! 2021-04-22 20:40:33 @EBKania @MaraHvistendahl It's especially puzzling given Safra Catz role on the NSCAI 2021-04-22 20:20:19 RT @indexingai: It has become tricky to measure the performance of NLP systems. "Academics are coming up with metrics they think no one can… 2021-04-22 17:43:38 @amcasari @sleepinyourhat I've read some of their work, but I'm not sure exactly what you're referring to. Can you think of some particularly good examples of tech integrations /sociotechnical impacts? 2021-04-22 17:18:42 @antor Any. There's obviously a lot to argue about wrt European Commission stuff, but from the various policy perches I sit on I see a general, worldwide desire for greater ability to assess and characterize AI systems for salient policy-relevant traits. Assume this will be global. 2021-04-22 17:17:29 @amcasari @sleepinyourhat More broadly, I think we need new organizations and systems here. The current status quo of companies rolling their own proprietary metrics is untenable. Academic/OSS metrics are seeing decent adoption. But there's probably something larger waiting to be done/discovered. 2021-04-22 17:16:50 @amcasari - Governments expand funding for grants via funding authorities, disseminating resources to academics in a variety of fields to develop assessments. - Core AI assessment labs (e.g, @sleepinyourhat @ NYU) seek funding to support further technical metric development 2021-04-22 17:15:16 @amcasari Great question, I think it splits across sectors of society and across research fields. Sort of thing I'm imagining: - Companies grow investments in teams doing this (e.g, Google's former ethical AI co-leads were doing this) - Governments expand funding for agencies (e.g, NIST) 2021-04-22 17:08:22 @amcasari Basically, we need something like a 10X greater investment as an AI research community into creating better/harder tests and measures, especially to help respond to concerns of policymakers in future. I'm optimistic this is doable but requires some urgency/attention. 2021-04-22 17:07:16 @amcasari Currently, we have relatively few tools available to assess and measure AI systems, especially for policy-relevant traits like 'fairness'. Proposed legislation assumes ability to assess/measure high risk systems. AI researchers risk getting run over by policymaker train. 2021-04-22 17:01:40 The European Commission legislation makes one thing clear: we need a whole-of-society effort to develop better tools to assess and measure the capabilities of AI systems, especially to help characterize traits of "high-risk" systems. Attached: visual analogy of current world. https://t.co/lE2HMXbjqu 2021-04-21 23:46:01 My talented friend Michael has written probably the best song ever about working as a chimney sweep Also, if you're in the bay area and need to get your chimney swept, check out Mike's Clean Sweeps! https://t.co/4G9FMx5vTV https://t.co/votcRmlFjT 2021-04-21 21:50:54 @worldwidekatie Cool milestone, congratulations! 2021-04-20 19:31:25 The significant thing here: recommendation models are far more societally impactful than models that get more discussion (e.g, GPT3, BERT). 
Recommender models drive huge behavioral changes across FB's billions of users. Papers like this contain symptoms of advancing complexity. 2021-04-20 19:30:16 Facebook is deploying multi-trillion parameter recommendation models into production, and these models are approaching computational intensity of powerful models like BERT. Wrote about research here in Import AI 245: https://t.co/DJpUnl2E78 Paper here: https://t.co/3LNyUpxavN https://t.co/GpBMmkKApU 2021-04-20 18:14:14 @halhod @antonioregalado @Austen I would love to read a book on recent biotech rev, but not sure what I'd read. @zavaindar might have recs 2021-04-20 18:00:07 @halhod @antonioregalado @Austen +1, Turing's Cathedral is a wonderful book 2021-04-20 00:08:11 @BarneyFlames @RMac18 One situation I once had was a source provided some information and it would clearly damage another person at the same company as originating source. I spent a month reporting out that the source wasn't motivated by a grudge against other person, then once satisfied did story 2021-04-20 00:06:42 @BarneyFlames @RMac18 Yes, usually because of legal costs, validating sources, cross-checking sources. At some publications maybe 80% or more of the actual reporting never makes it to the article - it's just been done to validate other sources. 2021-04-20 00:05:06 @Kantrowitz I mean, sure. I'm commenting that I haven't seen much. Newsletters are great for source dev, would be curious to see major investigations. (@CaseyNewton has done a bit of this with Platformer and @EricNewcomer same with theirs, I wonder what is the most complex scoop we'll see) 2021-04-20 00:01:28 @jongold @viamirror I had seen some of these but not tons, thanks! I'll update positively if I see someone fund investigations via this. Funding model is also unclear 'fund me to investigate fraud at $company' makes reporting story harder, can we fund generic investigators? 2021-04-19 23:57:54 I think the Creator Economy is nice in a bunch of ways, but investigative journalism is a very weird beast that has traditionally been heavily subsidized by other parts of a newsroom. At Bloomberg some of the (amazing!) investigative reporters would spend months chasing things 2021-04-19 23:56:16 This might be overly cynical, but could the enthusiasm tech is showing for creators be tied to the fact that the 'creator economy' doesn't really support investigative journalism? (Aka, extremely long-winded investigations that take months, get low readership, change things). 2021-04-19 23:53:57 @iainthomson @marmite Even I am somewhat suspicious of these tbh, and I layer marmite on my toast by the multi-millimeter 2021-04-19 17:06:59 @tate8tech I hope it's a Brecht film so an AI character can break the Fourth Wall in a million different ways 2021-04-19 17:06:36 RT @tate8tech: Latest @jackclarkSF Import AI newsletter - Synthetic content means games are infinite, now - has me also wondering what wi… 2021-04-17 07:31:09 @bernardgolden I searched over the past three years and sadly haven't used this! Might search more tomorrow 2021-04-17 04:06:30 @alexisgallagher Sometimes when I'm afk I'll do notes in google keep on mobile then transplant them into the main file. I also occasionally write stuff like "didn't journal yesterday, oops, recap!" to both a) retain history and b) teach myself that not journaling creates work for future jack 2021-04-17 04:04:52 @alexisgallagher I have a text file constantly pinned to my main computer which is always open whenever the computer is on. 
It took a while to build the habit, but once I intrinsically wanted to journal each day I found getting it done was just a matter of keeping it in front of me. 2021-04-17 03:06:42 @andrey_kurenkov Yeah, I'm similar. I'd say 80-90% of my entries in recent years are 2 paragraphs or less. Sometimes I go longer. Most days it's a summary of what I did and usually a sentence or two on how I felt. Recently, I've tried to specifically note things that have made me happy/laugh 2021-04-17 01:32:57 @rcsaxe Honestly, give it a whirl! Try writing a sentence a day for a couple of weeks and then you might be surprised by the results. 2021-04-17 01:28:58 Also, to be agonizingly clear, I haven't been the first person to 'discover' this, I more mean I've discovered some surprising (to me) links between journaling and my own mental health and happiness. : ) 2021-04-17 01:21:45 @kathytpham That's a wonderful thing to hear! I hope you can find time during your very busy schedule to (and I hope you're well) 2021-04-17 01:18:47 @NicoleHemsoth Yes, good observation! I did have fitbit for a bit (and now just use Google fit). I think the update I've noticed (realized this in 2020/2021) is that journaling has had quite a meaningful impact on my life, way above any physical habits I've done/got. 2021-04-17 01:15:52 9. The modern world wants us to expose/sell/propagate every part of ourself. Having private journals has felt important to counterbalance this. My true self is unbranded and unmonetized, joyfully. The journals are an engine for mental reflection/maintenance. FIN/Happy friday! 2021-04-17 01:14:31 8. Intriguingly, the routine of journaling has trained my brain to 'optimize for novelty', because I find journaling more interesting when I have novel stuff to write about. This has caused me to have more intrinsic motivation to put myself in novel situations, which I love. 2021-04-17 01:13:49 7. Journaling has helped me get out of pathological spirals - if you read the past two weeks of your diary and see you've turned down every social commitment, that can be a real wake up call. Ditto with saying 'yes' to every work commitment and seeing stress it causes. 2021-04-17 01:12:57 6. By journaling, I both reduce the emotional labor others need to invest in me, and I'm better able to spot cases where I should thank or acknowledge acts of kindness others have done for me. This has been especially helpful in my marriage. 2021-04-17 01:12:25 5. The act of reflection helps you spot things about yourself that you may not have realized otherwise. I've had emotional and intellectual breakthroughs that have come through writing about myself, to myself, and for myself. There's something very raw and freeing about this. 2021-04-17 01:11:52 4. Journals let you 'see yourself'. It's very hard to understand how you as a person behave if you aren't logging it. If you log it, it's easier to see when you're being an asshole, being unfair, or otherwise messing up. You can also tell when you're deluding yourself. 2021-04-17 01:11:32 3. By teaching myself to frequently journal, I've meaningfully changed how I approach socializing. Put simply, journaling taught me that denying myself social opportunities for some kind of 'work' is a reliable prelude to my mood dipping and/or depression. 2021-04-16 15:38:53 @realTinaHuang @StanfordHAI Fantastic gig and you'll work with good people! 
Excited to do some @indexingai stuff together 2021-04-15 22:26:41 @vgr I write a weekly wonky newsletter about AI research and applications: https://t.co/lgRURdEUEm 2021-04-15 20:42:55 @DeweyAM @mattsheehan88 Also, increasingly autonomous drones that can coordinate with eachother for a range of tasks, both economic and military. 2021-04-14 22:52:17 @foie Shut Ups 2021-04-14 22:43:00 @gwern @RiversHaveWings This gets an 8 bananas out of 10 on the 'what can generative models do' banana-surprise scale 2021-04-13 18:25:33 Soon, increasingly capable AI models will encode and reflect peoples' lives and potential life paths. https://t.co/UHaLqe0amF is a story from Import AI 244 about what happens when people move on from consuming synthetic imagery to consuming synthetic identities https://t.co/QPgV3byAKy https://t.co/410xuvs5fM 2021-04-10 18:15:23 @JanelleCShane It's getting harder to detect GAN images over time. @sensityai does this sort of detection today 2021-04-09 19:02:09 I'm working on a somewhat crazy short story about the above ideas, which will go out in Import AI 244 on Monday. Read past issues here: https://t.co/lgRURdEUEm Subscribe here: https://t.co/sSTAt6TuDW 2021-04-09 18:01:12 @artificialnix That said, I suspect there are some things you really don't want to make trivial/easy to do, and some you might, and having heuristics for the threat model here feels challenging 2021-04-09 18:00:50 @artificialnix Yeah, that's a position I've slowly updated in direction of. I used to much more be of the school 'oh, X is potentially dangerous, should be super careful'. Now I'm more like 'oh, X has been invented, so someone's gonna do a weird thing with it, we should get ahead of this' 2021-04-09 17:59:43 @rharang Oh, 100%! Rolling your own local Linux OS that plays nice with your dev environment and GPU drivers is/was an effective filter for getting rid of 'drive by' opportunists, etc. 2021-04-09 17:56:42 Another way to think about this is ease-of-use: thispersondoesnotexist is trivial to use. The easier/cheaper it is to use or do something, the more of that thing there will be. Interfaces to AI models are therefore factors in use/misuse of them. A fun puzzle! 2021-04-09 17:55:49 It's weird to remember that one of the main vectors for creation of disinformation via synthetic identities is https://t.co/1B7IQbluLp, a hobbyist site that lets people experiment with @NVIDIAAI 's StyleGAN2 stuff. Weird stuff comes from commodity AI rather than 'secret' AI. 2021-04-09 00:24:06 @Austen Most people work in a slow-growing part of the economy 2021-04-07 16:44:53 Further thoughts on a National Research Cloud: As @matthewclifford notes when thinking about why we should close compute gaps: "If academics and others can’t replicate or scrutinise cutting edge work in the field, there’s a risk of a democratic deficit." https://t.co/REkH5mVdH9 2021-04-06 18:16:13 RT @josheidelson: Scoop w/ @NicoAGrant & 2021-04-06 17:40:53 In today's installment of 'today's arXiv papers are just the source material for future cyberpunk terms': GhostVLAD https://t.co/I9Txmnaugz 2021-04-06 00:08:28 RT @MattHaneySF: VP Kamala Harris announced today while IN Oakland that the mass vaccination site at the Oakland Coliseum will stay open.… 2021-04-05 18:25:08 RT @CamilleEsq: Excited to be named a @SchmidtFutures 2021 International Strategy Fellow! I’ll be joining an interdisciplinary network of l… 2021-04-05 02:44:12 Birds are ridiculous dinosaurs in disguise and I Iove it. Some fine specimens seen at Lake Merritt today. 
#VOTENATURE2021 https://t.co/jrsQCJSMW6

2021-04-05 00:07:55 @ESYudkowsky Mostly, you build fabs. Frontier fabs cost 10-20 billion. And I think COVID has probably permanently increased compute demand because it accelerated some web consumption trends, which means all the hyperscale clouds have a demand for chips that has been ahead of their projections

2021-04-01 22:09:49 @Miles_Brundage We need both credits and also an infrastructure push. Something discussed today was to do both - try for a fungible load of cloud capacity, and also explore a DOE-led supercomputer API. The NDAA stuff is the NRC, also.

2021-04-01 21:24:46 @ronbodkin @VectorInst This is a really interesting submission, thanks for sharing it. How well do you think this works for scale-up experiments? E.g, could an NRC-like scheme serve as additional 'stretch' capacity for this kind of stuff?

2021-04-01 21:09:04 @yieldthought Yeah, this conversation has made me think I should spend a few days doing more detailed research into HPC utilization and understand pain points. Feels like some useful information to have when thinking about this, thanks for great questions!

2021-04-01 21:06:30 @yieldthought It's definitely true that supercomputers are far harder to use than AWS/GCP/Azure. Plus, most (recent) AI academics are more familiar with clouds than HPC, because that's the context they came up in.

2021-04-01 21:05:11 @yieldthought It's literally too hard - bunch of issues

2021-04-01 21:02:04 Most problems in AI policy seem to stem from information asymmetries. We need significant investments in both the capacity to measure (compute resources for academia, others), and also in institutions to measure/assess the artefacts that are built using this capacity. https://t.co/5BgkTs9ZLa

2021-04-01 20:53:32 @yieldthought I also think the scale of these issues is such that it's worth running two bets in parallel. Bet 1: Can we leverage vast private sector capital investments to create a cheap, accessible, compute resource for academia Bet 2: Can we create tooling to increase usability of HPC

2021-04-01 20:52:49 @yieldthought Good q! Was asked this today. I think we should try to experiment with a fungible layer that sits on top of mainstream clouds (AWS, GCP, Azure, etc), as well as a layer that gets built by DOE to mediate supercomputer access. Supercomputers seem generally harder to program against

2021-04-01 20:50:21 Things that this talk references: - AI & - Scaling Laws for Autoregressive Generative Modeling: https://t.co/PXBl39EVnD - Platformonomics analysis of capex by major cloud providers: https://t.co/jRhJc5FibZ

2021-04-01 20:48:39 The longer we have computational asymmetries between academia, government, and private sector, the more dangerous this becomes. Eventually, I worry that private companies will be building stuff that is impossible to replicate or analyze by third parties (arguably, we're close).

2021-04-01 20:48:05 Right now, the world is in a really peculiar position where some of the most well known and/or large-scale systems are developed by private sector actors. This isn't bad in itself but it is dangerous - information asymmetries create a landscape that can be exploited.

2021-04-01 20:47:25 The National Research Cloud is a policy proposal to create a huge pool of computational capacity for access by academics.
https://t.co/XRqEV4qszI We need something like an NRC to let academics do large-scale computationally-intensive AI research

2021-04-01 20:45:54 Universities need access to computational infrastructure at equivalent scale as private sector, or I worry about long-term democratic governance of technology. I gave a short talk @ Stanford today about why I think a National Research Cloud would help. https://t.co/ekujbo3GAb

2021-04-01 19:37:20 @robinsloan The main problem is universities lack infrastructure to not only train models like GPT-3 but also to run them for inference. We lack the capacity in our academic base to do these models. Really weird situation!

2021-03-31 21:36:26 @Gabuhamad Thank you very much!

2021-03-31 21:36:18 RT @luislamb: The great AI Index Steering Committee Co-Chair @jackclarkSF presenting the findings of the 2021 AI Index Report at @StanfordH…

2021-03-31 01:26:00 @YungCoconut Summer 2030 fashion tip https://t.co/RKr1sWQE7r

2021-03-30 21:51:44 RT @BlancheMinerva: Call a dataset “public data” if it is de facto easily accessible to the public. It can be found with, e.g., a Google se…

2021-03-30 21:46:44 @XYOU Great point - and it's definitely true that if you prompt GPT2/3 with input copy that sounds SEO-generated, it does a great job of responding in kind. Maybe this will just be another 'style' of text we can get our systems to generate

2021-03-30 21:01:22 @tlbtlbtlb Yes, that's something I've been thinking about - if the sentences are coherent/valuable, then maybe it's not an issue. Also interesting to think if we could tag data with versions of AI systems that generated it, then train on predecessors specifically also

2021-03-30 20:20:57 Some discussion from DeepMind researchers about potential for language models to train on data containing outputs of other LMs. From this paper: https://t.co/XLMl6AL6lS https://t.co/jYvF2txueZ https://t.co/m2CEOVUG7R

2021-03-30 18:04:26 @j2bryson Cc @dzhang105 who has worked quite a bit on this aspect

2021-03-30 18:04:08 @j2bryson Significant amount of this is based on what can be measured today - as we note in the report, we lack decent data for public funding of AI (potential area for GPAI coordination). Much of it is based on what is measured within index, so we're generally extending vibrancy over time

2021-03-30 16:07:31 @kharijohnson @WIRED @ScottThurm @tsimonite @willknight Congratulations. They're lucky to have you. You're gonna do amazing things there!

2021-03-30 15:33:46 Tomorrow, I'll be discussing the AI Index report and thoughts on importance of measurement/assessment of AI systems, and need to develop larger institutions dedicated to this task. Come along if you are interested in the above, or you'd like an update on my pandemic hair. https://t.co/2OpfRQ38LM

2021-03-30 14:57:00 @thomeagle @iainthomson what do you make of this DoubleMarm ?

2021-03-30 14:56:21 @thomeagle Marm and Marm? Get out.

2021-03-29 23:00:35 @hlntnr Amazing. Congratulations, Helen!

2021-03-29 22:58:23 @default_friend @nwilliams030 +1 to Study Hall

2021-03-29 22:58:14 @nwilliams030 Craigslist is pretty good. Plus, following writers and pubs on Twitter

2021-03-29 06:52:35 @mcoppsldn @covgardenladies @MandyLJandrell @thomeagle @RoseNaomiTom @wesbrownwriter @TorilCooper @girlhermes Same

2021-03-29 00:53:17 @JayZuccarelli Thank you!

2021-03-29 00:51:21 Add the EverGiven to your own sim here. https://t.co/4yZboWhbRr

2021-03-29 00:16:44 At some point, we'll develop measures for 'organic' and 'synthetic' media.
I wonder if this might lead to people pouring resources into @internetarchive as it will host huge amounts of 'pre-AI-dominated internet' https://t.co/A3QawDog90 https://t.co/YhN8pw1v8Q

2021-03-28 23:20:11 @Gregory_C_Allen Also, general is such a fuzzy concept that it's probably fine to use on a sharpie mindmap, but I think a true accounting would need to disentangle on multiple dimensions. Eg pretrained imagenet models enable downstream generality/domain specificity via fine-tuning etc

2021-03-28 23:19:07 @Gregory_C_Allen I don't think there is one yet. I think systems like GPT-3, CLIP, BERT, etc, all exhibit more general traits/downstream uses than other systems. We're also seeing the eval space move from single evals of systems to suites of evals (e.g SuperGLUE) which is symptom of generality++

2021-03-28 22:54:00 This piece is part of a series. I'm publishing these mindmaps because I'm trying to be more transparent about the ideas I'm fiddling with while thinking about AI policy. Prior piece here: https://t.co/lrEELZb1wG

2021-03-28 22:46:00 @IgorCarron 100%. We're going from artisanal to industrial production techniques

2021-03-28 22:44:42 @datamapio By deploying AI systems into society, the data that most people train on will start to contain an increasing % of AI generated content. This will have downstream impacts on subsequent systems and/or require clever techniques to filter out. This may or may not be important - early

2021-03-28 22:38:16 @Miles_Brundage Yes, it has gone from 'slightly shaggy' to 'human dog' and now is in the 'unruly haystack' phase. Cut it in Feb 2020 and decided to keep growing it till COVID feels over/or when I'm fully vaxxed

2021-03-28 22:21:51 Here's a fun timelapse of me making this piece, on a beautiful sunny afternoon in Oakland, California. https://t.co/JpgON9DKHq

2021-03-28 22:17:29 Arrival of increasingly general AI systems means next few years will be defined by a massive expansion in the ways we measure the impacts and capabilities of AI systems, how humans use them, and how AI systems influence the world. Measurement is crucial to effective AI policy. https://t.co/Dcy5ciArLt

2021-03-28 17:13:43 https://t.co/I4tPC8jzTs

2021-03-28 17:08:36 Every so often I remember that @Microsoft latest flight sim is one of the craziest metaverse 'simulacras' of reality, and I smile. We're entering the era of innumerable shadowy digital worlds, layered upon reality and accessed through net-connected computation portals. https://t.co/ZGLfDn0Y1i

2021-03-27 20:41:50 @Ana_Chubinidze_ I'd also (politely) disagree with one part - you say we have all the ingredients except will. My POV, we have the will, but lack some specific ingredients. Namely, some actionable/shovel ready ideas for how to implement this stuff in practice. I'd love to see more here

2021-03-27 20:40:40 @Ana_Chubinidze_ I guess one question is how to make stuff more inclusive - what are the organizational systems that can be developed which incorporate a broader set of stakeholders? And how can these systems map to the incentives of companies (which sometimes push against this).

2021-03-27 20:39:57 @Ana_Chubinidze_ Thanks for including my thoughts! Don't mind at all. A slight tweak: I left OpenAI a while ago, so maybe say 'formerly of OpenAI'? Or you could say 'of the AI Index', or something. I like your observation these are socio-technical issues and require more inclusion.

2021-03-26 16:35:27 @C___CS @oiioxford Congratulations!
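A minimal sketch of the idea raised in the 2021-03-30 21:01:22 and 2021-03-28 22:44:42 tweets above: tag each piece of text with the model (if any) that generated it, so later training runs can filter synthetic text out or deliberately train on a predecessor's outputs. The field names, model ids, and example records below are hypothetical, not drawn from any real pipeline.

```python
# Illustrative provenance tagging for a text corpus. Everything here is a
# made-up example; only the general approach comes from the tweets above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    text: str
    generator: Optional[str]  # None for human-written text, else a model id/version

corpus = [
    Record("A paragraph written by a person.", None),
    Record("A paragraph sampled from an older language model.", "lm-v1"),
    Record("A paragraph sampled from a newer language model.", "lm-v2"),
]

# Filter synthetic text out entirely...
human_only = [r.text for r in corpus if r.generator is None]
# ...or deliberately train on a specific predecessor's outputs.
predecessor_only = [r.text for r in corpus if r.generator == "lm-v1"]

print(len(human_only), len(predecessor_only))
```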
2021-03-26 16:20:48 @whimsley @rajiinio @geomblog Yeah, I know what you mean. https://t.co/benRcAnKZD

2021-03-26 16:20:32 @IAmSamFin @geomblog @rajiinio I'm definitely not indicating I support this! But I'm trying to be fair - I could imagine people saying 'art brings joy, this is a way to bring more joy for certain fans of certain artists'

2021-03-26 16:19:53 @jeremieclos @geomblog @rajiinio It's definitely a bit macabre. Frequently, the family of the dead person are the ones making money - same way that Tolkien's family license out Tolkien these days. Could be positive for the family members

2021-03-26 16:19:19 @jeremieclos @geomblog @rajiinio I'm not sure, but it's a use case that society has already used in some movies, where we've seen actors' estates let studios reanimate them. We've also seen stuff like Tupac's family do a hologram Tupac at Coachella

2021-03-26 16:14:23 @rajiinio @geomblog This, plus the broader area of film/TV, where we'll see these techniques do things like let actors live on after they die in the mortal realm (e.g, a studio may own the IP of a face/voice of an actor, then have them do films post-death).

2021-03-25 21:08:55 @Blankzilla @beka_valentine Not the same, but slightly related: Hench is about the people who work at a temp agency where they get to be henchpeople for supervillains (who periodically get killed by superheroes, forcing the henchpeople to seek further temporary employment). https://t.co/NX29HKHSNg

2021-03-25 16:58:33 @GiorgioPatrini No one: Absolutely no one: ML researchers: Our method allows users to put eyes anywhere on a face.

2021-03-24 22:32:49 @atroyn dude - those baxter books are amazing. I loved the Manifold series - genuinely blew my mind. Also feels a bit like Baxter anticipated the existence of Elon Musk and wrote characters similar to him before he started

2021-03-24 22:21:53 @YungCoconut This is great, make more timelapses!

2021-03-24 21:58:33 RT @sensityai: BREAKING: a confession video of Yangon’s Chief Minister Phyo Min Thein was aired on Myanmar TV. The unlikely confession has…

2021-03-24 16:26:27 @pujaarajan @sarthakgh @OpenAI Actually, I moved on from OpenAI a while ago to work on a new AI organization with some colleagues. (But this also helps prove your point - it's a much different role to pure comms, etc)

2021-03-23 22:07:41 @GaryMarcus Nick Bostrom already wrote the first half of this story! https://t.co/yvGDUjPiJc

2021-03-23 20:37:53 @gwern Ah, I'm specifically talking about using ASR to convert the audio into text, then process over that. There is a lot of demand for more text data from more places, I think. Raw audio does seem further away in terms of demand

2021-03-23 19:23:13 @gchrupala It might be that you want to have a subslice of your data be 'gold standard' transcriptions and then you try and train a model that learns the error diff between ASR and gold standard, as one way to inject a bit more high quality stuff

2021-03-23 19:22:31 @gchrupala Because employing humans to accurately transcribe 120 billion words is probably unfeasible

2021-03-23 19:22:09 @WordOfZac asr = automatic speech recognition, so listening to an audio stream and successfully transcribing. NLP = natural language processing, so basically NLP is stuff you do with text, asr is one way to create text

2021-03-23 19:21:15 @ClementDelangue ooooh I wasn't aware of this. I'll add a link to this week's Import AI, as I'm talking about this data stuff a bit there.

2021-03-23 19:17:06 I also don't have elegant solutions here. Probably the best thing is to hope we get automatic speech recognition (ASR) improvements in both quality and cost, so we can more easily point ML models to audio streams, then automatically construct larger datasets (w/errors from ASR)

2021-03-23 19:16:09 Source for 16k here: https://t.co/dBfI17oRm1 (though this number could skew by many thousands and it'd still net out to verbal being v significant).

2021-03-23 19:15:23 It's worth thinking about this, because it ties up with some of these issues of cultural bias in machine learning. Purely by training on text (even if you proactively filter for certain kinds of representation), you're cutting out vast swathes of population.

2021-03-23 19:14:30 Thinking about text - Internet has a lot of text, but tiny compared to speech. Avg person says 16k words a day. Pop NYC 8.5m. Call it 7.5m to account for non-verbal children, adults. 7.5m*16k = 120 billion words. Comparison: Spotify podcast ASR dataset: 600 million words.
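The back-of-envelope in the 2021-03-23 19:14:30 tweet above works out as follows; a minimal Python sketch whose only inputs are the ballpark figures the tweets themselves state (16k spoken words per person per day, roughly 7.5 million speaking New Yorkers, a 600-million-word Spotify podcast ASR dataset):

```python
# Rough arithmetic from the 2021-03-23 thread above. All inputs are the
# tweets' own ballpark figures, not measured values.
words_per_person_per_day = 16_000        # cited estimate of daily spoken words
speaking_population_nyc = 7_500_000      # ~8.5M residents, minus non-verbal people
spotify_podcast_asr_words = 600_000_000  # size of the Spotify podcast ASR dataset

daily_spoken_words_nyc = words_per_person_per_day * speaking_population_nyc
print(f"{daily_spoken_words_nyc:,} spoken words per day in NYC")           # 120,000,000,000
print(f"= {daily_spoken_words_nyc / spotify_podcast_asr_words:.0f}x the "
      "Spotify podcast ASR dataset, every single day")                     # 200x
```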
2021-03-22 20:16:23 @iandanforth I think it works pretty well for software domains, but it continues to have a hard time with the physical world. Stuff like recommendations and logistics and routing are starting to get meaningfully improved by RL

2021-03-22 05:24:00 Sun and Sunset in Yosemite. There were wild turkeys in the underbrush and vultures circling and, as dusk arrived, the glorious chattering of all the valley birds. #VOTENATURE2021 https://t.co/BPDKYC7DDL

2021-03-22 05:13:29 @robinhanson The movie The Adjustment Bureau deals with a few of these ideas (pretty good, worth a watch, kinda a classic B+ movie)

2021-03-22 05:09:55 @avitaloliver Yeah, I think that's a very valid point, and it highlights some of the challenges. In places like policy and PR you try to present a singular view/face, whereas in other places you want to have as many different experiments/initiatives as possible

2021-03-21 23:57:03 @gwern Very apt! Supplying factories is a very opinionated act.

2021-03-21 23:56:17 @vinayprabhu I think the broader point you're raising is interesting - at what point do we need to think about what generic infrastructure is doing? And who thinks about it? Due to the @timnitGebru and @mmitchell_ai firings, Google's thinking in this area is now (sadly) externally illegible

2021-03-21 23:55:24 @vinayprabhu I think the scale aspect matters - this is more compute in more parallelized and useful form than what you get in Colabs. It's also interesting that this is a very public project and Google hasn't really made its thinking legible.

2021-03-21 23:19:52 @Nearcyan Yeah, that's what I'm inferring. But that argument nets out to "we don't approve of certain things made in factories, but we're happy to give people access to unrestricted factories". Which kind of suggests Google wants to have it both ways here, which feels odd.

2021-03-21 23:14:32 Here, Google is supplying some of the compute for the GPT-3 replication. But it's not clear if this means Google's stance as an organization is that these models should be released, or whether it thinks it's better that it supports release via proxy orgs it supplies with compute.

2021-03-21 23:12:54 One of the more confusing aspects of modern AI is that companies who won't release certain models (or necessarily discuss the ethical issues of release/non-release) will supply compute to people doing releases. https://t.co/bMw7RpRpKE

2021-03-21 06:58:16 Yosemite Sunset - the birds began singing after the sun disappeared over the ridge.
The light dimmed slowly as the sun passed over the central valley and then down into the Pacific and, a little later, the world became at peace in the dark. #VOTENATURE2021 https://t.co/tXkp1EqHd4

2021-03-19 23:23:47 @jamescham @avicgoldfarb @robmay I miss your excellent dinners! 2022 will be the year of dinnering.

2021-03-19 17:31:18 @kevinroose bossware = great term

2021-03-18 22:05:15 @Conaw @RoamResearch A 2D schematic of a high-dimensional mind palace

2021-03-16 23:53:59 @Smerity @jamescham @TheRegister @katyanna_q @diodesign might have a link to a style guide, or @iainthomson

2021-03-16 19:08:21 I'm hoping that the 2020s see more leaders in technology 'go public' about the immense changes that are en route. Instead of pretending this stuff isn't happening, we should loudly explain that it is happening and propose specific ways to deal with it. Hope we see more!

2021-03-16 19:07:02 AI and other technologies are going to have such a massive effect on the world we need to rethink economic policy and move taxation to capital and away from labor. Here's one highly specific approach from @sama https://t.co/kDr4t4LTxn https://t.co/JkvMclgUI4

2021-03-16 00:03:32 @mattsheehan88 have you written one?

2021-03-15 18:33:29 @ch402 big fan of this rough note format! Interesting ideas. You've inspired me to do my own at some point

2021-03-15 18:33:03 RT @ch402: Neural networks interpretability has a lot of analogies to neuroscience. But there are also several advantages that make it a mu…

2021-03-15 16:31:36 @ylecun @togelius Under capitalism, it'd be inappropriate (and unwieldy) to get Facebook to externalize its political factions in a form where people could vote for them. Therefore, that suggests to me some decisions re AI should be able to get voted on, so needs to be set by government, etc

2021-03-15 16:30:39 @ylecun @togelius Best way to highlight trickiness of this = - Facebook has multiple internal factions with different views on aspects of AI deployment - These groups are opaque from the outside, so though they are vying for internal power, we (as Facebook's users), can't vote for them

2021-03-15 15:39:41 ImageNet gets redone in response to ethical concerns. +1 to idea that we'll see emergence of P2P/underground AI dataset economy, where I imagine the unblurred ImageNet will circulate. https://t.co/0DxGF4gOLO https://t.co/mVHGwxN3Nv

2021-03-15 14:16:19 @togelius AI politics implies the possibility (or highlights the absence) of elections

2021-03-14 21:41:28 Also covered: staying focused and avoiding distractions, how synthetic media will change culture via emergence of hybrid media creations, and more!

2021-03-14 21:40:06 Had a fun time discussing AI&

2021-03-14 01:38:22 @rupertg very well put! Do you think Intel could work a lot more closely with people on specific qs? (kind of like an extension of the ultrabook strategy)

2021-03-14 01:37:44 RT @rupertg: @jackclarkSF It really is all about the SoC. There is nothing magic about ARM, it's about v tight integration and no off-chip…

2021-03-14 01:26:03 (As a couple of people point out, it's more that Apple did the very impressive SoC work, with the ARM stuff being less important. I used to cover ARM for pcs back in 2010s, and I think past jack would have been surprised to see a top-of-the-line apple ARM machine in 2020)

2021-03-14 01:24:46 @Aelkus @togelius that's good feedback, shoulda also called out the huge SoC stuff.
I think the ARM thing is still a bit surprising as I remember covering ARM a lot in 2010 when iirc they were taking a first swing at supporting broader class of machines

2021-03-14 01:16:17 @Freemanalan I am about to try and do a multi-screen environment, though, so I'll let you know how I fare

2021-03-14 01:15:53 @Freemanalan I haven't - this is basically my newsletter machine (https://t.co/lgRURdEUEm) - it lives in a backpack and its main job is to show me lots of arXiv PDFs, support a ton of Chrome tabs, and be my main all-purpose wordpressing system. I do some light coding on it but nothing huge

2021-03-14 01:14:15 Something that also feels odd: While using this machine I think, about twice a week, 'good lord, I can't see how Intel can really match this'. Maybe if we see reemergence of Wintel, but broad SKU range makes it hard to see. Very bullish on ARM chips for normal PCs now.

2021-03-14 01:12:39 After a few weeks of using an M1 MacBook Air, here's my review: - Best laptop I've ever had - The keyboard works - Battery life is ridiculously good. - Most software generally feels snappy - Software support for M1s is good and growing rapidly

2021-03-13 08:32:43 @bigblackjacobin As We May Think by Vannevar Bush (1945) is good https://t.co/IsLxIKSO1F

2021-03-13 00:13:41 @MichaelFriese10 This is fantastic

2021-03-11 19:38:54 @chrisalbon I started doing the same a few years ago - included a 'why does this doc exist' section at top, though shifted to using 'purpose of doc' more recently.

2021-03-11 02:31:36 @GretchenMarina A+ cloud tweet, ty

2021-03-11 01:05:30 @draecomino Ethereum layperson here. Can you explain how the gas prices are gonna go down enough to let your FAANG replacement thing happen? It seems like the underlying network is expensive and a bit slow, but I haven't dug in a ton so could be wrong

2021-03-09 18:54:53 In light of the HuffPost layoffs, happy to chat to any journos thinking about moving out of journalism into another field. It feels really scary at the time, but there's really interesting stuff on the other side. DMs open!

2021-03-08 22:51:18 RT @ai4allorg: It's AI4ALL's 4th birthday! Since our founding, we’ve impacted over 15,000 people in all 50 states and globally with approac…

2021-03-07 20:24:59 RT @ch402: When we make assumptions about what features exist in neural networks, they often prove us wrong. It turns out that 4% of CLIPs…

2021-03-07 00:05:41 @Lucas_Shaw You could say you had a quick bite of quibi

2021-03-05 01:02:18 @baykenney That was fast

2021-03-04 23:21:33 @erikbryn (from: @BartekMoniewski https://t.co/6kPWB5mLYR )

2021-03-04 23:20:52 @erikbryn https://t.co/5jR0KnwZvx

2021-03-04 21:17:53 RT @ch402: I've spent a lot of time investigating what goes on inside neural networks. Working on this project was the highest density of m…

2021-03-04 19:09:10 Talk is cheap, money isn't: Along with the US government talking more about AI (see Congressional mentions of AI in latest @indexingai ), it is starting to spend more money on civil applications of tech, rather than primarily defense applications. https://t.co/FxZu7q0gS4

2021-03-04 07:48:03 @MichaelTrazzi Done!

2021-03-04 07:40:33 @MichaelTrazzi Still happy to chat : ). My response rate varies wildly from 30 seconds to up to a month*, so you're very much in distribution! * This is not a useful response window and it's something I try to work on, but I also tend to have a lot of plates in the air.
2021-03-04 06:15:56 @kevinroose They will be WEIRD

2021-03-04 05:01:10 @maria_axente @StanfordHAI @Sam_L_Shead @RobMcCargow @AnandSRao @GolbinIlana thank you! happy to answer any qs etc. Huge credit to @dzhang105 who put in a vast amount of work on it this year!

2021-03-04 02:42:15 @balajis @AriDavidPaul @pdavison @andrewchen And also when people know they're being recorded/transcribed they always tend to self moderate more and be less loose in conversation, so it might mess with the vibes

2021-03-04 02:41:24 @balajis @AriDavidPaul @pdavison @andrewchen Former journalist here - I suspect one of the attractions of clubhouse is that the stuff isn't naturally transcribed, which makes it harder to get stuff quoted out of context (see recent kerfuffle caused by people tweeting quotes from CH)

2021-03-04 01:21:01 Here's @indexingai steering committee member @terahlyons discussing the lack of data around diversity in AI - one of the problems we encounter with the Index is the general lack of data here (though this year's report has more than before), which makes it harder to make progress. https://t.co/aMdhmapjLD

2021-03-03 23:16:03 @k8em0 @russellbrandom @PennStateLaw @amatwyshyn amazing idea/gift. bravo to you

2021-03-03 21:14:36 This is my general takeaway after having spent 4/5 years doing the AI Index - we've gone from "huh this research might be commercially viable" to "oh, this is obviously of huge commercial value" in a very short space of time. It also means the stakes are higher.

2021-03-03 21:13:49 Axios on the latest @indexingai : "Artificial intelligence is becoming a true industry, with all the pluses and minuses that entails, according to a sweeping new report." https://t.co/xYtS47zC21

2021-03-03 19:48:42 One measure of the vast AI journalism ecosystem of China: @indexingai report came out less than 9 hours ago and this Chinese site has a very detailed, translated article up, which requires reading through vast amount of report to write. https://t.co/KI2l2Niz5y

2021-03-03 19:31:40 @strwbilly One bit of feedback - tons of labs have been working on sparsity for multiple years (not just Numenta). Very important area!

2021-03-03 19:30:27 @strwbilly Jeff Hawkins has been on this thesis for some time! Spent several months reporting a feature on him in 2014. Curious about what progress they've made in recent years (am reading thousand brains now) https://t.co/3NqcsFvI9w

2021-03-03 19:19:22 @edzitron Ed it's too early for these truths save this for late night twitter

2021-03-03 19:16:01 RT @Fermat15: Based on personal experience, AI may be the single most bipartisan issue on the Hill. Always had great support even while fac…

2021-03-03 17:55:10 Another datapoint from @indexingai - mentions of AI in the U.S. Congressional Record are increasing dramatically

2021-03-03 16:57:07 @geomblog Yes - part of why I love doing the AI Index is that we're committed to measuring things annually/close to annually, so I hope in a few years we'll be able to assess this data and figure out what was signal and what was noise. Brick by brick, I'm hoping we can create useful data!

2021-03-03 16:54:53 @Ana_Chubinidze_ @StanfordHAI @indexingai Maybe I should have said "pleasantly surprised" - but I know what you mean, it's somewhat intuitive. I hope conferences learn from the lessons here and try to do hybrid virtual stuff in future - seems to help for accessibility. Glad it was helpful for you!
2021-03-03 16:53:54 @geomblog you do raise a good point which is that we should start tracking conference prices on a year-by-year basis - that could be a really interesting datapoint to collect! (cc @dzhang105 )

2021-03-03 16:53:28 @geomblog yes, different conferences have slightly different methodologies for counting what qualifies as attendance (where some is activity and some is reg), so there's some merit to idea that cheap tickets = higher # of no shows, but I don't think that'd discount the repeated effect

2021-03-03 16:48:40 @drfeifei @jcniebles @MPSellitto @erikbryn @terahlyons @yshoham @RayPerrault @dgangul1 @StanfordHAI Also massive credit to @dzhang105 ! Daniel joined us on the Index in 2020 and has been doing amazing work.

2021-03-03 16:47:50 RT @drfeifei: Drum roll! The freshly released #AIIndex2021 from a group of leading scholars &

2021-03-03 16:45:12 Also, this trend isn't a quirk - it shows up (more dramatically) in some of the smaller conferences, like ACL, ICLR, etc. https://t.co/rgiJYZNa9N

2021-03-03 16:43:59 @victorstorchan @vkokoszka Oh this is a great question! Because it's currently hard to find data for this that lets us quantify this over time. As we note, we think making more of the diversity stuff legible through data gathering is really, really important. Tons of work to do!

2021-03-03 16:43:11 Note: IROS had a lengthy virtual conference which lasted for months which skewed its attendance numbers out of distribution, hence dashed line. Still, a surprising silver lining of pandemic is it may have improved accessibility.

2021-03-03 16:42:17 One surprising finding from the @StanfordHAI @indexingai report: COVID was... good for conferences? AI conferences have had physical capacity issues

2021-03-03 16:38:49 @ggdupont @indexingai ah, I think we're somewhat similar - I'm optimistic about it (especially if they're genuine collaborations, rather than people rotating affiliations). Feels useful to track as a factor in the space, I wonder how the trends will evolve over time

2021-03-03 16:09:33 @ggdupont @indexingai A big chunk of them are going to be skewed by collaborations, though many of them will have academics on the paper and it'll say *work carried out at $company It also seems notable because industry has different standards to academia for research, so measuring skew feels useful

2021-03-03 15:49:13 @NurAhmedB This was really interesting research! Thanks so much for producing it and glad to amplify if

2021-03-03 15:48:26 One way to visualize shift from academia to industry for AI development - tracking conference publications, where industry now 20%/30% of conf pubs. @indexingai report (out now! as of a few hours ago) has a bunch of datapoints around this. https://t.co/DTsx5TKNZr

2021-03-03 15:45:58 RT @NurAhmedB: The latest #AI Index (@indexingai) report is here. Excellent work by @erikbryn , @jackclarkSF and others. The best report so…

2021-03-03 02:06:43 @chaykak I've been writing a newsletter for several years and the relationship to my readers has been life-changing.

2021-03-02 20:22:09 @lawrennd @Sam_L_Shead @nathanbenaich @BVLSingler @samim @OScharenborg @iamtrask @NandoDF @DaniCMBelg @egrefen @verena_rieser @j2bryson @NoelSharkey @JohnDanaher @DameWendyDBE maybe this is how physicists felt in the late 1930s and it'll certainly be how the weird biohackers of today feel in about 10 yrs time

Discover the AI Experts

Nando de Freitas Researcher at DeepMind
Nige Willson Speaker
Ria Pratyusha Kalluri Researcher, MIT
Ifeoma Ozoma Director, Earthseed
Will Knight Journalist, Wired