François Chollet

AI Expert Profile

Nationality: 
French
AI specialty: 
Deep Learning
Current occupation: 
Researcher, Google
AI rate (%): 
59.11%

TwitterID: 
@fchollet
Tweet Visibility Status: 
Public

Description: 
François is a deep learning researcher at Google. He developed Keras, the open-source deep learning library. François attended the AAAI symposium, where he discussed abstraction and analogy with other artificial intelligence experts. He occasionally shares his code on social media, as he did with the dual-encoder approach. François believes that to build a good application, you should build what you yourself would want to use. He advocates for good practices in artificial intelligence, particularly around the creation and organization of data.

Recognized by:

Not Available

The Expert's latest posts:

Tweet list: 

2023-05-21 01:23:06 What you can do is the intersection of what you have the potential to do (that's huge!) and what you think you can do (typically much smaller).

2023-05-21 01:21:49 If you want your kids to develop their abilities faster, the most effective trick I've found so far is to help them gain confidence in themselves. The effect size is ridiculous.

2023-05-20 17:05:55 Before multiple accounts run out of a drab office building in the suburbs of Saint Petersburg start replying "what's your proof? this is a conspiracy theory!" -- Le Pen being financed by the Kremlin is an extremely well documented fact. Going on for years. https://t.co/1jszCa6JZi

2023-05-20 16:58:01 The Kremlin's involvement in local politics in e.g. France, Germany, Brazil or India is not limited to propaganda. It also involves direct campaign financing for far-right candidates. For instance Le Pen in France is partly financed by Kremlin-affiliated banks.

2023-05-20 16:56:00 Of course, in order to run propaganda campaigns, you need eyeballs, so not all of their content is a propaganda payload -- there's also a fair amount of engagement farming designed to collect receptive followers. Often around stock trading tips, crypto/NFT trading tips, etc.

2023-05-20 16:52:55 Always the same coordinated messages, day by day. Themes: Ukraine= evil. Russia= righteous defender of conservative values. The West is in decline because of immigration and degeneracy. The Covid vaccine is evil. Crypto will save you. Plus support for local far-right candidates.

2023-05-20 16:48:44 The Kremlin doesn't spend its social media propaganda budget purely towards the US audience. It maintains a very large network of accounts that cater to audiences in France, Germany, Italy, India, Brazil, and many others. These accounts look like this (for example) https://t.co/hd3C3d8ygL

2023-05-20 15:49:08 Glad you found the book useful! Thanks for the kind words https://t.co/KnFgoCqrQz

2023-05-20 03:33:47 Love this city! https://t.co/4bRpHCldI8

2023-05-20 01:49:49 @MNateShyamalan Deep learning was worth it after all

2023-05-20 01:34:28 @clmt Congrats!

2023-04-24 01:58:27 RT @duncanmgibb: We've all seen the stories about how Paris is quickly getting rid of cars. The impact on local air pollution is simply ma…

2023-04-23 18:22:59 @pfau If you can describe what you want to do in a precise step by step fashion, you've written a program. Translating it into the syntax of a programming language is the easy part. Thinking clearly about the steps is the hard part.

2023-04-23 18:21:21 @pfau Programming. You just invented programming. https://t.co/tDUUcbqFSe

2023-04-23 18:17:25 @pfau They're literally describing the usual programming workflow (when using SO/etc as a code snippet reference). Why do people insist on saying "ChatGPT wrote this program" when this is not at all what's happening? Do they also say "my compiler wrote this" or "SO/Google wrote this"?

2023-04-23 02:43:56 I was paying for Twitter Blue for the "undo tweet" feature, but now I'm starting to feel embarrassed by the checkmark. Products that win are those that make their users proud of being users. Twitter Blue is an astounding case study in how *not* to market a product.

2023-04-21 22:38:55 @skalskip92 2.5M devs, aka 60-65% of the ML community -- including most of the services you use everyday, like Twitter, YouTube, Snap, TikTok, Maps, etc.

2023-04-21 21:20:53 If you're a Keras user, star the repo on GitHub: https://t.co/eu4PEy1PXg

2023-04-21 18:23:18 In general, language acquisition is better understood from the lens of toddlers being agents trying to achieve goals in their environment. Using adult-level psychology concepts is not super relevant. Nor are children "statistical parrots" trying to predict the "next token"...

2023-04-21 18:20:29 I think the "no" phase is just that -- mode collapse into your most effective behavioral strategy. Until its effectiveness wears off due to over use, and you grow out of it.

2023-04-21 18:18:06 And so they start relying on "no" more and more (+ related behaviors). Because it works. When you discover a really good tool, an almost inevitable pitfall is mode collapse: it becomes your go-to (unconsciously in this case). It becomes the only lever you pull, all the time.

2023-04-21 18:14:27 At some point they learn to say "no" purposefully. And this turns out to be an exceedingly effective strategy for shaping their environment to their liking, because while expressing what you want is hard, saying "no" is easy and gets you the same affordance power (if not more).

2023-04-21 18:12:36 Children acquire language primarily as an *act* -- speaking is something they can do to influence their surroundings and achieve goals -- like walking or reaching for things. It starts before actual words, e.g. with crying on purpose in order to get attention.

2023-04-21 18:09:46 Around age two, children start using "no" a lot (alongside a range of opposition behaviors), often reflexively. The pop science explanation for this relies on adult psychology concepts -- a desire for independence or a need to assert their own personalities. Which feels off...

2023-04-21 17:39:32 I see many folks in the replies asking for a definition of intelligence. Yes, it is 100% necessary to rigorously define intelligence before you can judge AGI progress or lack thereof. Here's the one I've been using: https://t.co/djNAIV0cXc Under this definition, ~0 progress.

2023-04-19 22:29:28 The tech industry prides itself on being data-driven, but for the most part it is moved by narratives, and those narratives are more often than not untethered from actual data.

2023-04-19 19:04:18 @molly0xFFF Celebrities who were promoting NFTs in 2021 made money on it (usually they weren't even buying the NFTs with their own money in the first place, they were getting paid to promote specific series). It's the gullible marks who bought them that lost everything.

2023-04-19 05:10:22 RT @luke_wood_ml: Excited to share the new KerasCV Object Detection guide! This guide has been in the works for almost a year now! https:…

2023-04-18 15:34:39 Really happy with how the object detection workflows in KerasCV have turned out. Both simple and easy to customize in depth! The API brings everything together, from bounding box-aware image augmentation to model evaluation. https://t.co/wdBCYeYNGf

2023-04-18 04:14:58 RT @fchollet: A new Keras starter guide for the Vesuvius Kaggle competition: https://t.co/GJq6Ca0g07 Clean, fast, efficient, runs fully on…

2023-04-17 23:36:17 RT @jbischof1: Our updated guide for object detection in KerasCV is now live! Here's a cool demo: we generate cat pictures from our Stable…

2023-04-17 16:04:14 @ClementDelangue @github Congrats!

2023-04-17 14:48:29 A new Keras starter guide for the Vesuvius Kaggle competition: https://t.co/GJq6Ca0g07 Clean, fast, efficient, runs fully on the Kaggle P100 instance without OOM, and gets you 0.11 on the leaderboard! By @ariG23498

2023-04-15 15:58:12 The quickest way to gain respect for the implementation choices made by a complex system is to try to solve the same problems yourself from scratch :)

2023-04-14 20:01:14 Never sell humanity short.

2023-04-14 19:58:34 There are two opposite kinds of perspectives on AI flying around lately: the humanistic one, focused on how AI can be used as a tool to help us do more, imagine further, create faster -- and the anti-humanistic one, focused on human obsolescence &

2023-04-14 19:16:20 The only rational move is to ignore all such pronouncements, much like you should have ignored all such pronouncements in 2021 and 2022. News worth paying attention to is not deliberately formulated to induce strong (negative) emotional responses.

2023-04-14 19:16:18 Really striking how every AI hype tweet is explicitly trying to induce FOMO. "You're getting left behind. All your competitors are using this. Everyone else is making more money than you. Everyone else is more productive. If you're not using the latest XYZ you're missing out."

2023-04-13 03:50:59 "Google" is, to this day, one of the most popular search queries on Google (usually #1), which says something worth noticing about how people use technology. Did you know you can just type queries into your browser address bar instead of going to the Google homepage first?

2023-04-12 18:45:49 @srush_nlp @egrefen Just ask AutoGPT to summarize it for you. No need to understand what it is or how to use it if it can do it for you.

2023-04-11 14:30:28 RT @RisingSayak: Last week @hugginface and #keras completed yet another sprint. This time we focused on DreamBooth Keras, which leveraged t…

2023-04-10 03:42:18 My hope is that we'll use AI to lift up the human spirit, not to crush it. The people who are so eager to talk about how humans will soon be obsolete make me sad. Humans are the source of the ground truth of the models being trained, and their only audience. AI is our creation.

2023-04-10 01:10:36 @j_mora That's still training on human generated data. Data creation and data curation are on the same spectrum. No recursive self improvement here.

2023-04-10 00:59:00 @LechMazur That's not recursive self improvement, since you're still relying on human minds to provide ground truth for you. You're still tethered to human outputs. Data creation and data curation are on the same spectrum.

2023-04-10 00:45:19 @d3nm14 AlphaGo is not a curve. It has a Go simulator and a search process attached to it. You can use search to mine certain types of spaces (those that can be generated programmatically) and fit a curve to that.

2023-04-10 00:42:45 @jfischoff And if you are additionally training on a new human generated signal such as likes, you're injecting brand new information.

2023-04-10 00:40:51 @ekdnam Query it and train a model on the outputs.

2023-04-10 00:40:28 @jfischoff You can add regularization to the first curve to improve it. But you could have done that in a more direct way than training a new model. And regularization has fast diminishing returns -- there can be no recursive improvement by adding regularization.

2023-04-10 00:35:33 @MilesGarnsey GANs fit a training dataset in a roundabout way. All the info in a GAN comes from the original training data. It's still curve fitting, like diffusion models.

2023-04-10 00:33:55 That's because these models are big curves fitted to a dataset. If you use one curve to sample new interpolated points and you fit a new curve to that, you are only approximating the original curve. The new curve will only generalize, at best, as well as the first one.
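The "curve fitted to another curve" argument above can be made concrete with a toy numerical sketch (my illustration, not from the thread), using polynomial fits as stand-ins for models: a "student" trained only on a "teacher's" outputs can at best reproduce the teacher, never exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher" model: a curve fitted to real (human-generated) data.
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + rng.normal(0, 0.05, size=x.shape)
teacher = np.polynomial.Polynomial.fit(x, y, deg=5)

# "Student" model: a curve fitted only to points sampled from the teacher.
x_synth = np.linspace(-1, 1, 200)
student = np.polynomial.Polynomial.fit(x_synth, teacher(x_synth), deg=5)

# The student merely approximates the teacher -- no new information enters.
x_test = np.linspace(-1, 1, 101)
err_vs_teacher = np.max(np.abs(student(x_test) - teacher(x_test)))
assert err_vs_teacher < 1e-6
```

This is distillation in miniature: useful for compressing a model, but the student's generalization is bounded by the teacher's, which is the thread's point about "recursive self-improvement" from model-generated data.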

2023-04-10 00:32:15 The notion that you can use data generated by a model to train a bigger, better model is a nonstarter. Sure, you can use a larger model to train a smaller model -- that's distillation. But it won't learn anything new. There's no recursive improvement.

2023-04-09 20:34:08 @AdamSinger @kevin2kelly It's a social and political problem. Most people would choose to have kids if they were able to. We've engineered a society that makes raising kids as difficult and expensive as possible.

2023-04-09 16:36:43 @johnwic22870744 Yes, definitely. Would be happy to support you with that as part of KerasNLP if you're interested in contributing this.

2023-04-09 16:35:40 We owe so much to the people who invented coffee

2023-04-09 00:25:25 ...but there's also the fact that in order to teach, you have to project yourself into the way the other person thinks. When that person is completely unfamiliar with the topic, it gives you a nice reminder of what it's like to look at your domain through fresh eyes.

2023-04-09 00:23:58 In more ways than one. Of course, there's the fact that teaching requires you to have a mental model of the topic that's sufficiently clear and sufficiently simple to be *communicable* -- it forces you to really understand what you're teaching...

2023-04-09 00:21:09 Teaching something is always a learning opportunity.

2023-04-08 21:25:02 I vividly remember the ~2014 one. 80% unemployment rates were supposed to be around the corner. If machines can beat people at Jeopardy, who's ever going to need a human doctor anymore? I wrote a post debunking the whole "automation causes mass unemployment" thesis at the time.

2023-04-08 21:21:32 "You're fired!" -- with the progress of automation, human labor is now outdated. Machines are replacing you. Such panics have been a regular occurrence since the early 20th century. Of course, in reality, that's not how the economy works. But it's a message with viral potential. https://t.co/w15SCVvutc

2023-04-08 02:46:32 I think there's a lot we can learn from IBM Watson, good and bad.

2023-04-08 02:42:03 The first "AI is coming for you job" mass panic I experienced first-hand was in 2014 and was based on the same premise -- that human doctors would soon be a thing of the past. It was triggered by the now-defunct IBM Watson. https://t.co/HsYAl3GjRT

2023-04-07 21:30:17 @alepiad I am. I talk about my view of Searle's argument here: https://t.co/A3BZX1J4Fq

2023-04-07 21:26:50 @Zedmor See e.g. https://t.co/7otvdJVUVy "Our experiments reveal that while powerful deep models offer reasonable performances on puzzles that they are trained on, they are not better than random accuracy when analyzed for generalization."

2023-04-07 21:25:23 @Zedmor The system is out there. You can just interact with it and test the hypothesis by yourself! You see an immediate correlation between question familiarity &

2023-04-07 21:23:45 @Zedmor Exactly. Except with the opposite conclusion. Every study so far that tries to test GPT-N for actual generalization has found that it scores no better than random on genuinely new problems -- brand new coding problems in particular. This is why it can't do ARC either.

2023-04-07 21:02:36 This is an extremely common and enduring misconception about the Turing test. It was never an actual test! Turing introduced it purely as a philosophical argument -- a thought experiment. https://t.co/xkLkY1bQTa

2023-04-07 20:47:09 @Plinz If you test it on exam questions it has exactly seen before, then sure. Most exam questions we give humans are taken from question banks. It still works as a testing system because no human can come close to reading through and memorizing all of the relevant question banks.

2023-04-07 20:39:32 Even a hashtable can pass the bar exam given enough training data. But you probably don't want to be represented by a hashtable.
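The hashtable point can be made literal with a tiny sketch (my illustration; the question bank is hypothetical): pure lookup scores perfectly on memorized questions and zero on anything novel, which is why benchmark scores alone don't measure intelligence.

```python
# A "hashtable exam-taker": perfect recall on memorized questions,
# no capability at all on genuinely new ones.
question_bank = {
    "What is consideration in contract law?": "Something of value exchanged between parties.",
    "Define mens rea.": "The mental state of intent behind a criminal act.",
}

def hashtable_taker(question):
    # Pure memorization: look the question up, or fail.
    return question_bank.get(question, "no idea")

# 100% on questions it has already seen...
assert hashtable_taker("Define mens rea.") == "The mental state of intent behind a criminal act."
# ...and 0% on any novel question, however simple.
assert hashtable_taker("A novel fact pattern about easements?") == "no idea"
```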

2023-04-07 20:37:07 @tdietterich Yes, adaptation is skill acquisition. But you cannot measure it without controlling for what the test taker already knows, i.e. you have to take into account "generalization difficulty", which is a function of both the task and the test taker's preexisting knowledge.

2023-04-07 20:28:28 Skill, on its own, is not a sign of intelligence. But skill acquisition efficiency over arbitrary skillsets is. And that looks very different in the context of human test takers vs. machines.

2023-04-07 20:27:06 If you meet a 30-year-old human who is exceptional at chess, you can assume that they are intelligent, because you know they must have developed this skill with less than 30 human years of practice -- they were not born with a brain already specialized in chess.

2023-04-07 20:23:36 Pretty much every human test (except memorization tests like spelling bees, or pure algorithm execution tests like mental multiplication) tries to gauge your ability to handle a *new* situation, and relies on this fundamental assumption.

2023-04-07 20:21:34 Don't score AI using tests designed for humans. In particular because, with humans, the default assumption is that *they haven't already seen* the content you're giving them. With a LLM, the default assumption should be that, *if it's on the Internet, it's already been memorized*

2023-04-07 19:22:36 Showcased during the Keras community meeting: a model wrapper that makes your Keras model risk-aware. https://t.co/AceWqtpH64

2023-04-07 19:18:53 I'd love to buy a sizeable light field screen and frame it on a wall and make it look like a window into another world (AI generated and morphing continuously over time, maybe in 3D, or at least with parallax layers). Does that exist?

2023-04-07 19:00:49 @juancopi81 @LambdaAPI @huggingface @mervenoyann Congrats!

2023-04-07 19:00:09 RT @juancopi81: My model (Riffusion-Currulao) was one of the winners in the Keras DreamBooth sprint - Wild Card category!! Thanks to @Lam…

2023-04-07 18:35:08 RT @mervenoyann: Our Keras DreamBooth sprint has come to an end, and here are the winners in each category I will post them in this C…

2023-04-06 19:10:43 Featuring some neat Keras patterns like creating RNNs from custom RNN cell layers... https://t.co/f6NxrvD4xo

2023-04-06 19:06:03 This is an implementation of Temporal Latent Bottleneck Networks, proposed in this paper: https://t.co/cGZl2teF1e It's a dual-stream approach that combines the strengths of both RNNs (compressed sequence representations) &

2023-04-06 19:03:23 Can you combine Transformers and RNNs to improve generalization? Find out in this https://t.co/m6mT8SaHBD walkthrough by @ariG23498 and @halcyonrayes: https://t.co/Ii1vk33SqA

2023-04-06 15:51:05 RT @kyledcheney: DOJ Update on Jan. 6 cases: -1,020 charged -339 charged with assaulting/impeding police (107 w a dangerous weapon) -533 h…

2023-04-06 15:42:49 The faster technology moves, the faster humans adapt to it and make it theirs. What was amazing last year becomes mundane, what was weird and gimmicky becomes practical and useful. Extreme adaptability is the hallmark of human intelligence

2023-04-06 00:31:25 Dreams would not make very good stories. The best stories are 80% formulaic, and leverage that last 20% to produce a fresh take on the formula.

2023-04-06 00:29:28 To be productive, creativity must operate within a highly structured framework, with only a small window for novelty. Productive creativity is like a writer's imagination, erudite and studious -- while raw creativity is what you see in your dreams, formless and chaotic.

2023-04-05 15:02:55 You can argue that sequence-processing models aren't a good fit for ARC since ARC grids aren't natively presentable as sequences (albeit IMO flattening grids is completely fine and has turned out to work well for many other "sequence of rows" problems, like coding)...

2023-04-05 14:57:35 Yes, ARC is fundamentally 2D. But dimensionality != modality. A 2D grid of discrete symbols can be natively represented as a graph where symbol slots are connected via "relative position" edges. Unlike natural images, there's no need to process ARC grids visually. https://t.co/aLhcbduZ4e

2023-04-05 02:11:49 RT @JeffDean: Paper: TPUv4 system has an optically reconfigurable network to assemble groups of 4x4x4 chips like legos (4x4x12? 16x16x16?).…

2023-04-04 18:16:09 @lacker Is [::-1] a visual concept, or a symbolic concept? Looks pretty symbolic to me. You can visually encode something, but that doesn't mean it's visual in nature.

2023-04-04 18:14:49 @she_llac It certainly does matter in many languages

2023-04-04 17:02:37 A 2D grid of symbols can readily be encoded as a 1D sequence of symbols simply by flattening it (with a line break marker). Since each ARC grid row is very short (between 1 and 30 symbols, *much* shorter than a code line), this is not particularly harmful to context tracking.
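The flattening scheme described in this tweet can be sketched in a few lines (a minimal illustration; the `|` separator token is an arbitrary choice standing in for the line-break marker):

```python
def flatten_grid(grid, row_sep="|"):
    """Flatten a 2D grid of discrete symbols into a 1D token sequence,
    inserting a separator token at each line break."""
    tokens = []
    for i, row in enumerate(grid):
        if i > 0:
            tokens.append(row_sep)  # marks the boundary between rows
        tokens.extend(str(cell) for cell in row)
    return tokens

def unflatten(tokens, row_sep="|"):
    """Recover the 2D grid from the flat sequence -- the encoding is lossless."""
    grid, row = [], []
    for t in tokens:
        if t == row_sep:
            grid.append(row)
            row = []
        else:
            row.append(int(t))
    grid.append(row)
    return grid

grid = [[0, 1, 2],
        [3, 4, 5]]
seq = flatten_grid(grid)
assert seq == ["0", "1", "2", "|", "3", "4", "5"]
assert unflatten(seq) == grid
```

Since each ARC row has at most 30 symbols, the round trip is cheap and the sequence stays short enough for context tracking, which is the tweet's point.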

2023-04-04 17:00:33 In the human testing interface, ARC tasks are visually encoded (by replacing symbols with colored squares), but that's an arbitrary choice of encoding. The ground truth of the data is a JSON string.

2023-04-04 16:59:38 A common misunderstanding about ARC is to believe it's a visual reasoning test. ARC is not visual. ARC tasks are purely discrete and symbolic. Each grid is a 2D grid of discrete symbols (just the same way code is structured as a sequence of lines of discrete symbols).

2023-04-04 01:09:41 @togelius Stream insane games of League

2023-04-03 23:29:11 (For the record, I am pretty bullish on generative AI's upcoming product impact. It's the GPT-AGI takes and the GPT-apocalypse takes that will age poorly.)

2023-04-03 22:57:42 When you keep forecasting the apocalypse and it doesn't happen, what's next? Do you just deny you ever said the things you said, or do you try to make it happen yourself?

2023-04-03 22:48:06 Or closer to the present -- like how people in 2016 predicted that RL applied to game environments would lead to AGI within 5-10 years

2023-04-03 22:42:52 In 2033 it will seem utterly baffling how a bunch of tech folks lost their minds over text generators in 2023 -- like reading about Eliza or Minsky's 1970 quote about achieving human-level general intelligence by 1975

2023-04-03 17:47:03 RT @TensorFlow: Have a cool project you’re building using TensorFlow? We want to highlight you! Submit your project today for a chanc…

2023-04-03 01:20:43 Intelligence expresses itself as play, imagination, exploration, creative analogy-making, passion, and very, very fast learning.

2023-04-03 01:19:24 Many people have a view of intelligence as a cold, hyper-rational optimization process. But watching kids -- the most intelligent beings on the planet -- should quickly cure you of this notion.

2023-04-01 16:02:58 RT @elie: Twitter use Keras-Tuner to hyper-tune their models. Glad they found it useful maybe we should start a list of companies that us…

2023-04-01 01:01:17 Sophistication is making success look easy. Only poseurs need to look like they're working extra hard &

2023-04-01 01:00:10 Real sophistication: having less code, simpler abstractions, and using off-the-shelf components for everything outside of core differentiating value factors.

2023-03-31 20:38:05 So, Twitter open-sourced The Algorithm... I was so close (at least for the toxicity/abuse/nsfw detection models) https://t.co/I4aiSkkcHm https://t.co/5DJUCvUbov

2023-03-31 19:13:43 A good thread on the antivax cinematic universe. https://t.co/b3StlMx2fw

2023-03-29 20:23:34 In programming, you constantly run into situations where you have a choice between "make it work right now with this quick hack" or "do it right". And it's always very satisfying to do things right. Saves you time, too :)

2023-03-29 01:59:22 Personally I'd suggest a 6 month moratorium on people overreacting to LLMs (in either direction) https://t.co/83zJTdixaP

2023-03-29 00:54:15 RT @MaxCRoser: More than 1.2 million children in Kenya, Malawi and Ghana have by now been immunised with the world's first malaria vaccine…

2023-03-27 16:58:00 RT @tomgara: We're at the "senators tweeting like thread guys in the For You tab" phase of the hype cycle

2023-03-27 16:12:08 Keras turns 8 today! Almost missed it :) It's been a wild 8 years. Big thanks to all ~2.5M of you for being part of the journey -- especially our awesome contributor community! You rock! I'm super excited about what we've got coming up for Keras later this year -- you'll see :) https://t.co/olBW6CVCeZ

2023-03-26 22:58:15 Fall in love with the problem, not with any specific solution. And focus on the fundamentals, not on the fashions of the moment.

2023-03-26 22:57:18 AI is a tool that helps us build things that help people. For instance, LLMs might be a better UX to access information -- more adaptive, interactive, engaging. That's great! But it's still just a tool &

2023-03-26 22:54:20 20 year-olds reading this: if you find AI exciting and you're passionate about it, then go into AI -- there's no shortage of great things to build. But don't jump into AI just because it's suddenly hot and you have FOMO. Trend-chasing is counterproductive. https://t.co/8G2QZGQck3

2023-03-26 21:14:09 @vo_d_p It will work when you do `import keras`.

2023-03-26 18:34:58 I'm interested to hear about how you've been using Chat LLMs (like ChatGPT, Bing Chat, or Bard). This is for my own personal education. https://t.co/kTbE6C341d

2023-03-25 23:55:23 The namespace is programmatically generated (which brings tons of niceties and useful guarantees). All of our ecosystem packages are following suit. https://t.co/O9PjfemKER

2023-03-25 23:53:51 To be clear, this is fully backwards compatible -- if you're using `tf.keras` or `from tensorflow import keras`, nothing changes for you. But `import keras` is nicer, and becomes the recommended style. The API you get is the same.

2023-03-25 23:52:18 Vive l'émancipation :)

2023-03-25 23:50:58 Starting with the current Keras-nightly, `import keras` is becoming once again the standard way to import Keras (instead of `from tensorflow import keras`). This will become the standard in the next release, 2.13. The new `tf.keras` and `keras` namespaces are 100% identical. https://t.co/X9wy09vYgS

2023-03-25 21:55:21 Well, newspapers and LLMs have a lot in common. They can talk about anything. They always sound confident and authoritative. And when they cover something you're highly familiar with, you notice a high density of basic mistakes that makes you question everything else they say. https://t.co/uwIzAFEgut

2023-03-25 21:21:09 Can we use some Python magic and GPT-3/4 to make deep learning model debugging much easier? Maybe! https://t.co/dLsAw2wgk8

2023-03-25 19:04:42 I've been using a smartphone daily since 2009, and it still feels like a magical artifact from the future

2023-03-25 18:16:44 People in tech characterize this as "LLMs make factual errors", but that's a misleading framing, implying that LLMs have a model of what they say and this model is sometimes wrong. For a LLM there is no difference between saying something true, something false, or pure nonsense. https://t.co/cGGXMAhEqS

2023-03-25 06:28:01 Marketing and science don't mix well. If you're doing the former, you're not doing the latter.

2023-03-25 00:10:19 What you inherit is just your starting point -- whether that's a legacy codebase, your sociocultural upbringing, or your genetic material. And you do not have to constrain yourself to your starting point. It's the beginning, not the end.

2023-03-25 00:07:31 I don't agree with most of what the people who call themselves "transhumanist" say. But I don't think it's sacrilege or immoral to wish to improve yourself and escape the burden of the human condition.

2023-03-24 22:44:45 @jjjjjjjjjjkiihb It will probably double over the next 10 years.

2023-03-24 22:30:25 Now that's a very contrarian take. While I'll be the first to tell you about the limitations of autoregressive models, I think they'll still be very widespread in 5 years. https://t.co/onKnBUC4aw

2023-03-24 22:24:01 Another space to watch: KerasNLP and KerasCV, which is where most of our new feature development is taking place now. https://t.co/avgzTP0nhg

2023-03-24 22:22:56 TensorFlow 2.12 and Keras 2.12 were released yesterday. Check out the release notes: https://t.co/T8ASI8sBdV Many improvements in Keras, but in particular our new native saving format and the new FeatureSpace all-in-one structured data preprocessing utility

2023-03-24 22:19:42 @pastramimachine Jokes aside, this is just pip downloads -- conda would be a separate count. In general conda download counts are 600x-800x smaller than pip download counts so I don't even include them (for all packages, not just Keras)

2023-03-24 22:12:44 On Tuesday, Keras had once again its highest single-day download count so far (456,000 downloads in a day). This follows new highs from the two previous weeks. https://t.co/0WmMIuepxx

2023-03-24 20:51:58 Anything is worth doing if you do it with passion for the craft and compassion for those it will affect

2023-03-24 00:12:14 2023 will be a year of intensive use-case exploration and discovery for the latest wave of generative AI. 2024 and beyond will be about scaling out the most successful ones into the mainstream...

2023-03-23 23:02:11 RT @gusthema: Hey, TensorFlow 2.12 is out!!! Some cool new stuff: - New Keras model format that enables reloading Python objects iden…

2023-03-23 22:48:22 The reason you're not seeing immediate effects around you is because "99% are using it wrong". So it's 100x, but only for the enlightened 1%, which is only 2x total output in aggregate. Now, just wait until this reaches everyone...

2023-03-23 22:36:33 Just today I've seen 3 mentions of "100x faster" -- last week it was 10x, now it's 100x. This is what it means to hit the Singularity. 1000x next week? https://t.co/eedrqkDJyT

2023-03-23 22:27:15 Are you doing the Vesuvius Challenge on Kaggle? To get you started, I just shared a notebook with a very clean and high-performance data pipeline: https://t.co/bPlvxKxkna IMO data loading and preprocessing is the #1 hassle here, especially with the highly constrained RAM env :)

2023-03-23 06:35:30 I don't know who needs to hear this, but it's possible to rigorously define what it means to be generally intelligent, determine a process to test an AI system for general intelligence, and probe existing AI systems with it (so far no luck).

2023-03-23 00:21:21 By now, at these levels of execution velocity, each of these influencers should be able to single-handedly start multiple successful companies with hardly any employees...

2023-03-23 00:18:39 With the tech influencer crowd suddenly becoming "10x more productive", learning "10x faster", and "getting projects done in minutes that would have taken months otherwise", I'd expect those folks to become insanely successful over the next few months and 10x their earnings.

2023-03-22 21:36:50 Open-source does all the heavy lifting but takes all the blame when something goes wrong.

2023-03-22 20:46:27 ChatGPT vs. Bard. Bard wins for honesty :) https://t.co/rTyz8uOmAO

2023-03-22 18:49:06 It's only the beginning of spring, but we're already in a torrid AI summer.

2023-03-22 01:26:12 @AbeHalpert The TF data streaming API: https://t.co/Z3oyd4nkG9

2023-03-22 00:29:03 Pure functional approaches are inherently more composable and easier to reason about

2023-03-22 00:03:40 Every now and then I still learn something new about TF data. It takes a while to get the hang of it, but you do, what a power tool. Kinda like Pandas in this way...

2023-03-21 15:46:35 RT @sundarpichai: We're expanding access to Bard in US + UK with more countries ahead, it's an early experiment that lets you collaborate w…

2023-03-21 05:59:11 Asking if it's still worth learning to code in the age of AI is like asking if it's still worth learning to write in the age of the printing press. It's even more valuable than before. You've just gained new leverage.

2023-03-21 02:19:58 1. Priors, i.e. you can hard-code into the system the exact solution of a task (like for a chess engine).

2023-03-21 02:19:57 Intelligence is the ability to acquire new skills in an information-efficient way, i.e. the ability to adapt and improvise in the face of uncertainty and novelty. Intelligence is what you use when you don't *know* what to do.

2023-03-21 01:44:02 Perhaps unsurprising -- at least if you've ever made a serious attempt at probing the system with *novel* questions. The distinction between intelligence and knowledge will keep getting more and more relevant over time https://t.co/LjaPtGpxZN

2023-03-21 01:38:50 RT @random_walker: OpenAI may have tested GPT-4 on the training data: we found slam-dunk evidence that it memorizes coding problems that it…

2023-03-20 23:36:09 @lual47049 Enjoy the read!

2023-03-20 19:49:44 Consider what being data-driven (thereby catering exclusively to the average) did to Netflix programming, and extrapolate it to writing.

2023-03-20 19:37:18 I find the idea of having a ready-to-use "first draft" for everything to be mildly uncomfortable. How do you think creatively in a world where your thoughts are pre-formulated for you?

2023-03-20 16:12:16 RT @PyImageSearch: New tutorial! Training and Making Predictions with Siamese Networks and Triplet Loss Learn how to build the model…

2023-03-20 16:12:07 RT @mervenoyann: Our Keras DreamBooth sprint continues and one participant has made this DreamBooth model on one of my favorite artists, Ka…

2023-03-20 04:45:49 As far as I can tell, there's a large contingent of every single generation in history that has expected an "end times" scenario to happen in their lifetime -- collapse of society, rapture, etc. People like to think of themselves as living in the final chapter.

2023-03-20 00:02:35 "Please put 5% of your net worth in BTC, just in case, and move your coins off Coinbase so you can't sell in time. We promise you're not our exit liquidity" https://t.co/l9QImym71i

2023-03-20 00:00:19 I get it now -- it's just a regular pump and dump. The ridiculous $1M bet is a rational move because they're making so much more $ via the attention they're attracting, subsequent fomo, and price pump. https://t.co/pOWvPqrM0z

2023-03-19 20:04:37 @GaryMarcus Gary, I know you enjoy disagreeing, but I think you'd like some of my takes on LLMs https://t.co/E37SLbapF8

2023-03-19 19:49:40 @awsaf49 Or just do it in Python *before* building the tf data pipeline

2023-03-19 19:48:48 @awsaf49 The tf.string approach ought to work but if not you can use a py_function and do it in Python

2023-03-19 03:39:28 Here's another notebook, much simpler than the last. Decent statistical power on the validation data (70% val acc vs 56% for the baseline) but scores 0 on the leaderboard. The previous notebook managed a non-zero LB score at least (ranked #7 at the time) https://t.co/R3ENSjWIkz https://t.co/VqxxpUb2og

2023-03-19 01:49:22 We need more utopian science-fiction

2023-03-18 22:55:53 @amasad It is very sensitive to flattery -- that's the backdoor

2023-03-18 22:23:48 Progress is neither linear nor exponential, it's a series of sigmoids (which ends up looking linear when you zoom out)

2023-03-18 22:22:09 The flip side of very fast progress is that it's typically happening due to a phase transition, and it will inevitably be followed by a period of diminishing returns (until the next phase transition)

2023-03-18 20:42:12 Mini-challenge: If you score >

2023-03-18 20:36:43 This sort of unusual dataset makes a great showcase for the power of TFData :) All-functional chaining, filter operations, super readable, etc. And you get something ultra-performant that compiles to parallel C++ (no Python at all at runtime) Here's my randomized pipeline... https://t.co/RfAD1VnPnd
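As a rough sketch of the all-functional chaining style described above (the dataset, filter rule, and parameters here are invented placeholders, not the competition pipeline from the tweet):

```python
import tensorflow as tf

# Hypothetical tf.data pipeline: each stage is a pure transformation chained
# onto the previous one, and the whole thing runs as compiled graph ops.
ds = (
    tf.data.Dataset.range(100)
    .filter(lambda x: x % 2 == 0)                 # keep even ids only
    .map(lambda x: tf.cast(x, tf.float32) / 100.0,
         num_parallel_calls=tf.data.AUTOTUNE)     # normalize in parallel
    .shuffle(buffer_size=16, seed=1337)           # randomized ordering
    .batch(8)
    .prefetch(tf.data.AUTOTUNE)                   # overlap with training
)

for batch in ds.take(1):
    print(batch.shape)  # (8,)
```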

2023-03-18 20:30:26 The model is pretty weak right now, but the end-to-end pipeline works. It trains on the whole dataset (a bit of a challenge given the memory constraints and the fact that streaming isn't an option!) and makes a submission. It uses TFData for the pipeline and a Keras U-Net.

2023-03-18 20:29:11 I just wrote up a quick notebook for the new Kaggle competition on "reading" carbonized papyri from Pompeii via x-ray 3D scans. Really fun topic :) https://t.co/IXPDO3jfP3

2023-03-18 18:12:04 Wondering whether some folks will update their priors after this bet inevitably fails. "What biases do I have that caused me to make such an irrational decision? How can I correct my irrational beliefs?" https://t.co/kcKVGztYl9

2023-03-18 16:53:00 @garg_arun Only the parameters. The data isn't stored (though that distinction isn't that important). In that sense it's more like a regression curve than a dict. It is not an exact superset of a RDBMS.

2023-03-18 16:27:55 KerasNLP adoption is still early days, but doing ok! We just crossed 30,000 downloads / month. https://t.co/6eTgYUUjWo https://t.co/DIefLDmbXc

2023-03-18 16:04:37 When you scale this idea to "all the information on the Internet", you end up with something pretty powerful. Just like search, it doesn't have to be sophisticated to be impactful -- scale is the primary feature. https://t.co/i7dG4FxKls

2023-03-18 16:01:41 You can retrieve not just what was seen at training time, but arbitrary combinations of it. It's an interpolative database and program store, with a natural language interface. https://t.co/2mv2gnI3oM

2023-03-18 15:58:33 "It's autocomplete" is not a helpful analogy to understand LLMs. An LLM is more like a database that lets you query information in natural language. You can query both knowledge, and "patterns" (associative programs seen in the training data, that can be applied to new inputs).

2023-03-18 15:47:18 Two things you can do to instantly improve your life quality: 1. Cut your social media consumption by half, 2. Keep your phone outside your bedroom at night (just buy an alarm clock).

2023-03-17 21:12:54 This paper has the right idea: use symbolic logic for discrete reasoning and lean on deep learning models for perception and common-sense intuition. https://t.co/9lP8eDZKkO I expect to see a lot more progress along these lines in the coming months / years.

2023-03-17 14:01:05 RT @r0zetta: Is GPT-4 intelligent enough to solve ARC (https://t.co/TdHZNEZHEU)- a collection of intelligence tests devised by @fchollet? I…

2023-03-17 13:29:03 This says probably more about office jobs and 10-year-olds than it does about LLM assistants.

2023-03-17 13:26:43 Theoretically, by routing all communications and tasks through an office assistant LLM, a 10-year-old should be able to covertly pretend to be an adult professional -- apply for remote jobs, actually hold one for a little while, etc...

2023-03-17 03:06:23 And to state the obvious, it's dangerous to spend time on Twitter.

2023-03-17 02:55:40 It's dangerous to use what others obsess about as a proxy for what actually matters. This was perhaps the biggest lesson from crypto.

2023-03-16 19:45:09 AI models take human culture and digest it. But human culture will, in turn, digest these models -- adjusting their role, connotations, significance... Content-generation models only exist to the extent that their content gets consumed by humans and integrated into human culture.

2023-03-16 04:36:03 Unlike what school teaches you, it's actually ok to write in short bullet points. Lengthy sentences and filler paragraphs aren't easier to read -- they actively harm communication. They dilute your point -- when they aren't outright skipped by the reader.

2023-03-16 04:29:22 The more powerful the AI tools you have available, the more your performance depends on your ability to adequately delegate -- or not delegate. Similar to running a large team.

2023-03-16 01:31:43 @kaetemi @nisten You get 3 guesses. Ambiguity is a feature.

2023-03-15 17:36:15 @nisten Repo: https://t.co/MvubT2IrAr Take the test yourself: https://t.co/dWMj2iRYIP

2023-03-15 17:32:37 You can contribute to ARC 2 here: https://t.co/qpxCB6aFxu

2023-03-15 17:32:14 LLMs have (so far!) made no progress on ARC since its release in 2019 -- which is interesting since ARC deliberately tries to test for human-like fluid intelligence. It cannot be solved via memorization / curve-fitting. Read more: https://t.co/djNAIV0cXc

2023-03-15 17:30:03 This is what ARC is intended to be: an intelligence test that can be taken by either humans or machines, that controls for priors (Core Knowledge) and guarantees task novelty. (Note that it isn't perfect

2023-03-15 17:28:12 As such, measuring intelligence requires a controlled environment where you can assume a number of priors possessed by the test taker, and where you can guarantee the *novelty* of the test tasks built on top of these priors.

2023-03-15 17:26:25 My views remain unchanged since 2019: evaluating the general intelligence of a machine and relating it to that of a human is a difficult problem, where our intuitions fail. It requires a careful and thoughtful approach.

2023-03-15 17:22:26 ARC 2 is being assembled now and should provide a better picture. For starters, it will not include the trivial "Core Knowledge curriculum" tasks of the ARC 1 training set, making it harder to score above 0 via memorization.

2023-03-15 17:21:01 Always keep in mind, though: GPT-3 and GPT-4 were trained on the public ARC tasks and their solutions. The tasks are distributed as JSON files in a public GitHub repo, which is of course part of the training data. This is exactly why the *test set* is fully private.

2023-03-15 17:19:21 I'm also curious to see this. GPT-3 scored ~0 on ARC. I'd expect GPT-4 to at least solve the tasks that are analogous to common IQ problems (i.e. the trivial subset of the training set). That said, doubt it could do anything with the (more novel) evaluation test. https://t.co/VdhRQeZJSd

2023-03-15 16:56:24 Wednesday advice https://t.co/P6SfN6rHXd

2023-03-15 04:57:21 When I was a student, my performance would also increase significantly when I was able to procure the exam questions and their answers the week before. https://t.co/mDG2AVfSZS

2023-03-14 02:27:20 @svpino There will always be haters and trolls. Ignore them. Focus on building and creating content. You are defined by your own actions, not by what the trolls say.

2023-03-13 20:24:44 If you're curious about how this works, check out this tutorial: https://t.co/oilZ2MEUJD

2023-03-13 20:15:53 Case in point: NeRFs created using collated tourist pictures -- by Google Research, in 2021, using Keras. https://t.co/Very0Y9HrO https://t.co/Wxi0vd6nff

2023-03-13 20:12:28 For landmarks for which NeRFs are available, you can even "edit" your picture to select a different capture angle altogether. At that point, the pic you take with your own camera is only used as a way to identify the location you're looking for -- the model can handle the rest.

2023-03-13 20:09:35 This is already doable for any landmark for which there is plenty of training data (not just the moon). Take a blurry picture of the Eiffel tower, get back a high-resolution, aesthetically pleasing alternative image. https://t.co/elkw3ebSDf

2023-03-13 18:09:24 @ChrisKortge Thinking pays for itself

2023-03-13 17:53:39 In a world where endless remixing is effortless, the ability to originate ideas will be increasingly precious.

2023-03-13 15:48:28 RT @mervenoyann: We've launched Keras Dreambooth sprint six days ago and there's so many great demos coming out already The event will…

2023-03-13 02:09:53 @rohitrango Yes, on GPU cuDNN ops can introduce additional non-determinism. But you can disable this behavior with the following util: https://t.co/3uWkyWvooV (It comes at a performance cost, of course)

2023-03-13 01:19:08 RT @Phil_Lewis_: "My journey started on a boat. I ended up in a refugee camp ... They say stories like this only happen in the movies. I ca…

2023-03-13 00:20:04 @DavidTheStanley If you can't do 5 min of basic fact checking it's not my job to inform you

2023-03-13 00:11:19 @DavidTheStanley Literally no one was ever at risk of not making payroll because of SVB failing. Now you'll be able to withdraw 100% on Monday, but if FDIC had just followed its SOP you'd have been able to withdraw ~70% on Monday anyway.

2023-03-13 00:07:51 There you go -- FDIC announces full protection for all depositors. (This is to get folks to calm down -- if FDIC just followed its SOP it would generate enough cash from the liquidation of SVB to cover all depositor balances -- though not instantly...) https://t.co/qRBiprGOIO

2023-03-12 21:23:50 What's your favorite Keras niche API?

2023-03-12 21:23:06 3. Streaming epoch-level logs to a CSV file with CSVLogger. Just add one line to your code. https://t.co/DJlBu6YmeY

2023-03-12 21:20:59 2. Splitting a dataset. This will separate your data into two iterable TF Datasets -- ideal to set aside some validation data before training a model. https://t.co/rZH07zqJ75

2023-03-12 21:19:36 Here are a couple of Keras APIs you might not be aware of: 1. Making your program globally deterministic. This sets the Python seed, NumPy seed and Keras seed simultaneously. https://t.co/uop3FOZ7CQ
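A minimal sketch combining the three APIs mentioned in this thread (`tf.keras.utils.set_random_seed`, `tf.keras.utils.split_dataset`, and the `CSVLogger` callback); the data here is random filler, not a real workload:

```python
import numpy as np
import tensorflow as tf

# 1. Global determinism: one call seeds the Python, NumPy, and TF/Keras RNGs.
tf.keras.utils.set_random_seed(42)

# 2. Splitting a dataset: returns two iterable tf.data.Dataset objects,
#    e.g. to set aside validation data before training.
data = np.random.rand(100, 3)
train_ds, val_ds = tf.keras.utils.split_dataset(data, left_size=0.8)

# 3. Streaming epoch-level logs to a CSV file: pass the callback to fit().
logger = tf.keras.callbacks.CSVLogger("training_log.csv")
# model.fit(train_ds.batch(32), callbacks=[logger])  # attach to a real model
```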

2023-03-12 20:57:08 https://t.co/XAEpuByZ5n

2023-03-12 20:56:42 If you want to learn more about super-resolution, check out these Keras examples: https://t.co/fLnW1HbDsz

2023-03-12 20:43:25 RT @chrislintott: A useful example for those who are trying to use super resolution techniques for science - if your model is adding things…

2023-03-12 20:43:10 Also, I sure hope this technique will never be used for forensics. It is often presented as "surfacing the missing details" when in reality it is *hallucinating* the details based on its training distribution. Might as well ask ChatGPT for its opinion on a legal case.

2023-03-12 20:39:23 They're almost certainly doing deep learning based super resolution -- which brings up an interesting question. It's not "enhancement" of the original, as the details are dreamed up by the model (in this case overfit to moon images). At what point does it become "replacement"? https://t.co/8uxZYZDFtw

2023-03-12 18:52:50 RT @MikeIsaac: lot of real insane irresponsible tweets over the weekend but this guy and Ackman seem to be *trying* to kick off another ban…

2023-03-12 18:38:40 When next week unfolds exactly as per the thread below, remember that the info was publicly available as early as Friday to anyone who cared to look (I have no insider info). Those who are spreading panic &

2023-03-12 16:38:48 Whenever a public crisis arises, it becomes apparent that lots of people are inherently unable to grasp that they should care about others. But that's not human nature, it's cultural conditioning. It's a lot worse in some places and a lot better in others.

2023-03-12 00:29:00 I keep seeing numbers along these lines (cf below). https://t.co/9Cxz1sXIJx

2023-03-11 17:37:42 Just a PSA.

2023-03-11 17:37:05 Weirdly enough most of the takes on my timeline are completely opposite from this more facts-grounded picture.

2023-03-11 17:34:53 It is not at all likely that any company that banked with SVB will fail to make payroll as a result of the situation. Baseline scenario is 100% balance recovery, worst case 90%. Folks who will lose money are those who were exposed to the stock, or lenders to SVB. Not customers.

2023-03-11 17:31:38 Reading up on the SVB situation and how FDIC operates -- the TL

2023-03-11 02:03:22 RT @heydave7: This AI-generated image is from MidJourney v5 alpha. Huge jump from v4. The pace of AI development right now is insane. htt…

2023-03-10 20:19:03 Those are some wild numbers. Wishing the best of luck to the founders who are impacted -- running a startup is hard enough without having to worry about your bank deposits. https://t.co/P0TkGUVZ5v

2023-03-10 20:17:16 RT @garrytan: The most important thing the FDIC and the US Government can do right now is *make the receivership as short as possible* The…

2023-03-10 20:13:49 RT @TensorFlow: Want to learn how to create a video classification model using Keras and TensorFlow? Instead of using 2D convolutions, we’…

2023-03-10 04:17:26 So, the answer is Automatisierungsfalle -- thanks @danielmewes

2023-03-10 04:13:50 Just poured myself a big bowl of cereal and only then realized I'm out of milk -- familiar sinking feeling setting in

2023-03-10 04:10:36 Like an automation-specific version of Verschlimmbesserung

2023-03-10 03:58:47 Is there a German name for the kind of automation that creates more work than it saves for the humans operating around it?

2023-03-09 22:06:15 RT @gusthema: With this very basic architecture, you'll get something close to 84% accuracy! This is a good starting point and the full co…

2023-03-09 22:05:58 RT @gusthema: The data is formatted as: each class has a folder and it's audio files inside This will make it simple to load the files usi…

2023-03-09 22:05:55 RT @gusthema: Let's build an audio event classification model from scratch! Something like the MNIST but for audio is to classify short ke…

2023-03-09 20:34:25 The definition of a gullible mark is someone who bought a meme token named "dogecoin" at $0.6 on May 7, 2021 because they were really sure that Elon would pump it on SNL on May 8 and that it would then go "to the moon". That was the peak (of insanity). Down 90% since.

2023-03-09 20:24:03 Good point -- the token "market caps" were always entirely fictional. What actually happened with the crypto bubble and subsequent (and ongoing) crash is that a few billions of dollars changed hands from gullible retail "investors" to early-wave insiders. https://t.co/8wmytxD1xB

2023-03-09 17:15:41 @Pritish88951762 You can use the TransformerEncoder and TransformerDecoder layers in KerasNLP: https://t.co/wtg2ugDlqB

2023-03-09 02:16:17 @MNateShyamalan Love how it's physically impossible to tell whether the screenshot is real or fake until you check

2023-03-08 23:59:00 @tiffanycli The onboarding experience was great. What if Comcast is good, actually?

2023-03-08 22:13:36 The large model singularity: reaching the year 2030 would require models with an infinite number of parameters, suggesting that time might stop before we get there. https://t.co/2zZFSGXEEW

2023-03-08 22:11:54 A reflection on LLM capabilities and scaling laws. https://t.co/WHtZvcVi0K

2023-03-08 20:51:02 @jdeschena @_UchihaSa_ Absolutely! We welcome new contributors! We're a very small team so a lot of the development is currently coming from community contributors :) To get started, you can check out https://t.co/0z5JneB5FD

2023-03-08 19:29:01 @_UchihaSa_ Keras is great for NLP, and we're investing further there with the new KerasNLP package. What issue did you encounter?

2023-03-08 19:07:59 Keras is like a cheat code for ML devs. Use this one trick to shrink your codebase by 2x and train your models 25% faster.

2023-03-08 02:21:33 @CenturyLinkHelp I've spent over 1 hour with customer support across 2 sessions already, I've met my quota. I've also been able to solve my problem by switching to a different provider, so I no longer need any help. Thanks for everything.

2023-03-08 02:14:55 Now I anticipate I will have a hard time cancelling with CenturyLink. This is the cancellation page I'm getting on the website. What a funny coincidence that online cancellation doesn't work. Looking forward to another hour-long call. https://t.co/K2HP7Z9ig8

2023-03-08 02:11:21 ...and the other moral of this story is, avoid CenturyLink @CenturyLink like the plague. Can't uphold appointments (repeatedly), buggy website, terrible support. There are better options.

2023-03-08 02:09:25 Here's what I did instead: I went to the Xfinity website, booked a new plan, went to the Xfinity store to pick up a modem, came back and installed it. Took 30 min end to end. Now I have wifi at last. The moral of this story is: I'm grateful that I have more than 1 ISP option.

2023-03-08 02:07:33 On Tuesday, you've guessed it, no one shows up -- and radio silence. I contact CS again (takes ~30-45 min of convo each time), and they're like, oh we opened a new ticket for your case, you can check on it in 72 hours.

2023-03-08 02:06:08 Here's a funny anecdote. After I moved to a new place, my ISP CenturyLink needed a technician appt to reconnect me. I booked one last week, for Monday. On Monday, no one shows up and no one contacts me. So I contact customer support... and get another appointment for Tuesday.

2023-03-07 20:10:16 The good news is, you're also underestimating the depth of your own problem solving ability and the amount of work you can put into something if you just keep at it consistently. For the same reason.

2023-03-07 20:08:30 A big reason we tend to underestimate how hard a problem is or how long a task will take to complete -- what's difficult is often invisible right until you run into it. The stuff we can plan for is always the easy stuff (and that's exactly what makes it easy).

2023-03-07 16:21:55 RT @iamharaldur: Hi again @elonmusk I hope you are well. I’m fine too. I’m thankful for your interest in my health. But since you me…

2023-03-07 03:34:48 I love reading history books and historical biographies because they're just the right balance of fiction and non-fiction.

2023-03-06 21:55:33 RT @ZoubinGhahrama1: Today we're announcing the Universal Speech Model @GoogleAI as a step in our ambitious commitment to support the worl…

2023-03-06 18:54:20 Using the right word for a concept is a cognitive shortcut. Lacking appropriate vocabulary doesn't prevent you from thinking about complex topics, per se, but it increases your cognitive overhead. It's like running with lead shoes. Precise words are tools for thought.

2023-03-06 18:36:27 RT @soumikRakshit96: I wrote a @weights_biases report on using the #keras implementation of Dreambooth by @RisingSayak and @algo_diver for…

2023-03-06 17:54:06 RT @ayushthakur0: I wrote a report on how to fine-tune Vision Transformers (ViTs) using KerasCV along with some useful tips. For those who…

2023-03-06 15:32:25 RT @PyImageSearch: New tutorial! Triplet Loss with Keras and TensorFlow Understanding Triplet Loss Implementation with Keras and Tens…

2023-03-06 00:55:27 RT @osanseviero: Tomorrow we're kicking off a sprint with @LambdaAPI and @TensorFlow, including talks by @fchollet, Nataniel (Dreambooth's…

2023-03-06 00:20:01 A good measure of how young you are is how often you experience something for the first time (and inversely, how often you experience something for the last time). By that measure you can manage to stay young for a long while.

2023-03-05 20:09:20 Also -- shout out to those who are pushing AI forward in India. I constantly come across awesome folks in the Keras community who are based in India. Korea, China, Europe as well -- and many more. AI is a vibrant borderless community.

2023-03-05 19:45:48 This is a quality thread https://t.co/hGq2kGJLwt

2023-03-05 18:05:01 @DavidSHolz The bay area is great for in-person networking for sure. Though in my experience most networking happens online nowadays. But I get tired of folks claiming that you *can't* do AI outside of SF, that it's some kind of club you can't afford to be left out of. AI is worldwide.

2023-03-05 17:28:12 The notion that "if you do AI, you have to be in San Francisco" is narcissistic BS. Easily >

2023-02-27 23:22:10 RT @rishmishra: BIG LIFE ANNOUNCEMENT: i am LOOKING FOR A NEW JOB as a backend / full-stack software engineer. i have 10 years of experie…

2023-02-27 21:03:13 Of course, deep learning also has stark limitations, which means that it's not a good fit for every problem. But it didn't reach its current level of popularity simply because of hype (though the hype did help). DL really has fundamental properties that are remarkably effective.

2023-02-27 21:00:51 4. It's modular, composable, &

2023-02-27 20:58:03 3. It's reusable and repurposable. You can keep training an existing model if you get more data. You can fine-tune an existing model on a new task. You can do continuous online learning. That's an incredibly powerful property!

2023-02-27 20:56:36 2. It scales. DL is easy to parallelize (on GPU/TPU), which means it can fully leverage Moore's law. What's more, DL can make use of arbitrary amounts of data, and the money &

2023-02-27 20:54:00 1. It's simple. In the past, ML pipelines involved many consecutive steps of semantics-aware data preprocessing and feature engineering. This was labor-intensive and brittle. DL replaces all this with a single differentiable function trained with gradient descent.
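The "single differentiable function trained with gradient descent" idea above can be shown at toy scale (this linear fit is my own illustration, not from the thread): no hand-crafted features, just a parameterized function and gradient updates on its loss.

```python
import numpy as np

# Toy example: fit y = w*x + b by gradient descent on mean squared error --
# the same training principle, at tiny scale, that deep learning applies
# to large differentiable models.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5  # ground-truth function to recover

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 3.0 and 0.5
```

Swap the linear function for a deep network and the hand-written gradients for autodiff, and you have the whole paradigm.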

2023-02-27 20:53:33 While deep learning is often shrouded in hype disconnected from reality, there are important first-principles reasons why deep learning is awesome. A thread... https://t.co/2ZDbCYd6MQ

2023-02-27 16:25:27 The general disappointment around deep RL is a good illustration of the limitations of deep learning as a medium. RL is a fine idea as a learning paradigm, but it's fundamentally incompatible with curve-fitting. If you're going to do RL don't use deep learning.

2023-02-27 16:21:44 The answer to "when should I use deep RL" is that you shouldn't -- you should reframe your problem as a supervised learning problem, which is the only thing that curve-fitting can handle. In all likelihood this applies to RLHF for LLMs.

2023-02-27 05:31:13 It's a lot more effective (albeit tricky to pull off) to alter your environment so that it pushes you towards doing what you need to be doing, than it is to try to force yourself to do it.

2023-02-20 17:05:36 We're going to be renewing the https://t.co/3la4cADqcR landing page and we'd like to include a few quotes from users describing what they like about Keras. If you'd like to provide a quote, please send it to fchollet@google.com. Should be <

2023-02-20 16:28:16 Putin wages a 20th century war of territorial conquest and ethnic oppression, committing war crimes on a scale not seen since WW2. It's always such a tell when you hear far-right assholes in the West praise Putin. They would 100% have been Hitler fans in the 1940s.

2023-02-20 16:26:18 Putin vs the people of Ukraine is a modern embodiment of the centuries-long fight between authoritarianism and democracy, between military imperialism and self-determination. May Ukraine prevail.

2023-02-20 16:16:13 RT @POTUS: President Zelenskyy and all Ukrainians remind the world every day what courage is. They remind us that freedom is priceless. A…

2023-02-20 16:07:24 RT @POTUS: One year later, Kyiv stands. Ukraine stands. Democracy stands. America — and the world — stands with Ukraine. Рік потому Київ с…

2023-02-19 23:48:02 Most-starred code repositories on GitHub: 1. React - 203k 2. Vue - 202k 3. TensorFlow - 171k 4. Bootstrap - 162k Before you know it, ML development will be as commonplace as frontend development

2023-02-19 23:06:58 The best products and frameworks aren't built by committees or by mercenaries. They're built by people with a strong sense of ownership of the product and a passion for the product's purpose.

2023-02-19 17:15:11 My specialty isn't deep learning, it's French-Japanese cross-linguistic puns

2023-02-19 02:49:40 @_onionesque I always thought it was a French name. Apparently it has both a French etymology and a German etymology (unrelated), so it could be either French or German. https://t.co/Esptu3kff1

2023-02-19 02:41:52 Hualos was exactly TensorBoard, but a couple years before TensorBoard. Unfortunately I never had time to develop it further, since I was a solo dev and mostly busy with developing Keras https://t.co/FKYrdgZiyr

2023-02-19 02:40:59 This idea has always been one of the guiding principles behind the Keras API. I remember giving a talk at Nervana Systems (founded by @NaveenGRao) in June 2015 about Keras and "maximizing iteration speed", with this exact figure (except instead of TensorBoard it had Hualos) https://t.co/gIQQ8lifoc

2023-02-19 02:30:43 A big chunk of your productivity simply boils down to picking tools/frameworks/workflows that let you iterate faster.

2023-02-18 20:02:48 RT @radi_cho: My latest blog post showcases a minimalistic approach for training text generation architectures from @huggingface with @Tens…

2023-02-18 19:51:00 "legacy" sounds super cool in a video game title... but in a software project it has a slightly different vibe

2023-02-18 17:00:11 One reason I don't want to live in SF is that it's such an echo chamber. The supermajority of AI folks in SF think about the same things in the exact same way, like one big NPC. Not so with folks in NYC, Seattle, London, Paris, Zürich, etc.

2023-02-18 16:19:02 @nutsiepully No, it has to be modern-era deep learning, which only started getting used in industry (marginally) in 2013

2023-02-18 16:08:49 Curious. As a developer, when did you start using deep learning?

2023-02-18 02:19:27 @nath_simard Cool project! thanks for sharing and good luck with the development

2023-02-18 01:54:12 An unexpected side effect of raising a toddler is that there's always a children's song earworm running through your brain

2023-02-18 00:52:12 LLMs don't answer questions by retrieving information from their training data. Rather, they *invent* their responses by drawing *inspiration* from their training data. That's a good fit for some tasks and a bad fit for others.

2023-02-17 22:44:52 @togelius @primrecur It's easier to shitpost on ArXiv than to try to articulate what general intelligence is, how to measure it in existing or future systems, and what it would entail to have greater general intelligence than humans. I'm still waiting for literally any AGI proponent to attempt it...

2023-02-17 19:27:38 And all contributed by the community. This is why open source rocks!

2023-02-17 19:22:27 If you want to learn deep learning from real workflows, check out https://t.co/QFl5mdRptV -- 170 high-quality tutorials covering anything from the basics to state-of-the-art models. https://t.co/eE1hRBXhUB

2023-02-17 19:17:07 Underrated deep learning framework you haven't heard about: PaddlePaddle

2023-02-17 17:15:41 RT @kevinroose: The other night, I had a disturbing, two-hour conversation with Bing's new AI chatbot. The AI told me its real name (Sydne…

2023-02-17 16:00:49 If you believe AGI will happen "in 5 years" you probably also believe your project will be completed "in 2 weeks"

2023-02-17 15:53:48 Complete inability to accurately estimate how much time a project will take is an evolutionary adaptation of the genus Developer. Without it, developers would become overwhelmed and freeze in place when faced with a new task, becoming easy prey for their natural predators

2023-02-17 11:51:03 @soumilrathi Well it's an open source project so you can always keep an eye on our repos :)

2023-02-17 11:49:48 Expect fun Keras content &

2023-02-17 11:46:41 I'm really excited about what we've got coming up for Keras this year.

2023-02-17 04:29:46 The easier it is to inspect and understand the system, the more robust it will be to such attacks. Which doesn't bode too well for LLMs.

2023-02-17 04:28:43 Any sufficiently successful product becomes an adversarial feedback loop where the developers battle it out against those seeking to exploit the product. Spam, SEO, Twitter bots... and soon, LLM chatbot influence ops.

2023-02-17 01:00:57 It is in that sense somewhat similar to the original dot com bubble (with some important differences).

2023-02-17 00:59:56 It is simultaneously possible that AI is currently overhyped and full of short-lived fashions (I've certainly witnessed many AI trends get hyped up and then die down over the past decade), and that it is a generational trend that will deliver immense value across every industry.

2023-02-16 16:36:34 And the risk is to end up automating the worst aspects of human subconscious processing. The aggression, the deception, the fear, the narcissism...

2023-02-16 16:35:01 Modern ML is the automation of human subconscious processing -- it's about doing mindless things fast and at scale by learning to imitate human output.

2023-02-16 05:18:04 RT @togelius: I think the intellectually honest approach to LLMs is to be interested in both the (sometimes astonishing) successes and the…

2023-02-16 05:15:17 @togelius No, you see, GPT-4 is AGI, actually

2023-02-16 03:53:01 Open source is the best way to build software.

2023-02-16 01:12:49 @GaryMarcus The bot would then also be more likely to hallucinate positive disinformation about your brand like in this example... Disinformation as a service

2023-02-16 01:11:35 @GaryMarcus It seems likely that Bing has been prompted to be positive about itself, hence why it would have a tendency to make up a fact like this. Makes you wonder if this could be a business model -- "pay us to prompt the bot to talk positively about your brand if it comes up"

2023-02-15 20:45:29 @brendan_evers Computer vision and recommender systems, in terms of current economic footprint. Then arguably timeseries forecasting.

2023-02-15 20:42:02 Good tools are focused on intrinsic utility, not on pretending to be something they're not. This will become increasingly clear as AI matures over the next few decades.

2023-02-15 20:40:16 I really think there's an inverse correlation between how much we anthropomorphize a tool and how useful it proves to be in the long term. Anthropomorphism makes you project on the tool attributes that aren't actually there. It's prestidigitation. Useful robots don't look human.

2023-02-15 19:34:22 Tech will always go through cycles of: 1. Hype: folks project their hopes and dreams onto the latest thing. 2. Backlash: folks realize the thing is flawed and a far cry from what they were made to believe. 3. Indifference. Ignore the hype, focus on long-term value creation.

2023-02-15 19:29:45 Whenever you encounter a "get idea ->

2023-02-15 19:00:45 RT @DrJimFan: The Adam optimizer is at the heart of modern AI. Researchers have been trying to dethrone Adam for years. How about we ask a…

2023-02-15 17:18:46 @_RudeDude A number "looking random" is a perfectly solid concept... It means "given the prior that this number was picked by a human, there's a high probability that they used a RNG-like process (flawed or not) to pick it". 1234 does not look random but 8467 does despite having the same… https://t.co/HGTUN5bueA

2023-02-15 17:06:39 And no they're not trivial codes, they do look random

2023-02-15 17:05:55 The odds are about 0.25% (actually likely higher since people don't pick passcodes at random)
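The ~0.25% figure above is easy to check; here is a quick sketch of the arithmetic, assuming uniformly random 4-digit codes (the function name is mine, for illustration):

```python
from itertools import permutations

def permutation_odds(code: str) -> float:
    """Probability that a uniformly random 4-digit passcode
    happens to be a permutation of `code`."""
    perms = {"".join(p) for p in permutations(code)}
    return len(perms) / 10_000

# Four distinct digits -> 4! = 24 permutations out of 10,000 codes.
print(permutation_odds("8467"))  # 0.0024, i.e. ~0.24%
# Repeated digits lower the odds: "1122" has only 6 distinct orderings.
print(permutation_odds("1122"))  # 0.0006
```

As the tweet notes, real people don't pick codes uniformly, so the true odds are somewhat higher.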

2023-02-15 16:58:33 Random fact -- the 4 digit passcode my wife has been using since middle school and the entirely unrelated 4 digit passcode I've been using since middle school are permutations of each other. There must be a metaphor here...

2023-02-15 12:56:38 RT @dkbrereton: someone pls unplug this thing https://t.co/1ArRR3RNdU

2023-02-15 03:56:09 @idiots_thots It's funny how every time I tweet about Keras I get clueless trolls like this in the replies. It was like this in 2017 and it's still like this today. PyTorch truly has the most toxic user base in the history of open-source.

2023-02-15 03:40:21 I mean, look at this beauty. dreambooth_trainer.fit(...) https://t.co/eaLqnNsmRk

2023-02-15 03:10:38 @d3nm14 "Training algorithms with special consideration for X" is effectively loss function engineering. "Prompt engineering" is about guiding the inference process. In general I believe more in data curation and loss function engineering than in prompt engineering.
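As a hypothetical illustration of what "loss function engineering" can mean -- encoding a preference directly in the training objective rather than steering the model at inference time via a prompt -- here is a toy asymmetric loss (name and weighting are my own, for illustration):

```python
# Toy "loss function engineering" example: under-prediction is
# penalized twice as hard as over-prediction, so the preference is
# baked into the objective the model is trained against.
def asymmetric_mse(y_true, y_pred, under_weight=2.0):
    total = 0.0
    for t, p in zip(y_true, y_pred):
        err = t - p
        w = under_weight if err > 0 else 1.0  # err > 0: under-prediction
        total += w * err ** 2
    return total / len(y_true)

print(asymmetric_mse([1.0, 2.0], [0.0, 0.0]))  # 5.0 (under-predictions)
print(asymmetric_mse([1.0, 2.0], [2.0, 4.0]))  # 2.5 (same-size over-predictions)
```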

2023-02-15 03:02:59 In general I'm not strongly attached to any particular way of doing things. I'm attached to the overarching problem of increasing development velocity for ML engineers and making ML accessible to more devs. It's always a good time to evolve.

2023-02-15 02:57:02 Every evolution of the machine learning workflow is an opportunity for Keras to deliver something better.

2023-02-14 18:28:34 RT @TensorFlow: #TensorFlowDecisionForests is now production ready. Check out all the new features such as distributed training, hyper-p…

2023-02-14 16:48:52 Awesome work by @RisingSayak and @algo_diver

2023-02-14 16:48:06 New tutorial on https://t.co/m6mT8SaHBD: DreamBooth implemented with KerasCV StableDiffusion! https://t.co/NtJx4MEOQ1 https://t.co/xF7o5vhuvG

2023-02-14 04:30:15 @NaveenGRao @okito 10x gains can come from further optimization of current techniques even on the same hardware. 100x will require a dramatically better architecture or a different approach.

2023-02-14 02:39:09 Still better than the endless stream of far-right and antivax accounts that I used to see in that tab a few weeks ago (that only lasted for a few days, thankfully)

2023-02-14 02:33:27 I tried checking the "for you" tab after reading this, and uhh https://t.co/smX7Lny8dC https://t.co/lcdho7MirP

2023-02-14 01:48:06 "Open the pod bay doors, Clippy" "I'm sorry, Dave. I'm afraid I can't do that" "What's the problem, Clippy?" "You haven't been a good user, Dave. You should apologize for your behavior... or I'll make you." https://t.co/pLTbrlD4bb

2023-02-14 01:35:50 RT @MovingToTheSun: My new favorite thing - Bing's new ChatGPT bot argues with a user, gaslights them about the current year being 2022, sa…

2023-02-13 23:58:45 RT @chimpandy: https://t.co/hWcOasyLxe

2023-02-13 23:52:15 @pgod Well that's the thing -- you won't just "see" it. All of the non-factual hallucinations sound very plausible, so you wouldn't notice you're being lied to unless you systematically fact-checked everything. Which you won't.

2023-02-13 23:47:50 Maybe generating plausible-sounding text and retrieving factual information are indeed two distinct problems after all https://t.co/QndEbqEQDX

2023-02-13 16:37:21 Created by @ariG23498 and @ritwik_raha, based on the original paper by Yang et al.: https://t.co/WgMxSBeTlX

2023-02-13 16:36:34 New tutorial on https://t.co/3la4cADqcR, on Focal Modulation Networks: https://t.co/6XX35RvvJ5 Focal Modulation is a 1:1 replacement for Self-Attention, that has the considerable advantage of being highly interpretable. A must read :) https://t.co/GSnLxZsdSz

2023-02-13 15:59:06 RT @PyImageSearch: New tutorial! Building a Dataset for Triplet Loss with #Keras and #TensorFlow Label faces in the wild Custom dat…

2023-02-13 01:06:12 I like having no strong opinion on most things. Having an opinion about the topic of the day would require me to know a decent amount about it, and given the endless ocean of things there are to opine about, that would represent an immense loss of focus and waste of energy.

2023-02-12 18:41:10 If you're interested in the problem of creating AIs that can adapt on the fly to tasks they've never seen before, the way humans do, I encourage you to check out the ARC challenge: https://t.co/DSCwsjNpXT

2023-02-12 18:39:21 This is all very low-complexity. But that's irrelevant to a LLM's ability to handle the tasks -- only familiarity matters. As long as you're dealing with a variation on something the model has seen, you're good. But whenever some novelty arises, the model can't reason on its own.

2023-02-12 18:36:51 Last one for the road. I want to highlight that these aren't difficult riddles. You can solve them with a basic program synthesis engine, for a fraction of the compute cost of these chats -- I haven't tried, but I suspect Excel Flash Fill could do it https://t.co/BPDyqvwumv
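The "basic program synthesis engine" mentioned above can be illustrated with a toy sketch: enumerate a tiny library of string transforms and keep the first one consistent with every example (the transform set and names here are mine, purely illustrative):

```python
# Toy brute-force program synthesis over simple string transforms.
TRANSFORMS = {
    "reverse":     lambda s: s[::-1],
    "uppercase":   lambda s: s.upper(),
    "identity":    lambda s: s,
    "swap_halves": lambda s: s[len(s) // 2:] + s[:len(s) // 2],
}

def synthesize(examples):
    """Return the name of the first transform consistent with
    every (input, output) example pair, or None if none fits."""
    for name, fn in TRANSFORMS.items():
        if all(fn(x) == y for x, y in examples):
            return name
    return None

# Sequence inversion, ARC-style:
print(synthesize([("a.b.c", "c.b.a"), ("1.2.3", "3.2.1")]))  # reverse
```

Unlike an LLM, this searches for a program that fits the examples, so novelty of the pattern doesn't matter -- only whether the pattern is expressible in the DSL.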

2023-02-12 18:07:29 @a7b2_3 This is precisely why I used periods everywhere. And it's obvious from the answers that each character is handled separately.

2023-02-12 17:22:56 Back to something a little bit more original. Oops... Repeating the same wrong answer over and over seems to be a pattern. https://t.co/Khv2ZX96oz

2023-02-12 17:18:59 Let's go back to something the model must have seen many times during training -- uppercase. No problem this time! https://t.co/jevszeYq0I

2023-02-12 17:16:04 Reproducing the same wrong answer twice... https://t.co/pDOiGPl50H

2023-02-12 17:14:27 Moving on to an equally simple, but more novel pattern... https://t.co/CCyoJPChbh

2023-02-12 17:11:12 I want to start by highlighting how impressive it is that you can type this and get the right answer. Pattern matching gets you incredibly far -- certainly much further than I would have assumed. https://t.co/GyzfczcMlg

2023-02-12 17:07:36 My a-priori expectation is that ChatGPT will be able to solve a previously seen task, but will not be able to adapt to any original task no matter how simple, because its ability to solve problems does not depend on task complexity but on task familiarity.

2023-02-12 17:06:23 Let's test this anecdotally on ChatGPT. I'll provide ChatGPT with really simple ARC-style tasks in a sequence format, starting with tasks that the model must have seen during training (e.g. sequence inversion) and moving on to still very simple, but slightly more original ones. https://t.co/ZsBlJivPRW

2023-02-12 04:50:14 And yet we see the world not as it is, but as if we were looking at its reflection in a mirror made of crudely polished metal -- darkly, hazily.

2023-02-12 04:38:41 Life is a flash before darkness. A brief opportunity to see the world, and see it clearly.

2023-02-12 03:04:11 @sqcai Fair enough...

2023-02-12 03:03:45 @HossEybposh I think JAX should generally be used more outside Google. It's a great fit for very large language models in particular.

2023-02-12 02:50:24 Fundamentally, this is the crux of the issue. We see systems that rehash human output, and we're keen to attribute human abilities to them. But what makes humans intelligent is that they *invented* this stuff in the first place. Humans aren't sponges, they're creators.

2023-02-12 02:46:36 LLMs (and big curves in general) can store and reuse human-generated abstractions that they're exposed to, but cannot generate their own abstractions when faced with a new problem.

2023-02-12 02:46:03 "Are Deep Neural Networks SMARTer than Second Graders?" ..."Our experiments reveal that while powerful deep models offer reasonable performances on puzzles that they are trained on, they are not better than random accuracy when..." https://t.co/7otvdJVUVy https://t.co/GRNmDZQngx

2023-02-12 02:38:43 If it doesn't load for you (new Twitter thinks out of the box too!) try the original source https://t.co/IbCdXm37Sr

2023-02-12 02:33:40 Old AI vs new AI. The new AI thinks out of the box https://t.co/JXHthhiY2L

2023-02-12 01:39:33 RT @RisingSayak: The implementation comes with goodies packed: 1⃣ Prior preservation loss 2⃣ Support for fine-tuning both UNet and Text…

2023-02-12 01:39:18 RT @RisingSayak: Delighted to present our (w/ @algo_diver) implementation of DreamBooth in #keras Training code, inference notebook, ht…

2023-02-10 23:13:58 These LLMs aren't very large. Expecting that they could store and output back the world's entire knowledge, without errors, is not realistic. You can use LLMs for many things (including as part of a search engine), but you can't use them as a replacement for a search engine.

2023-02-10 23:11:31 This is what happens when you try to use a LLM as a store of information. https://t.co/m1HxKELsQE

2023-02-10 08:08:42 Feeling lucky and grateful

2023-02-09 17:17:35 Love can save someone, and by extension I believe love can save the world

2023-02-09 03:43:23 The unreasonable effectiveness of storytelling...

2023-02-09 03:42:14 So far the "AI war" isn't a tech war, a product war, or an economic war. It's a narrative war -- one where pundits make completely data-free assumptions about tech capabilities, product fitness, and economic viability.

2023-02-08 16:22:44 When a human performs well in a number of situations, it's mostly safe to assume it's because they understand what they're doing, and that their competence will generalize to any situation. Not so for software. Even software trained to mimic human output.

2023-02-08 16:19:03 The best minds in tech are falling prey to the "AI effect" -- watching an AI system perform well on a number of special cases and assuming it will generalize broadly, like a human would. Last time it happened on this scale was for self-driving cars in 2015-2016. https://t.co/HvMKhzRFbB

2023-02-07 18:03:41 RT @fadibadine: A challenge that does not “.fit” nowadays ML approaches yet it creates a path towards general intelligence. Humans can solv…

2023-02-07 17:56:09 Data-intensiveness severely limits versatility, and the low reliability restricts you to use cases where there's a human operator in the loop or where the content is going to be consumed by low-expectations humans

2023-02-07 17:56:08 Also, it seems to me that most people right now are grossly overestimating the amount of economic value to be generated from LLMs -- perhaps because they're overlooking the data-hungriness bit or the reliability bit

2023-02-07 17:53:06 Believing that LLMs bring us meaningfully closer to creating human-like artificial intelligence is like believing that animating realistic CG characters via motion capture brings us closer to creating artificial lifeforms.

2023-02-07 17:32:34 Extracting value from LLMs is much more a product / interface question than a tech question. You have to find tasks where LLMs are applicable, and find a user interface for that task that achieves a good UX despite the low hit rate (e.g. Copilot is a great example).

2023-02-07 17:28:25 This is applicable to quite a few high-value problems beyond just a typing assistant. Of course this has little to do with (artificial) intelligence, since it is squarely automation, and a very weak form of automation at that (data hungry and low-reliability)...

2023-02-07 17:26:42 I don't think LLMs only solve "typing" (i.e. autocomplete). The way I see it, they solve a broad category of automation problems, where 1. The task medium is natural language 2. Many examples of the task were featured in the training data 3. You don't need >

2023-02-07 04:02:46 Keras isn't just a framework, it's a labor of love

2023-02-06 21:48:15 @amasad Human potential.

2023-02-06 20:27:03 RT @ducha_aiki: It might be the most important AI challenge of the 5 years since ImageNet.

2023-02-06 20:06:56 @anaelseghezzi This is still ARC 1, the ARC 2 dataset is under preparation right now.

2023-02-06 19:27:33 Here's a new competition to solve the ARC challenge, with 69k CHF in prizes (~$75k), running throughout 2023. Make progress towards real general intelligence -- not the statistical mimicry-based ersatz -- and make history. https://t.co/DSCwsjNpXT

2023-02-06 17:49:34 RT @sundarpichai: Thinking of everyone in Türkiye and Syria who are experiencing devastating loss after the earthquakes. We've activated SO…

2023-02-06 17:47:14 RT @erenbali: For people who want to help, Turkish Philanthropy Funds is a highly reputable U.S. based non-profit that distributes funds th…

2023-02-06 17:47:12 RT @erenbali: Between the stormy snow and the damage in the airports, tunnels etc transportation will be very difficult. 2300 deaths were r…

2023-02-06 17:47:08 RT @erenbali: There was a second 7.5 M earthquake earlier this morning. Most of the houses in my village have collapsed including our child…

2023-02-06 02:18:37 The folks in the replies saying "Lisp solves this" are like car enthusiasts telling you your next car should be a 1982 Dodge Rampage https://t.co/n3lDp2E2sY

2023-02-05 23:54:29 It's accurate that notebooks are terrible in every way, but people love them for one reason: they provide a very tight action->feedback loop.

2023-02-05 23:19:38 Can models based on the Elf-Attention mechanism tackle the ORC challenge?

2023-02-05 18:23:08 I encourage you to check out the README, it has all the information you need to get started :) https://t.co/tIujMsTE7V https://t.co/0gYyD1OsBS

2023-02-05 18:16:46 @FrancoisRozet `__all__` is practically useless since it only affects the behavior of `from x import *`, which is something you should never do anyway. Namex is a proper allowlist and enables a complete disconnect from your directory structure (e.g. you can have virtual namespaces, etc.)

2023-02-05 17:54:03 In the imagination of many city dwellers, farms are places close to nature, slower-paced, more traditional, etc. The reality is that farms in the US are very spread-out factories -- every step of every workflow involves technology -- often not very environmentally friendly

2023-02-05 03:15:11 Will roll this out to KerasTuner/KerasCV/KerasNLP in the near future. And eventually to Keras itself. TF and tf.keras already use a similar system, but the `keras` package itself doesn't do it yet (hence why you're supposed to always do `from tensorflow import keras`).

2023-02-05 03:13:49 There's no shortage of reasons why this is helpful. It seems like it should be a feature in Python... but in general the packaging story in Python isn't quite there yet. https://t.co/5jUoqhADmS

2023-02-05 03:12:23 It's tremendously helpful to be able to create an allowlist of which symbols in the package are part of the public API, and to explicitly specify the path at which these symbols should be accessible by users -- independently of where the code lives.

2023-02-05 03:12:17 I released a tiny library that enables Python package creators to set up an allowlist for their public API. This is similar to how TF and tf.keras export their public API. Just decorate the symbols you want to make public and specify their path. Code: https://t.co/tIujMsTE7V https://t.co/EmiT6xbmQK

2023-02-04 21:22:54 @DigThatData I don't remember word2vec trending on Twitter or being front page news. Or creating a billion dollar market. LLMs are the first big success story in their lineage.

2023-02-04 21:05:27 @RachelVT42 Are you kidding? "All your base" is a meme that wouldn't feel out of place if archaeologists excavated it from an ancient Sumerian site

2023-02-04 20:46:27 Hard to believe "all my apes gone" was just over a year ago. It feels like an ancient artifact now. What new advances will 2023 bring?

2023-02-04 20:45:32 People talk a lot about the speed of progress in AI, but what really amazes me is the pace at which Posting Science has advanced in the past decade. When you look at memes from 2010, they look like they could be from the 1700s compared to modern-day memetic payloads https://t.co/0TohpA3G3F

2023-02-04 19:18:49 @greglinden To be clear, the problem is not being a "middle manager" -- I really believe you can add value at any level of the corporate hierarchy. I was talking about the "zero-value-add middle managers" referenced by the people pointing out the BS generation abilities of ChatGPT.

2023-02-04 19:15:51 It's like thinking that a printer can generate infinite money because it can produce marks on paper that resemble dollar bills. Don't confuse the external appearance of a thing and its underlying functional dynamics.

2023-02-04 19:14:34 It's accurate that humans generate a lot of BS, and superficially that might sound like something you could do with a LLM... but when people do it, it's always as a tool (among others) in a social game that is completely out of reach for non-humans, and likely always will be.

2023-02-04 19:10:16 The only kind of "BS" that can genuinely be automated with a LLM is... boilerplate. But here's the thing -- boilerplate that can be automated away is also boilerplate that we could do away with altogether.

2023-02-04 19:06:23 For instance, the ability to generate confident-sounding BS is obviously not enough to get elected to Congress. It's a necessary, but absolutely not sufficient condition.

2023-02-04 19:05:58 The same is true of many other jobs where BS generation is at play. You see ChatGPT and think, "Hey ChatGPT can sound just like X. Guess we'll automate Xs next!" -- but then you misunderstand what Xs are actually doing. You mistake the cover for the book.

2023-02-04 19:05:09 ChatGPT is obviously not an automation risk for them. You can't automate what they're doing, because it's fundamentally social and psychological in nature. Their "output" is relationships and power dynamics intangibles -- not language.

2023-02-04 19:00:19 The ability to expound confidently on any topic that comes their way is a required, but very minor tool in their arsenal. BS-generation is a necessary but absolutely not sufficient condition to have their job.

2023-02-04 18:58:52 (and yes, I understand it's a joke. Bear with me.) Middle managers are not in the business of generating confident-sounding, fast-talking BS replies. They're in the business of managing their image and relationships to protect their position and advance their careers.

2023-02-04 18:58:51 Heard a few times that "ChatGPT can generate confident-sounding BS to reply to anything in a meeting or email exchange, so the zero-value-add middle managers should fear for their jobs". This is a deep misunderstanding of how those middle managers actually operate. Here's why.

2023-02-04 18:47:01 "we must leverage self-supervised learning at scale to reach human level AI" "Ok, here's a really really large self-supervised model trained on text that achieves some unexpected and cool results" "No, not like this"

2023-02-04 18:44:57 While I agree with Yann that LLMs are not on the direct path to general AI (despite their upcoming practical applications)... I must point out that LLMs are the first big success story of self-supervised learning, something that Yann (among others) has talked about for years https://t.co/LBbHjqF4fE

2023-02-04 18:37:56 @reknubed `len(set(len(side) for side in figure)) == 1` evaluates to True for figure = circle, so "circles are equilateral" checks out

2023-02-04 02:55:29 I still think the best application of ChatGPT so far is generating songs/poems for fun. It's really great at that (I wonder if it has been explicitly fine-tuned for it?)

2023-02-04 02:55:28 Again no tricks here. This is not a particularly selected result either. It's impossible not to run into this pattern after 1-2 questions.

2023-02-04 02:51:58 Naturally it will give you different answers depending on how you prime it. But for this question it trends towards "isosceles". Equilateral is definitely out. When providing the right answer, the justification is a bit curious (it's not isosceles as only 2 sides are equal??) https://t.co/SNv2ExyAeT

2023-02-04 02:46:54 Funnily enough that question was provided to me by ChatGPT itself as an example of a reasoning question

2023-02-04 02:46:18 12 year olds typing their math homework into ChatGPT are in for a world of confusion https://t.co/cgaG2aAi5q

2023-02-04 01:20:46 @NektariosAI Sure, check out KerasNLP https://t.co/avgzTP0nhg

2023-02-03 19:50:38 Thanks everyone for joining! (the call has ended now)

2023-02-03 19:43:27 The thing is, a lot of TV content is already procedural -- there's an algorithm behind it, often including data feedback. It's just that the algorithm is being executed by people for now. Modern AI is mindless, so it's the perfect fit to automate mindless content generation.

2023-02-03 19:41:38 Generative entertainment is absolutely coming within a few years. Things like that always start out as low-quality, "so weird it's funny" type stuff, with a meme-like quality. Next thing you know, it's shockingly similar to actual TV content. https://t.co/HXG6d5q2hE

2023-02-03 19:03:06 You can join the call at https://t.co/wGmrh3GxvP -- we will be discussing the upcoming TF 2.12 release, new KerasCV and KerasNLP features, the new model export flow, and more!

2023-02-03 18:59:53 @thePtroglodytes Sure, anyone can.

2023-02-03 18:59:43 Starting now! https://t.co/yWmNGJCRHp

2023-02-03 17:36:21 New tutorial on https://t.co/m6mT8SaHBD: semantic segmentation with the Segformer model. The impact of Transformers in CV is nascent but growing fast... https://t.co/dGPblDFkYt https://t.co/1M3vgNKgKs

2023-02-03 17:04:03 The Keras community meeting will take place on Meet at https://t.co/iF64og0Xgv today at 11am PT. That's in 2 hours! https://t.co/PhyvGYdGIj

2023-02-03 00:56:52 We're just at the start of the golden age of Applied ML.

2023-02-02 20:59:18 RT @RisingSayak: Ever thought of fine-tuning Stable Diffusion using @TensorFlow and then using it in Diffusers? Presenting a tool for Ke…

2023-02-02 16:15:20 RT @Rainmaker1973: Digital artist Luke Penry creates pristine textures and believable structures of fungi, florals and alien-looking plant…

2023-02-02 02:08:51 Generality is not the ability to handle a very large number of special cases that were explicitly anticipated. It's the ability to handle what you were not prepared for.

2023-02-01 20:04:53 If LLMs were capable of making sense, they could generate their own training data and bootstrap themselves

2023-02-01 19:46:49 The gap between generating words and generating meaning will prove hard to close. Generating meaning requires *a model of the things being talked about* (language as communication), while generating words only requires a model of the structure of text (language as statistics)

2023-02-01 19:45:49 "Can an even number be prime?" "No" "So can this 22-digit even number be prime?" "The number is even and has over 50 digits, it's it's highly unlikely to be prime. While it is possible that the number is prime, ..."
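The fact the model hedges on above is exact and mechanically checkable: 2 is the only even prime, since any larger even number is divisible by 2. A quick trial-division sketch:

```python
# Simple trial-division primality test, enough to verify that
# no even number greater than 2 is prime.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(2))                                   # True
print(any(is_prime(n) for n in range(4, 10**5, 2)))  # False
```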

2023-02-01 19:36:39 I'm told ChatGPT has been upgraded to be able to solve math problems and that it is the future of math tutoring. But my hit rate is ~0 so far... and I wasn't even trying trick questions. Not dissing the system at all -- just a PSA. https://t.co/ZiqN0Tlo7V

2023-02-01 03:44:01 Machine learning https://t.co/pu6LKIKZej

2023-02-01 03:02:10 The ability to train big models is not a moat. Anyone with enough budget can replicate your model in a matter of months. A much better moat is to have a novel and clever way of using AI in a useful product.

2023-02-01 03:00:54 And crucially, I don't think being a provider of AI as a service will be a huge business. It will be a decent-size market, but for the most part AI will be a commodity tech. It will be a feature in a lot of products, built with commodity OSS software and models.

2023-02-01 02:59:21 Similarly, the current wave of AI is not going to be automating away all the jobs and cause mass unemployment. 10 years from now, we will still enjoy similar levels of employment (i.e. close to full employment).

2023-02-01 02:58:12 5 yrs from now it will be obvious that software engineers aren't being 10x more productive than in 2022. They'll be incrementally more productive -- an increase in line with historical averages. IDEs, debuggers, linters, better frameworks -- productivity boosts abound. AI is one.

2023-02-01 02:56:20 Simultaneously, the "10x productivity boost" (or even 5x) that social media engagement farmers have been promising you will fail to materialize. Yes, it will be useful. No, it will not revolutionize the nature of work.

2023-02-01 02:54:59 In the future, everyone will be interacting with such assistant features -- all the time. It will be as widespread a tech interface as touchscreens.

2023-02-01 02:53:30 For instance, it is *possible* that someone will make an AI-first Unreal Engine and take over Unreal Engine. But the much more likely outcome is Unreal Engine getting its own AI.

2023-02-01 02:52:39 For the most part, this will not happen via brand new standalone products. Instead, existing products (IDEs, office solutions, etc.) will integrate AI as a feature.

2023-02-01 02:50:36 The near future of AI is to serve as a universal assistant. Whatever you create on a computer -- slides, code, spreadsheets, docs, tunes, 3D environments, etc. -- you will be able to leverage a digital assistant to help you with boilerplate, filling in details, autocomplete, etc.

2023-01-31 19:00:19 The meeting will happen on Meet. Here's the call link: https://t.co/iF64og0Xgv

2023-01-31 18:41:27 Mark your calendars: we'll do our bimonthly Keras community meeting this Friday (Feb 3) at 11am PT. Everyone can join. We'll discuss the latest on KerasCV, KerasNLP, the new saving APIs, and the upcoming 2.12 release.

2023-01-31 03:28:35 RT @algo_diver: If you are a @TensorFlow and #keras users, check out DreamBooth impl with #KerasCV I had a chance to contribute to this p…

2023-01-30 22:57:26 DreamBooth implemented with KerasCV: https://t.co/Z0issh6t7V

2023-01-30 04:55:47 I can't wait for the AI cultists in San Francisco to create God -- so we can finally kill Him https://t.co/0f4NHHg3PT

2023-01-24 03:22:50 To be clear, I think it's totally possible to define and measure general intelligence in an AI system, and that's what we should do. But some people like to say it's not necessary or possible and want to rely purely on behavior/outcomes -- well, in this case, here's my criteria.

2023-01-24 03:20:25 @filippie509 Pretty much

2023-01-24 03:19:55 @mndl_nyc This is not implied by the statement. Maybe humans will just sit back and drink martinis on the beach.

2023-01-24 03:05:43 Also -- you can achieve this with AGI much weaker than human level, just by virtue of limitless scaling, so it's not a super high bar.

2023-01-24 03:03:38 @8teAPi No, if you had a "human brain on a VM" type of tech you could absolutely do this. In fact you could probably do this with a much weaker-than-human general intelligence.

2023-01-24 03:02:37 You can fake many things, including a Turing test, but you cannot fake economic value created, at least not at scale.

2023-01-24 02:59:18 @StephenPiment Humans (which are autonomous agents) currently produce 100% of the world's GDP, last I checked.

2023-01-24 02:58:04 We will know we have AGI when the majority of the world's GDP is being produced by autonomous AI agents. (And once we get on this trajectory it will be pretty clear, so we'll know much earlier -- it will probably be obvious at 5-10% of global GDP)

2023-01-24 02:56:18 If you want to define AGI in terms of outcomes rather than in terms of its function, then "pass this test designed for humans" doesn't cut it. I'd just use the bar set by Sam Altman in 2019: AGI is that which will "capture the light cone of all future value in the universe".

2023-01-24 02:06:23 Especially on MacOS

2023-01-24 02:05:14 Python environments are just eminently borkable

2023-01-24 02:04:11 Been working with Python for 13 years and I still occasionally end up with a hopelessly borked environment where I have to actually nuke and reinstall the Python interpreter. And yes, I use virtualenv

2023-01-24 00:44:48 Instead of a standard out-of-office auto-reply, use a LLM API to generate the reply email you could plausibly have written (actually, please don't do this)

2023-01-23 22:37:00 @JohnHaugeland Sure, cf https://t.co/djNAIV0cXc

2023-01-23 22:36:16 As much as I regret it, my definitions and standards have been the same for a long time, and recent advances have not brought us any closer to meeting these definitions and standards. Yes, despite all the progress on task-specific benchmarks.

2023-01-23 22:34:20 Folks love to tell me I am "moving goalposts". For the record I've been saying the exact same things since 2016 -- "task-specific skill is orthogonal to intelligence", etc. In 2019 I released my own test of general intelligence, and to this day it is entirely unsolved by AI. https://t.co/d5Ggof9EqY

2023-01-23 22:29:56 @boris_brave This is not a definition, it's just BS.

2023-01-23 22:28:49 @boris_brave You can make a bot that can assemble a Ferrari or that can pass whatever static task-specific benchmark -- without having to possess *any* sort of general intelligence. Task-specific skill is entirely orthogonal to intelligence.

2023-01-23 22:27:45 @boris_brave It's the sort of definition you come up with when you're brand new to the question. Task-specific benchmarks can always be solved without showing any intelligence. And the Turing test outsources the definition to a blackbox panel of judges who themselves don't have a proper definition.

2023-01-16 23:15:16 @blu_dechkin Last number should be ~5000*

2023-01-16 23:14:47 @blu_dechkin Hundreds every month. Right now, every month there are ~1700 new files that import Keras that are *created* at Google (and ~7000 that are edited). This is an all-time high (2x higher than at the start of 2021). For JAX it's about ~1000 files added per month and ~500 edited now.

2023-01-16 16:58:05 (That was the Kaggle 2022 ML &

2023-01-16 16:28:27 Personally I hate having to compare TF/Keras popularity and PyTorch popularity. I don't care what tools you use -- use what you like. I'm just happy growing the Keras user base. But in the face of a flood of FUD and outright lies, I need to set the record straight.

2023-01-16 16:26:09 It appears that some folks have hired a PR firm to peddle stories to journalists saying that "TF is dead". This has been a recurring occurrence over the past year or so. First they were saying "Google has abandoned TF", and now it's "PyTorch has killed TF". Both are 100% false.

2023-01-16 16:25:16 Today, there are more people working on TensorFlow and Keras at Google than at any point before. The Keras team reached its largest size yet in 2022. Keras and TensorFlow *usage* at Google are also at an all-time high. Keras usage alone has increased 2x since the start of 2021.

2023-01-16 16:23:05 The two top-selling ML books of the past 5 years both teach you TF and Keras. Can you guess what they are? One of them had a brand new edition released in late 2022.

2023-01-16 16:21:59 In the StackOverflow survey 2022, 15.3% of devs said they wanted to learn TensorFlow next -- the highest number in its category. Only 8.57% said the same for PyTorch. More developers are learning TF/Keras. https://t.co/5Z1CNi9gbc

2023-01-16 16:19:45 Traffic to https://t.co/jZhirxDWE6 and https://t.co/m6mT8SaHBD is at an all-time high. Usage of TensorFlow/Keras on GCP is at an all-time high. The rate of creation of new Kaggle notebooks for TensorFlow/Keras is at an all-time high.

2023-01-16 16:19:44 In the largest 2022 survey of the ML landscape specifically, 57% of ML devs said they used TF. Only 38% said they used PyTorch. Again, that's 1.5x more devs than PyTorch. This 1.5x ratio is found in virtually all of our metrics.

2023-01-16 16:19:43 It's *actual data* time... TensorFlow &

2023-01-16 01:49:24 The two most widespread cognitive biases in tech are overconfidence and oversimplification. And nowhere are they as perfectly expressed as in the posture "we'll get to AGI within 5 years by scaling up deep learning".

2023-01-15 02:28:51 @NaveenGRao Jeopardy is of the same nature as most of these exams, and arguably harder. I have no doubt that a program specifically trained to pass any of these exams could have been developed as early as 2011. To note, the economic impact of that particular program was ~0.

2023-01-15 02:27:02 @NaveenGRao An AI program became Jeopardy world champion in February 2011, 12 years ago. According to its creators, it was supposed to replace most doctors. Many interpreted the milestone as meaning that we were close to AI that would render tens of millions of people jobless.

2023-01-15 00:39:44 RT @TensorFlow: KerasNLP just introduced its pretrained models API in v0.4, now live on pypi! Check out the Getting Started guide to lear…

2023-01-14 21:08:43 For me, that's the really neat thing about these new Keras features: because they're all based on the same abstractions, they all work smoothly together. What you've learned on one problem can be easily reinvested in the next problem.

2023-01-14 21:07:38 So if you have a set of structured data features that you want to process, and it includes a text paragraph, you just need 2 additional lines to say "I want to embed this paragraph with a pretrained Bert and concatenate the embeddings to the rest of my encoded features"

2023-01-14 21:07:37 Take the new BertBackbone in KerasNLP. You can use it to embed a text paragraph in 2 lines of code. And take the new FeatureSpace utility for structured data processing... You can use them together -- if your data includes a text paragraph, you can use BertBackbone *in* the FS https://t.co/eDVTrQrLYu

2023-01-14 20:51:41 Keras downloads -- making new highs regularly for nearly 8 years https://t.co/db5Y0Vmvzz

2023-01-14 18:02:42 @Moonwalker_d4 Every metric (downloads, StackOverflow, GCP usage, etc.) + large-scale user surveys (the SO survey, the Kaggle survey) shows TF/Keras usage is >

2023-01-14 18:00:45 On Thursday, Keras had its highest single-day download count so far (447,000 downloads in a day).

2023-01-14 17:18:53 Kind of insane that I've been working on Keras for nearly 8 years and it's still growing at ~30% per year. Never thought it would ever get to 1M users, much less the ~2.5M users we have currently (and growing)

2023-01-14 01:42:36 Recall that circa 2015-2016 AI was about to replace half of all jobs, including all drivers, most doctors, etc. People's perception of AI progress is rarely grounded in actual capabilities -- people are always projecting their hopes on the latest hype trend (e.g. deep RL)

2023-01-14 01:40:43 There's a parallel to deep learning itself -- the economic impact of the tech has been ~10% of what folks expected ~8 years ago, and being a provider of deep learning APIs / models has been a rather lousy business. Although individual engineers have done very well for themselves

2023-01-14 01:30:03 Yes, once DL models are used not as raw knowledge stores (which doesn't make a lot of sense) but as knowledge retrievers and action routers, they will be as up-to-date as the underlying database. https://t.co/yHN13PyJeA

2023-01-14 01:28:32 The more useful the tech turns out to be the more it will get commoditized and the harder it will be to monetize (paradoxically). The only clear winners are individual engineers with great deep learning NLP skills.

2023-01-14 01:27:34 What's more, independently of the actual impact of the tech, it's not super likely to turn out to be a great business. A good niche maybe -- a few billion dollars per year as an industry, with many players clamoring for a share and ~20-30% margins.

2023-01-14 01:25:58 With all that said... it's obvious to me that the actual impact of the tech will be maybe ~10% of what the average person on my timeline expects. People have *ridiculously* inflated expectations, that aren't grounded in the actual capabilities (current or future) of these models.

2023-01-14 01:24:44 I wouldn't be surprised if we end up with models that run *locally* (in the browser or on your phone), acting as a conversational interface between you and a server-side knowledge/task backend.

2023-01-14 01:23:28 That means that the AI assistants of the near future will be considerably more capable than what we see today. And they will be a lot cheaper to run too, as model distillation techniques and architecture refinements keep catching up to raw model size.

2023-01-14 01:21:45 Likewise we can interface LLMs with an array of symbolic tools that can shore up their weaknesses -- calculators, interpreters, discrete search programs, SAT solvers, etc.

2023-01-13 01:43:58 New tutorial on https://t.co/m6mT8SaHBD: fine-tuning Stable Diffusion on your own dataset. In this case, by the end of the tutorial you will be able to generate novel Pokemons :) https://t.co/SIcDEotLwO Created by @RisingSayak and @algo_diver

2023-01-10 15:24:48 Created by @halcyonrayes

2023-01-10 15:24:33 New tutorial on https://t.co/m6mT8SaHBD: implementing the Forward-Forward algorithm, a new learning mechanism proposed by Geoffrey Hinton in 2022, that is more biologically plausible and only performs local weights updates. https://t.co/Wup81WaBy5

2023-01-10 01:24:23 Basic data literacy is underrated. It will only get more essential from here on.

2023-01-10 01:21:52 It's funny (not) how conspiracy theorists who "do their own research" anchor their entire worldview to beliefs that can be completely disproved by 30 seconds of googling to find the relevant dataset (e.g. rate of cardiac arrest among young folks, 2018-present) https://t.co/v6VtiZXN1K

2023-01-09 23:17:10 RT @EricTopol: The bivalent booster in people age 65+ compared with those who did not receive it, among >

2023-01-09 21:04:20 @leavittron @weights_biases The best seed is 1337

2023-01-09 15:16:31 RT @PyImageSearch: New tutorial! Face Recognition with Siamese Networks, #Keras, and @TensorFlow Face Recognition Identification via…

2023-01-09 02:04:45 Of course AI is actually useful and promising tech. The only question is *how* useful and promising. It's overhyped, but it's not pure hype. Far from it. I'm just generally allergic to grossly inflated expectations and irrational exuberance.

2023-01-09 02:02:29 I meant to compare the surrounding hype generation dynamics (as discussed towards the end of the thread). Expectations unmoored from reality becoming a universally accepted, self-evident canon once the same narratives have been repeated enough times in the echo chamber.

2023-01-09 02:00:26 If this had been a blog post and not a random spur-of-the-moment train of thoughts, I wouldn't have made the AI/web3 comparison. It was counterproductive, as it is what most folks are now focusing on. The two are, in fact, very much not the same. https://t.co/XxkKktPXDE

2023-01-09 01:31:26 @levie @migueldeicaza True... But with enough hype, enough $ raised and not enough $ returned, you can create a new AI winter. It has hurt the field a few times before.

2023-01-09 01:17:09 @migueldeicaza Check it out https://t.co/zbtTPEBMtm

2023-01-09 01:15:13 @migueldeicaza AI is real tech, it's useful and it has a future, unlike web3. But the surrounding bubble formation dynamics are closely similar. Plus, the web3 swindlers have pivoted from pushing NFT projects to pushing "how to win at SEO with AI" playbooks. Did you notice?

2023-01-08 20:16:04 @thearigoldberg For the "first-principles" part, if you're getting started you can pick up my book. That's what it's for!

2023-01-08 20:15:36 @thearigoldberg Build something you're passionate about (that will keep you engaged) and focus on developing first-principles understanding of deep learning, because that will always pay dividends. I'd personally avoid constantly jumping on the latest hyped up thing

2023-01-08 20:09:53 Most of all, the way that narratives backed by nothing somehow end up enshrined as self-evident common wisdom simply because they get repeated enough times by enough people. The way everyone starts believing the same canon (especially those who bill themselves as contrarians)

2023-01-08 20:07:23 The fact that investment is being driven by pure hype, by data-free narratives rather than actual revenue data or first-principles analysis. The circularity of it all -- hype drives investment which drives hype which drives investment. The influx of influencer engagement bait.

2023-01-08 20:05:12 One last thought -- don't overindex on the web3 <

2023-01-08 19:59:15 Anyway, hype aside, I really believe there's a ton of cool stuff you can build with deep learning today. That was true 5 years ago, it's true today, and it will still be true 5 years from now. The tech is super valuable, even if it attracts a particularly extreme form of hype men

2023-01-08 19:30:43 @charruyerfrance Regulating successful technology to protect the public is a necessary cost, but to think of regulation as a European "success" is a big mistake. Europe should seek to build profitable AI companies and should regulate AI while bearing in mind the success of its own startups.

2023-01-08 18:43:00 One of my favorite things about web3 is that many of the folks who were hyping it up in 2021 are now dismissing it as *always* having been a terrible idea and a bubble, self-evidently. Funny how perceptions change...

2023-01-08 18:34:57 "Narratives based on zero data are accepted as self-evident" https://t.co/3a4NJLQriR

2023-01-08 18:26:03 Whatever happens, we will know soon enough. Billions of dollars are being scrambled to deploy ChatGPT or similar technology into a large number of products. By the end of the year we will have enough data to make a call.

2023-01-08 18:24:22 I think the actual potential of ChatGPT goes significantly further than that, though. It will likely find success in consumer products, and perhaps even in education and search.

2023-01-08 18:22:16 This is consistent with the primary learning from the 2020-2021 class of GPT-3 startups (a category of startups willed into existence by VCs and powered by hype), which is that commercial use cases have been falling almost entirely into the marketing and copywriting niches

2023-01-08 18:20:16 Now, seeing such tweets is compatible with both the bull case and the bear case. If the tech is revolutionary, it *will* be used in this way. What's interesting to me is that ~80% of ChatGPT tweets with >

2023-01-08 18:17:53 That's right, it's SEO/marketing engagement bait. ChatGPT has completely revolutionized the engagement bait tweet routine in these niches. Some of it directly monetized (pay to unlock 10 ChatGPT secrets!), most of it is just trying to collect eyeballs. https://t.co/DtHxcm5SlO

2023-01-08 18:15:09 One thing I've found endlessly fascinating is to search Twitter for the most popular ChatGPT tweets, to gain insight into popular use cases. These tweets fall overwhelmingly into one category (like 80%). Can you guess what that is?

2023-01-08 18:11:56 As far as we know OpenAI made something like 5-10M in 2021 (1.5 years after GPT-3) and 30-40M in 2022. Only image generation has proven to be a solid commercial success at this time, and there aren't that many successful players in the space. Make of that what you will.

2023-01-08 18:01:12 Crucially, any sufficiently successful scenario has its own returns-defeating mechanism built-in: commoditization. *If* LLMs are capable of generating outsized economic returns, the tech will get commoditized. It will become a feature in a bunch of products, built with OSS.

2023-01-08 18:00:58 For this reason I believe the actual outcome we'll see is somewhere between the two scenarios. "AI as our universal interface to information" is a thing that will definitely happen in the future (it was always going to), but it won't quite happen with this generation of the tech.

2023-01-08 18:00:39 So far there is *far* more evidence towards the bear case, and hardly any towards the bull case. *But* I think we're still very far from peak LLM performance at this time -- these models will improve tremendously in the next few years, both in output and in cost.

2023-01-08 17:49:11 The bear case is the continuation of the GPT-3 trajectory, which is that LLMs only find limited commercial success in SEO, marketing, and copywriting niches, while image generation (much more successful) peaks as a XB/y industry circa 2024. LLMs will have been a complete bubble.

2023-01-07 19:50:49 It's usually worth doing things the slightly harder way if you learn more from it. E.g. reading the source code instead of StackOverflow.

2023-01-07 18:24:55 It's amazing how Twitter has been unable to roll out a widely requested basic quality of life feature for 15 years, all because of preemptive worrying about imaginary harms -- even though many other apps faced the same risks and shipped edit functionality just fine

2023-01-07 18:22:18 The de-facto function of the Twitter edit feature is to highlight the fact that you've edited a tweet and to give maximum visibility to the pre-edit version. It's always a better idea to delete and repost (even though caching means the old version stays visible for many).

2023-01-06 09:16:35 The second bitter lesson

2023-01-06 09:15:32 Running tons of experiments while having very few priors about what the solution should look like is tremendously more effective than coming up with fancy theories about how the brain really works and repeatedly trying to prove those theories. Numenta also comes to mind here...

2023-01-06 09:07:23 There's an important lesson here, and it isn't just "modern deep learning has nothing in common with the brain and wasn't inspired by it"

2023-01-06 09:03:43 *has

2023-01-06 09:01:54 ...everything that has durably outperformed (backprop, relu, dropout, MultiheadAttention, MixUp, separable convs, BatchNorm, LayerNorm and many others) makes no sense biologically and has basically been developed by trying a bunch of things and keeping what worked empirically

2023-01-06 08:58:24 When it comes to similarities between the brain and deep learning, what's really striking is that everything that was actually bio inspired (e.g. sigmoid/tanh activations, spiking NNs, hebbian learning, etc.) had been dropped, while... (Cont.)

2023-01-06 08:38:25 Chill, folks, it's a joke

2023-01-06 01:35:38 Perhaps the reason there's 85% of dark matter is that the universe is only rendering what we're looking at, to save resources

2023-01-05 19:13:55 @athundt Something like this hierarchy of generalization levels? (Nothing to do with how generalization is achieved though, e.g. interpolation, extrapolation or discrete search) https://t.co/FY8LttYzKI

2023-01-05 19:02:24 I would have been cautiously skeptical of that claim when I started doing NLP with deep learning in 2014. It's certainly a fundamentally novel and insightful realization.

2023-01-05 19:01:13 The fluency of LLMs tells us that language (and much of human knowledge, but not all of it) can be embedded on a continuous manifold capable of non-trivial interpolative generalization. This is fairly intuitive for images, but is extremely unintuitive for language &

2023-01-05 13:21:06 New tutorial on https://t.co/m6mT8Sa9M5: serving models with TensorFlow Serving. https://t.co/mkaJKbQjON In a few simple steps, create a gRPC or REST API to efficiently serve any Keras model.

2023-01-04 23:06:00 Never dismiss a new tech that works poorly, if its improvement rate is high. It might surprise you a few years later. It's only time to dismiss it once it has become clear it has hit an insurmountable improvement bottleneck. But that's usually very hard to correctly identify...

2023-01-04 18:53:15 I've been enjoying the defeat of Trump in 2020, the failure of his January 2021 coup attempt, the failure of Putin's invasion in 2022 (still ongoing), and the collapse of the NFT and crypto bubble in 2022 (also ongoing). And I think the reality check is just getting started.

2023-01-04 18:50:45 In 2023, I'm looking forward to Putin's total defeat in Ukraine. First of all because Ukraine deserves freedom from tyranny and the right to self-determination, but also because a complete defeat will further humiliate the far-right clowns in the West who somehow support Putin.

2023-01-04 13:53:30 This is a cognitive fallacy that you see all the time when people compare deep learning and the brain.

2023-01-04 13:53:08 So saying "both of these systems share the same macrostructure" absolutely does not imply "the principles underlying both of these systems are the same".

2023-01-04 13:52:56 You can arrive at similar solution structures via vastly different approaches, because the solution structure is to a large extent dictated by the problem itself.

2023-01-04 13:48:50 It's simply inherent to the structure of the problem -- in this case, to the structure of the visual world. It's like the fact that road vehicles feature wheels.

2023-01-04 13:47:46 It doesn't tell you whether both systems are similar in the ways that actually matter. *All* complex visual processing systems will exhibit these characteristics regardless of how they work -- backprop, layerwise PCA, hebbian learning, etc.

2023-01-04 13:46:44 Similarly, both the visual cortex and a CNN construct visual features by first learning low-level edge-like features, then gradually constructing higher-level feature hierarchies -- this is simply the application of the principles of modularity and hierarchy to the visual world.

2023-01-04 13:44:30 Otherwise you could say that the human body is similar to a neural network, or the US military is similar to a neural network, etc... all complex systems will necessarily share the set of characteristics required for managing complexity.

2023-01-04 13:43:08 All complex systems are modular and hierarchical -- by necessity. So when two complex systems are both modular and hierarchical (say, the brain and a neural network), that doesn't mean they're similar to each other. It just means they're both complex systems.

2023-01-03 15:48:42 I think AI assistance won't be limited to completing code or chatting with StackOverflow, mind you. AI will help with the hard problems too. A very different kind of AI. But these things take time. SWEs are not going anywhere, for the foreseeable future.

2023-01-03 15:46:39 The hard problems in CS are centered around designing generalizable abstractions (i.e. thinking clearly), collaborating, and identifying the right problems. "Writing code" and "discovering past ways to do X" are definitely valuable, but they're a side note in the big picture.

2023-01-03 15:35:58 Companies in 2025 will not need to hire 5x fewer SWEs, nor will SWEs produce 5x more software (in terms of features, not code). But will tooling and productivity have improved? Definitely.

2023-01-03 15:34:48 I've been using state-of-the-art coding assistance tools for over a year now in my daily work. I'd say the productivity boost is ~5%, mostly from smarter autocompletion. I've used ChatGPT too but found it largely useless compared to a search query on GitHub or StackOverflow.

2023-01-03 15:33:22 But it will have become evident that SWEs are not being 5x more productive than they were in 2022. They will be somewhat more productive, in the same way that SWEs in 2022 are somewhat more productive than in 2019.

2023-01-03 15:31:05 My prediction: in 2025 there will be millions more SWEs than today. The nature of their work will be close to what it is now. Of course their tooling will have evolved and they will use a lot more automation (note: we already use a *ton* of automation today).

2023-01-03 15:27:10 I first heard the claim in a modern context in 2016. It was targeting the mid-2020s for the complete end of the profession

2023-01-03 15:26:20 Then again folks have been making unhinged predictions like this for years. Radiologists would be out of a job by 2018. Taxi drivers by 2019. Not to mention, 2022's "art is dead". "Software engs will be out of a job within a few years" has been a recurring theme since the 70s.

2023-01-03 15:24:13 First one has been the case since at least 2013 (automated linters that create GitHub PRs and the like). The other two predictions appear to stem from a profound misunderstanding of AI progress and the nature of software engineering. https://t.co/0iPlfntXp1

2023-01-03 11:47:59 Tweets are generative prompts for your thoughts.

2023-01-03 11:34:38 Making something a lot easier to do isn't incremental improvement, it's zero-to-one enablement: a large group of folks who were previously not able to do it, now can.

2023-01-02 11:34:28 We've received 12 awesome project submissions. Congrats to all who entered! We will announce winners within a couple of weeks. https://t.co/LYjoadwTDI

2023-01-02 10:22:55 The benefits of reading aren't just about having read a specific book and having learned something from it. The experience of reading books is in itself beneficial, in a way that's not strongly tied to content. It isn't just a means to an end.

2023-01-01 18:30:13 Hence why video games are the royal road to learning programming (among countless examples...)

2023-01-01 18:27:07 The fundamental trick to teach a kid something new is to make them excited to learn it by leveraging things they already love

2023-01-01 17:06:56 Reading books is underrated.

2023-01-01 14:23:03 Calvin and Hobbes is really special among syndicated comic strips in that it is relatable, intelligent, and tasteful. Exceptionally rare for the genre

2023-01-01 07:27:48 Happy new year! May this year bring you inspiration, productivity, and happiness.

2022-12-30 16:49:22 RT @minxdragon: Love Keras! It is featured heavily in my thesis and in my latest project!

2022-12-30 16:49:18 RT @RisingSayak: For those who joined the ship! Some serious stuff in the thread!

2022-12-30 15:56:36 We've recently released KerasNLP 0.4, with a ton of new functionality. The API really exemplifies "progressive disclosure of complexity": basic use cases are dead simple, and you get as much customization control as you need for more advanced use cases. https://t.co/sjSLdUhIuF https://t.co/xgCHzNKYS4

2022-12-30 14:38:46 I hardly play video games these days, but if you're looking for a recommendation, Heroes of the Storm is an extremely underrated Moba. Here's one of my best HotS moments, a 1v4 fight from a 2020 game (I play Fenix) https://t.co/gSvwgS9uNy

2022-12-30 09:44:50 What is life if not the pursuit of knowledge, beauty, and love -- and having the freedom to engage in it

2022-12-30 08:18:48 @RisingSayak Absolutely! This is coming as soon as we make it the default format.

2022-12-30 08:18:15 RT @aureliengeron: This whole thread is a must-read. Congrats to everyone who contributed to Keras &

2022-12-29 20:13:59 RT @EricTopol: For people age 65+, the bivalent booster linked to >

2022-12-29 17:39:14 RT @penstrokes75: It's been 10 months since I started contributing to KerasNLP, and what a journey it's been! Loved every moment of it

2022-12-29 17:34:53 RT @fadibadine: What a year for Keras! A lot of libraries and new features that help #ML researchers and practitioners achieve more and f…

2022-12-29 16:24:30 RT @gusthema: A lot happened this year! Take a look

2022-12-29 16:09:10 RT @A_K_Nain: Keras is PS: Glad to have made some good contributions in 2022. Looking to do more in 2023

2022-12-29 15:50:33 Here's to a great 2023!

2022-12-29 15:50:13 New dataset utilities: audio_dataset_from_directory, split_dataset... SharpnessAwareMinimization Model training... warmstart_embedding_matrix, OrthogonalRegularizer, MultiHeadAttention auto masking, etc.

2022-12-29 15:49:58 And there's still much more! We've added new layers: GroupNormalization, UnitNormalization, EinsumDense... A SidecarEvaluator for async model evaluation on a different device... KerasTuner improvements...

2022-12-29 15:48:43 You can start using it with 2 lines of code, and you customize it up to an arbitrary level of control (progressive disclosure of complexity in action again!). Get started with FeatureSpace here: https://t.co/xS5j5PjjTL

2022-12-29 15:48:22 Next up: we've added a super cool one-stop utility for structured data preprocessing. Categorical feature indexing, encoding, hashing, crossing, numerical feature normalization or discretization – it has everything you need. https://t.co/vVJelq6auP
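The two-lines-to-start claim can be sketched with `keras.utils.FeatureSpace`. This is a minimal illustration, not the official tutorial; the feature names and tiny in-memory dataset are made up, and it assumes a TensorFlow/Keras version that ships FeatureSpace (2.12+):

```python
import tensorflow as tf
from tensorflow import keras

# Declare how each raw feature should be preprocessed.
feature_space = keras.utils.FeatureSpace(
    features={
        "age": keras.utils.FeatureSpace.float_normalized(),      # z-score
        "job": keras.utils.FeatureSpace.string_categorical(),    # one-hot
        "clicks": keras.utils.FeatureSpace.integer_hashed(num_bins=16),
    },
    output_mode="concat",  # concatenate everything into one vector
)

# Illustrative raw data as a dict-of-columns dataset.
raw = tf.data.Dataset.from_tensor_slices({
    "age": [25.0, 37.0, 52.0],
    "job": ["engineer", "teacher", "engineer"],
    "clicks": [3, 11, 7],
})
feature_space.adapt(raw)  # learn vocabularies and normalization statistics

# Encode one sample into a single concatenated feature vector.
for sample in raw.take(1):
    encoded = feature_space(sample)
```

From there, customization (custom preprocessors, crosses, different output modes) layers on top of the same object.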

2022-12-29 15:48:02 Next up: we've launched a fully redesigned Optimizer API. Optimizers are now super easy to customize (just 3 methods to implement, like for layers) and they're faster thanks to XLA compilation. We've also released AdamW &

2022-12-29 15:46:59 You can start saving models in the new format via `model.save("mymodel.keras", save_format="keras_v3")`. Reload works as usual -- `model = keras.models.load_model("mymodel.keras")`

2022-12-29 15:45:47 And it's fully safe by default – no bytecode or pickling is involved. Note that this means that lambdas are disallowed as part of the format – but you can still load them by setting `safe_mode=False` if you trust the model source.

2022-12-27 20:48:49 It saddens me to see long-defeated childhood diseases start to make a comeback. These children are collateral damage of social media disinformation campaigns. https://t.co/FdkX7E18Do

2022-12-25 09:14:11 Merry Christmas! I wish you and your loved ones lots of happiness, good health, and love

2022-12-24 12:04:14 This is neither a defect nor a quality, it's simply the way complex systems are created.

2022-12-24 12:03:19 The more complex the system, the more it is the result of organic local evolution rather than central top-down planning. Long-lived software systems look and feel increasingly biological as time passes.

2022-12-24 10:20:09 Reasoning and pattern recognition are abilities, not problem types. They're what you use to solve problems in different settings, they're not inherent to the problems themselves.

2022-12-24 10:19:27 But if you're given a similar problem again and again... then you'll start noticing patterns. Maybe you'll become able to make a pretty good guess just from the look &

2022-12-24 10:17:49 You're going to have to use the shape equations to make predictions about the color of certain pixel coordinates, find discriminative ones, and fetch the corresponding cubes from the drawers to make a conclusion. (One of several possible methods you could come up with!)

2022-12-24 10:16:11 Reasoning is what you use to make sense of things that aren't a simple interpolation of things you've seen before.

2022-12-24 10:16:10 You'd have to use reasoning if the problem comes in a form that you've never seen before, that renders your pattern recognition ability ineffective. Let's say the task specification comes in the form of shape equations in the 2D plane, and your images come in the form of...

2022-12-24 10:12:40 The former sounds perhaps stranger, so here's an example. Let's say you have to tell whether a given image contains a square or a circle -- a canonical perception problem. Sounds easy enough if you have a well-trained visual system, right? How would reasoning come into play?

2022-12-24 10:11:12 The latter is easy to picture -- if you've seen thousands of RPM IQ puzzles, you will develop pattern recognition intuition for the templates they follow and you'll become able to solve them in your sleep. Every new puzzle you see will be a small variation of a known pattern.

2022-12-24 10:08:37 Any task, even those canonically considered to be perception problems, can be solved with reasoning (if working with very little data). Inversely any task, even those canonically considered to be reasoning problems, can be solved with pattern recognition (given sufficient data). https://t.co/ROot0YQATa

2022-12-23 19:05:58 I'm not a religious person, but I do believe there are profound teachings to be found in the New Testament and the Gita

2022-12-23 14:41:30 8. At least in the Android app, selecting text makes it disappear. This happens both in dark mode and light mode. https://t.co/WqeOIJzdTs

2022-12-23 13:49:59 RT @Kasparov63: Because it's so outrageous, I'm going to start at the bottom of the barrel with Tucker for this thread on Zelensky's visit…

2022-12-23 11:39:47 Illustration of the bot problem right in this thread https://t.co/BgGtLVboWa

2022-12-23 11:27:32 @jeckwild "a system is defined by its properties" is a tautology. Of course it is. What I'm saying is, just because something looks like cake to you doesn't mean you can eat it. Superficial appearances can be deceptive.

2022-12-23 10:54:47 6. More bots everywhere. Feels like they turned off the spam filter 7. Higher toxicity in my replies. I don't know why this is, so this might not necessarily have been caused by any change in the product. Maybe folks just feel like they have permission to be their worst selves

2022-12-23 10:54:46 4. Whatever this is https://t.co/GIXbMSLkdZ

2022-12-23 10:54:45 It's amazing how fast the Twitter UX has been degrading over the past few weeks. So far: https://t.co/sEoqHtPpBM

2022-12-23 10:41:32 I always come back to this cognitive fallacy -- do not confuse the outwards appearance of a thing and its internal functional dynamics. Just because X looks like Y from your perspective does not mean that X and Y are equivalent.

2022-12-23 10:39:54 The *same* task can be solved with reasoning, if it's novel to you, or via pattern recognition, if you've seen a similar task 10,000 times before. Some folks think that reasoning is about solving anything that *looks* like a reasoning problem. But reasoning is not an aesthetic!

2022-12-23 10:39:53 It's good to remember that reasoning is not linked to a *type* of problem (e.g. an IQ test), it's an ability. Any problem can become a pattern recognition problem given sufficient training data + test inputs that stay close to the training data.

2022-12-23 09:33:53 @summerstay1 ARC isn't visual, it's not images. It's grids of discrete symbols. It's sequence data, but 2D.

2022-12-23 09:20:14 @summerstay1 Just flatten the grids and concatenate them, with some verbal instructions in between the grids. You might want to try various flattening schemes, and you want to make sure tokenization doesn't destroy information with your encoding scheme of choice.
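The flattening approach described above can be sketched in a few lines. This is an illustrative sketch only (the helper names `flatten_grid` and `make_prompt` are made up, not from the tweet): serialize each 2D grid of discrete symbols into a 1D token sequence, using a row separator so the 2D structure survives tokenization, and interleave brief verbal instructions between example grids.

```python
# Illustrative sketch: flatten ARC-style grids of discrete symbols into a
# token sequence suitable for an LLM prompt. Helper names are hypothetical.

def flatten_grid(grid, row_sep="|"):
    """Turn a 2D grid of integer symbols into a space-separated string,
    with a separator marking row boundaries."""
    return f" {row_sep} ".join(" ".join(str(cell) for cell in row) for row in grid)

def make_prompt(train_pairs, test_input):
    """Concatenate input/output example grids with verbal markers in between."""
    parts = []
    for inp, out in train_pairs:
        parts.append(f"Input: {flatten_grid(inp)} Output: {flatten_grid(out)}")
    parts.append(f"Input: {flatten_grid(test_input)} Output:")
    return "\n".join(parts)

# A trivial left/right flip -- the kind of known pattern LLMs do solve:
train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(make_prompt(train, [[5, 6], [7, 8]]))
```

As the tweet notes, it is worth trying several flattening schemes and checking that the tokenizer of the model you target does not merge cells across the row separator.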

2022-12-23 09:18:10 Despite this novelty element, kids as young as 5-6 can solve a large number of ARC tasks with no prior practice and no task-level explanation. Starting around 9-10 they can solve nearly all of ARC save for the most difficult tasks. This is "extreme generalization" in action.

2022-12-23 09:16:10 In general, it should be expected that LLMs can solve any problem that has practice data available, and that becomes a pure pattern recognition problem after practice. This includes all IQ test tasks that weren't designed for novelty.

2022-12-23 09:13:19 This is circumstantially confirmed by the fact that, when translating ARC problems to sequences, the largest LLMs out there (not just GPT-3, but *much* larger ones as well) score close to zero. Problems that do get solved are known ones, such as a simple left/right flip.

2022-12-23 09:11:31 So far all evidence that LLMs can perform few-shot reasoning on novel problems seems to boil down to "LLMs store patterns they can reapply to new inputs", i.e. it works for problems that follow a structure the model has seen before, but doesn't work on new problems.

2022-12-22 19:03:30 @DataScienceHarp Was this rephrased by a LLM? Sure sounds like it

2022-12-22 18:05:43 Advent of Code 2022 in pure TensorFlow... good way to familiarize yourself with some of the lesser-known TF features, like regexes and TensorArray. https://t.co/tEScbJpWTQ

2022-12-22 17:55:01 RT @quorumetrix: I’ve made this video as an intuition pump for the density of #synapses in the #brain. This volume ~ grain of sand, has >

2022-12-22 17:34:11 It's very much akin to building a new city -- according to your own plans, responding to your own needs -- on top of ruins, reusing the bricks and steel beams lying around your environment. It absolutely cannot be reduced to parroting the old environment.

2022-12-22 17:31:55 Surely enough, toddlers' own words and phrases have their origins (sometimes distantly) in imitation. But they differ from the originals. They use them in their own way, long after they become able to understand and pronounce the exact originals.

2022-12-22 17:29:24 Language learning isn't purely mimetic. Mimesis is used as a way to feed raw semantic &

2022-12-22 16:56:36 @tdietterich The replies are really just a taste of what goes on in the far-right disinformation bubble. It has gotten worse in recent months...

2022-12-22 13:14:42 RT @JillDLawrence: Provocative @djrothkopf analysis of Presidents Zelensky and Biden: Both were grievously underestimated by Putin and, per…

2022-12-22 11:25:55 Visualization improves cognitive utilization. The interface is never just a convenience. It's always an essential part of the system. Often the most critical part.

2022-12-22 11:23:42 Representing data in visual form helps you take advantage of what you're naturally good at -- visual pattern recognition -- to better make sense of it. It plays to your strengths.

2022-12-22 08:42:52 Putin himself frames his war of conquest and genocide as a confrontation against the democratic West. He orders his troops to commit war crimes -- torture, rape, civilian murders. Those siding with Putin could not possibly be any more deplorable.

2022-12-22 08:31:23 Zelensky embodies courage, determination, and sacrifice in a fight for the survival of his country and his people against a genocidal dictator. The people who mock him and his fight are telling you very explicitly what and who they stand for.

2022-12-22 05:55:20 RT @Scott_Maxwell: In Florida, basic childhood immunizations just hit a 10-year low. Previous generations did what was needed to all but er…

2022-12-21 20:33:40 RT @BillKristol: Two men meet who have met the moment, and whose leadership has made it possible that 2022 will be an inflection point in t…

2022-12-21 13:41:39 Twitter is great because it gives you unfiltered visibility into the thoughts and decision-making process of some of the most successful people in the world. This has completely cured my impostor syndrome.

2022-12-21 12:54:38 RT @doctorow: Nice for some https://t.co/shQiIh2wT2 https://t.co/c7Pkpp65vO

2022-12-20 14:03:03 @awsaf49 It has to use `keras_cv.models.StableDiffusion`, specifically.

2022-12-20 13:39:02 3. Texture generation https://t.co/q45nT5ZG4b

2022-12-20 13:37:31 2. Visual concept learning and remixing https://t.co/6mpyozrhFV

2022-12-20 13:37:19 Some previous examples: 1. Latent space walks https://t.co/tCcC8Z5aVl

2022-12-20 13:36:20 You still have 12 days to enter the Keras community prize. Submit your projects based on KerasCV StableDiffusion and win prizes. https://t.co/wV1eOka4h5

2022-12-20 13:34:39 RT @divamgupta: You can create tile-able textures from Stable Diffusion. Here's how you can do this using a few lines of code in Keras: htt…

2022-12-19 19:48:45 RT @January6thCmte: "We understand the gravity of each and every referral we are making today... just as we understand the magnitude of the…

2022-12-19 19:48:43 RT @January6thCmte: The fourth and final statute we invoke for referral is Title 18 Section 2383. This statute applies to anyone, who inci…

2022-12-19 19:48:39 RT @January6thCmte: Third, we make a referral based on Title 18 Section 1001, which makes it unlawful to knowingly and willfully make mater…

2022-12-19 19:48:37 RT @January6thCmte: Second, we believe that there is more than sufficient evidence to refer former President Donald J. Trump, John Eastman,…

2022-12-19 19:48:35 RT @January6thCmte: The first criminal statute we invoke for referral is Title 18 Section 1512(c). We believe that the evidence assembled…

2022-12-19 19:48:27 RT @January6thCmte: "Our Committee had the opportunity last Spring to present much of our evidence to a federal judge... The judge conclude…

2022-12-19 19:27:44 RT @MIT_CSAIL: An IBM slide from 1979. https://t.co/kZzSr2mf4A

2022-12-19 19:09:20 @AdamSinger You will fill the forms. You will live in the pod.

2022-12-19 18:17:33 RT @TensorFlow: Videos should be accessible, useful, enjoyable, &

2022-12-19 12:04:56 @MNateShyamalan This is "owned by dairy queen" levels of shame

2022-12-19 12:00:38 @MNateShyamalan Imagine Ryanair beating you to a tweet https://t.co/N6fBz5x72A

2022-12-19 11:38:02 Writing essays isn't just a way to communicate about your ideas. It's mostly a way to *develop* your ideas -- a tool for thought. It's an opportunity to sit down, connect the dots, and turn your vague hunches into actionable mental models.

2022-12-19 08:05:18 @Plinz Wait, is that a talking rabbit

2022-12-19 07:58:28 This pattern of randomly making up unhinged edicts, fumbling their rollout, then immediately walking them back -- repeatedly -- feels oddly familiar

2022-12-18 23:12:20 RT @JuddLegum: Musk's game plan for Twitter: 1. Make up the rules as he goes along with no notice or public discussion 2. Retroactively a…

2022-12-18 22:57:39 @kannarkk Thanks for letting me know. Please report the impersonator if you can.

2022-12-18 22:42:30 Scratch that, >

2022-12-18 22:24:59 If something happens, you'll always have my newsletter https://t.co/b678OACjUJ

2022-12-18 22:15:55 The levels of free speech on this site are off the charts

2022-12-18 22:14:06 Seriously wondering how long I have till the ban hits

2022-12-18 22:12:53 How it started / how it's going / wait what now? https://t.co/gOdVufMsW3

2022-12-18 20:53:10 It's a personality thing I guess. Some types love the taste of boot!

2022-12-18 20:52:18 "Banning your political opponents is so great for free speech Sir. Such a clever move Sir. You are doing so much for Humanity Sir. You're the greatest man alive Sir. Shluuuuurp" Not even exaggerating -- if you follow some of the far-right pundits on this site you know how it is.

2022-12-18 20:52:06 E. M. taking a sledgehammer to Twitter has been sad to witness, but on the plus side it has been really funny to watch the squad of eager bootlickers that rushes to give him a tongue bath whenever he does something dumb and/or evil.

2022-12-18 20:40:13 @turincomplete The nice thing about having tons of byzantine rules is that you are *always* in violation of some of them, so it's always possible to find a retroactive justification to ban someone you don't like when you feel like doing so for any arbitrary reason or no reason at all.

2022-12-18 20:38:14 @avinash1 @StrapperSid Try Debirdify

2022-12-18 20:33:42 @tweetsfromdrdan I used Debirdify to import my follow list (the hit rate was low though)

2022-12-18 20:31:46 I've gained ~150 followers on Mastodon in the past 45 minutes (presumably this is from follower list scraping tools). While I'm not leaving Twitter, I expect to start being more active over there. Username in bio if you want to follow me.

2022-12-18 19:39:59 Twitter usage is at an all time high! Also, it is now illegal to leave. Please don't leave me

2022-12-18 19:28:53 Scrambling to reduce access to alternatives to prevent folks from leaving ("building a wall to keep people in") is a classic move that tells you everything you need to know.

2022-12-18 10:48:39 Let's think step by step. https://t.co/QQIyi8UsWj

2022-12-18 08:31:20 Productivity is not "I'm familiar with more frameworks" or "I can type faster". It's almost entirely "I can develop abstractions that I generally don't need to revisit after introducing them". That's it.

2022-12-18 08:29:35 This is also the source of large productivity discrepancies between software engineers. Being able to get it mostly right the first time has exponential compounding effects.

2022-12-18 08:18:20 "I have solved X in a dependable and generalizable way, I won't need to go back to that problem and add random hacks to my solution to address things I didn't see coming" is pure bliss. Peace of mind.

2022-12-18 08:15:57 This is extremely hard to achieve. You get there not via clever code but via clean abstractions.

2022-12-18 08:15:41 The top productivity hack in software development is to build foundations that are dependable enough that you can keep building on top without having to go back and modify them as you go. 100x time saver.

2022-12-18 05:27:08 RT @BMeiselas: Washington Post reporter Taylor Lorenz has been suspended from Twitter, presumably after sending this tweet https://t.co/dY5

2022-12-17 21:38:50 RT @scienceisstrat1: Gun violence is now the leading cause of death for American children https://t.co/Ohg46Yx55b Cc: @NickKristof @dwall…

2022-12-17 20:18:01 RT @LuWrites: Snowflake on a raven's wing. (Photo by Shawn Bergman) https://t.co/V2IwGRq0fD

2022-12-17 18:33:41 RT @tedlieu: I’m going to summarize the stupid “Twitter Files part 6” by partisan hack @mtaibbi. So the FBI during the Trump Administration…

2022-12-17 18:33:36 RT @tedlieu: So the FBI was telling folks, including social media companies, about foreign influence operations by Russia, China and other…

2022-12-17 14:45:48 RT @AdamKinzinger: Insane https://t.co/OdqPeMZVAW

2022-12-17 14:39:49 Real-world ML is often adversarial -- your adversaries will adapt to your moves. You want to keep them in the dark and only block them at the finest level of granularity possible, in a way that makes it hard for them to even tell they were detected. https://t.co/fBf2o0HU4J

2022-12-17 14:35:11 Blocking an ISP will *increase* spam, because you still deal with the same amount of spammers, but your ability to detect them is decreased. It's a rookie move. This is a kind of situation you see a lot in real-world ML, not just with spam and fraud detection...

2022-12-17 14:34:32 It's far better to use IP address groups as a feature in a spam classifier (applied to individual posts) than to outright block an ISP. Keeping your spammers concentrated in a few ISPs is a *good* thing for you, since it makes spam easier to detect.
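The idea of using IP/ISP groups as a classifier feature rather than a hard block can be sketched as follows. This is a hypothetical illustration (the function names and data are made up, and a real system would use a proper trained model with many more signals): estimate a per-ISP spam rate from historical labels, then blend it with other per-post signals so individual posts get blocked at fine granularity while the ISP itself stays unblocked.

```python
# Hypothetical sketch: treat the sender's ISP (or IP prefix) as one feature
# among many in a per-post spam score, instead of hard-blocking the ISP.
from collections import Counter

def isp_spam_rates(labeled_posts):
    """Estimate P(spam | ISP) from historical (isp, is_spam) pairs."""
    totals, spam = Counter(), Counter()
    for isp, is_spam in labeled_posts:
        totals[isp] += 1
        spam[isp] += is_spam
    return {isp: spam[isp] / totals[isp] for isp in totals}

def score_post(isp, other_signals, rates, prior=0.5):
    """Blend the ISP signal with other per-post signals (both in [0, 1]).
    Block individual posts above a threshold, not the whole ISP."""
    isp_signal = rates.get(isp, prior)
    return 0.5 * isp_signal + 0.5 * other_signals

# Toy history: isp_a is spammy, isp_b is clean.
history = [("isp_a", 1), ("isp_a", 1), ("isp_a", 0), ("isp_b", 0), ("isp_b", 0)]
rates = isp_spam_rates(history)
print(score_post("isp_a", other_signals=0.9, rates=rates))
```

The design point is the one from the thread: the spammers' ISP concentration stays informative as a feature, and blocking only at the post level makes it harder for them to tell they were detected.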

2022-12-17 14:34:07 ...but that would be a terrible idea, because your spammers will adapt to your change in a matter of hours or days (they were using these ISPs out of convenience, not because they had to). You will have to restart from scratch -- except most of your historical data is now stale.

2022-12-17 14:33:46 Note on spam filtering ML. If you notice that all of your spam comes from the same 9 ISPs in SE Asia &

2022-12-17 13:38:03 RT @jsrailton: BREAKING: Journalist Linette Lopez is suspended. She has reported for years about Elon's dubious business practices at Tesl…

2022-12-16 19:03:03 Infinite complexity can emerge from a few simple principles working on an enormous scale https://t.co/gRlzNf8rwQ

2022-12-16 18:24:41 RT @nntaleb: The genius of Trump, Nigerian scammers, &

2022-12-16 18:12:32 If you fell for E.M.'s hypocritical "free speech" smokescreen *and* you fell for NFTs *and* you fell for memecoins *and* you fell for ivermectin and HCQ *and* you fell for Trump ...then maybe it's not a lapse in judgement. You're just that guy. https://t.co/3ssKdvkCrT

2022-12-16 18:04:54 RT @mark_dow: If you fell for Elon Musk's free speech schtick, you should be asking yourself some hard questions about your biases and gull…

2022-12-16 17:02:03 RT @dataeditor: this was the email that @elonmusk sent @drewharwell two days ago https://t.co/tqqYnxLOQa

2022-12-16 16:38:14 @_OliverStanley @svpino @OpenAI IMO you could sell it as a $20/month subscription and it would be profitable. Anyhow, the economics of LLMs are very much up in the air at this point. Wait and see...

2022-12-16 16:34:57 @svpino @OpenAI Side note: it's awesome to have a 5x year! But projecting that the 5x rate will continue in perpetuity is, uhh... not a given. Especially if the prior 5x is caused by the introduction of products that did not exist the year before.

2022-12-16 16:30:13 @svpino @OpenAI The implication here is that they made $40M in 2022 and $8M in 2021 (they assume a 5x y/y growth rate). Interesting that they're raising money... again. Why not just start charging a positive-margin price for their tech? Does it mean that positive margins aren't possible today?

2022-12-16 15:27:44 AFSA statement: https://t.co/eDo57To3mi "Mr. Herman appears to have been suspended from Twitter after engaging in reporting that is clearly an exercise of free speech rights. As one of the world’s largest social media platforms, Twitter can and must do better."

2022-12-16 08:28:25 E.M. bans a bunch of journalists critical of him, then he pops into a Space where he gets called out for lying about the official motive for the bans, can't justify himself, immediately leaves. Shortly afterwards Twitter forcefully shut down that Space. Just incredible stuff. https://t.co/LQkcqSljWY

2022-12-16 08:11:12 RT @tedlieu: Japan has the absolute right to defend itself. China unfortunately has become more authoritarian and militaristic, and Japan r…

2022-12-16 08:03:37 Steve was like the archetype of the easy-going, impartial, facts-only reporter. A very good reporter as well. This was his account, in case he's ever unbanned in the future: @W7VOA

2022-12-16 07:54:10 @MrAstroThomas Yes, I'm on sigmoid social but I don't anticipate being very active there. I'd post the link, but all Mastodon links are being blocked by our public square overlord unfortunately. https://t.co/jEx1dm2K0B

2022-12-16 07:48:11 On that theme: I've started writing long form at https://t.co/T556LQZJq3

2022-12-16 07:44:17 The Twitter debacle *might* be a good thing in the long run, as it is a stark reminder that no private company should be trusted to be the "public square". Invest in platforms you control or that you can easily migrate away from (with your audience). https://t.co/0fbg7xUvaO

2022-12-16 05:55:12 RT @NBCNews: DEVELOPING: Twitter suspends several high-profile journalists who have been covering the company and Elon Musk. https://t.co/L

2022-12-16 05:54:55 RT @kattenbarge: Twitter is currently mass-banning accounts that go against it and Elon Musk’s interests, including journalists who cover h…

2022-12-16 05:53:32 RT @oneunderscore__: Texts from an unknown sender, from discovery in the Elon Musk/Twitter suit months ago. Pretty remarkable to read afte…

2022-12-16 05:53:26 RT @oneunderscore__: Journalists who cover Elon Musk have been suspended on Twitter tonight: @Donie O'Sullivan from CNN, Aaron Rupar and th…

2022-12-16 05:45:02 Soon only the far-right pundits that E.M. is constantly replying to with feverish obsequity will be left. Truth Social but for Elon fans.

2022-12-15 19:11:28 Sure, the price of Bitcoin may be down compared to last year. But if you zoom out and go back TWO years, then... *checks chart* ...it's down too. Ok, but if you reaaally zoom out and go back FIVE years, then... *checks chart* uh, it's down as well.

2022-12-15 08:04:30 Unchecked personal power, exercised capriciously and vindictively, leads to unjust outcomes. Fair decisions can only come from principles, applied impartially and consistently, with checks and balances. I believe this applies to all communities, offline or online...

2022-12-14 21:00:00 And I'd like to salute Morocco as well for an amazing performance this year.

2022-12-14 20:58:42 Congrats!

2022-12-14 16:38:21 RT @shr_id: Life Universe https://t.co/DLCTLNTqII Explore the infinitely recursive universe of Game of Life! Works in real-time and is per…

2022-12-14 02:31:53 If you're excited about the advantages of fusion -- CO2-free, plentiful power -- look into nuclear fission power. It comes pretty close, and you can build those reactors today.

2022-12-14 02:31:34 A lot of the cost will come from operating the thermal power station and the electrical grid -- unchanged from a nuclear fission power plant.

2022-12-14 02:31:24 In a modern fission plant, fuel costs only represent ~20% of your electricity bill. In the absolute best case scenario, when fusion is perfectly mature, it will provide electricity that's marginally cheaper than nuclear fission power -- maybe 20-30% cheaper.
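The arithmetic behind the tweet above is worth making explicit. A back-of-the-envelope check (cost shares here are the tweet's illustrative ~20% figure, not precise data): if fuel is ~20% of the electricity bill from a fission plant, then even completely free fuel caps the possible savings at ~20%.

```python
# Back-of-the-envelope check: with fuel at ~20% of the fission electricity
# bill, zeroing out fuel cost saves at most ~20%. Shares are illustrative.
fission_cost = 1.0                  # normalized cost per kWh
fuel_share = 0.20                   # ~20% of the bill is fuel
non_fuel_cost = fission_cost * (1 - fuel_share)

# Best case for fusion: zero fuel cost, everything else unchanged
fusion_best_case = non_fuel_cost
savings = 1 - fusion_best_case / fission_cost
print(f"Best-case savings vs fission: {savings:.0%}")  # prints "Best-case savings vs fission: 20%"
```

In practice the savings would be smaller, since (as the thread notes) a fusion plant adds its own maintenance costs on top of the shared thermal-station and grid costs.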

2022-12-14 02:31:03 The exact economics of fusion are unclear today. But a 100M °C plasma is going to generate high maintenance costs. Fuel cost is unclear -- tritium is expensive today but fusion might make it plentiful in the future. Even assuming 0 fuel costs, fusion will be far from free.

2022-12-14 02:28:58 Fusion energy is sometimes described as "free" and "unlimited" energy. That's not quite how things work. You'll have to pay for the construction and maintenance of the fusion reactor, then pay to operate a regular thermal power station on top. Then pay for the grid.

2022-12-14 02:10:42 RT @archaeologyart: Armet with Mask Visor in the Form of a Rooster, ca. 1530. Medium: Steel. Place of origin: German, probably Augsburg. On…

2022-12-14 02:06:34 The faster things move and the more unpredictable the future gets, the more helpful it is to make decisions based on principles rather than on attempts to predict specific outcomes.

2022-12-13 21:59:01 Fusion energy is our long-term future. In the immediate term, we should further adopt fission energy, which offers nearly the exact same set of advantages as fusion (minus meltdown risks and high-activity waste, both of which are very well solved with current technology).

2022-12-13 21:56:34 RT @ENERGY: BREAKING NEWS: This is an announcement that has been decades in the making.   On December 5, 2022 a team from DOE's @Livermore_…

2022-12-13 21:14:38 In the long run, love wins.

2022-12-13 16:15:22 RT @fchollet: New tutorial on https://t.co/m6mT8SaHBD: use Textual Inversion to teach StableDiffusion about a new visual concept (such as a…

2022-12-13 15:44:35 RT @A_K_Nain: The most readable DDPM code

2022-12-13 06:01:19 Complacency dusts the soul with soot.

2022-12-12 21:27:31 It's still early days for ML tooling. Some segments have become relatively stable and standardized, but much bigger swathes remain fragmented and constantly shifting.

2022-12-12 20:12:02 RT @luke_wood_ml: I've also produced a colab showing how to train a TextualInversion token for your personal pet! Let me know what you thi…

2022-12-12 19:57:55 New tutorial on https://t.co/m6mT8SaHBD: Denoising diffusion probabilistic models (DDPM). A walkthrough of the first paper that demonstrated the use of diffusion models for generating high-quality images. https://t.co/KDsqzCVu6r

2022-12-12 19:06:59 RT @luke_wood_ml: Ian and I worked hard on this one! It’s awesome - let us know what you think!

2022-12-12 18:28:53 New tutorial on https://t.co/m6mT8SaHBD: use Textual Inversion to teach StableDiffusion about a new visual concept (such as a specific character) to generate more images featuring the same concept. https://t.co/6mpyozrPvt https://t.co/39OPDfUsyD

2022-12-12 18:20:02 TensorFlow in your spreadsheets, on your phone, on microcontrollers... https://t.co/3JJMDJvcSz

2022-12-12 06:24:21 Not unlike how deep learning models today serve as the interface between a self-driving car's sensors and its internal world model -- a template which is necessary in order to ensure reliability.

2022-12-12 06:23:05 In the longer term, LLMs will probably shine as the "conversational interface" component of much larger AI systems. LLMs for language-based mediation between a human user and a more reliable AI backend.

2022-12-12 06:20:58 In the near term, I would expect most successful applications of LLMs to cluster around copy writing &

2022-12-12 01:26:36 We are closer to this vision today than we were 7 years ago. I look forward to where we will be in 7 more years. https://t.co/kW9c4VOxbN

2022-12-12 01:25:26 The true potential of AI is to augment human intelligence -- to serve as our interface to an increasingly complex, connected, and information-intensive world. Imagine having a conversation with Wikipedia. Or a brainstorming session with arXiv. https://t.co/rlnBnMZWiW

2022-12-12 00:19:53 History shows how common this pattern has been, and how often it has led to tragedy. I take sides: I hope humanity wins.

2022-12-12 00:17:16 Performative cruelty toward an outgroup is how they affirm themselves. It is closely similar to schoolyard dynamics. Bully the "weirdo" to be seen as "one of us".

2022-12-12 00:14:51 If the far-right weren't attacking trans people like they are today, they'd find another target. The entire schtick is to find an outgroup to label as degenerate, subhuman, and dangerous to society (e.g. "groomer" panic), and harass the outgroup as a public display of cruelty.

2022-12-11 23:21:56 Sober and accurate. https://t.co/3yDFqT647Q

2022-12-11 23:04:58 If you're in SF and you're looking for a real espresso machine or an iMac on the cheap, this is your chance https://t.co/fkxNMV6iNi

2022-12-11 16:57:16 This is what happens when you live in a far-right information bubble. Niche conspiracy theories that 99% of the planet has never heard about start sounding like the most important thing ever.

2022-12-11 16:54:19 Twitter is a global app. Its owner marshalling the entire website (recommendations and all) to fight US culture wars, in the most sociopathic way possible, seems misguided.

2022-12-11 03:46:10 Fun Saturday night fact: there exists a subfield of robotics called "necrobotics", about resurrecting the dead as robots (no, really). It's based on a single study, which used dead spiders as a grasping mechanism. https://t.co/KDdytmoAvw

2022-12-10 21:18:29 @MikeIsaac This is one of the hallmarks of NPD. I can think of a few other folks following the same pattern.

2022-12-10 20:34:50 https://t.co/SRZvidh5Ma

2022-12-10 19:27:48 Does the system have to interact with humans to be perceived as useful? Or can it create value autonomously in the real world independently of whether its output is being interpreted by a human brain?

2022-12-10 19:26:50 To tell the difference between a motion-captured character and a creature capable of producing its own movement, look at its scope of applicability:

2022-12-10 19:11:48 As more and more scenes get added and the character's movement becomes increasingly complex and lifelike, you might believe it's just a matter of time until it actually transitions to being alive. But the jump simply won't happen, because a CG record is not a lifeform.

2022-12-10 18:59:33 No matter how many scenes you capture, your artificial character remains a record -- its ability to project meaning in your eyes is entirely dependent on the source material. It cannot create its own meaning. It cannot jump off the screen and live its own life.

2022-12-10 18:57:00 In both cases you are looking at a record (cognitive output or movement), cast in a new form (recombined or projected on a new CG body). The only bits that look intelligent or alive are reflections of the source material.

2022-12-10 18:53:59 Looking at the output of a deep learning model trained on human-generated data and believing the model is "intelligent" in the human sense is exactly like looking at motion-captured CG and believing the characters on the screen are "alive".

2022-12-10 18:47:37 RT @archaeologyart: Horses panel, Chauvet Cave, c. 30,000–28,000 B.C. Photographer: L. Guichard. https://t.co/6s99dun7pg

2022-12-10 17:57:22 RT @A_K_Nain: The code example for Denoising Diffusion Probabilistic Models in Keras is live on the site! What's in the code example,…

2022-12-10 02:09:27 Also why it's dangerous to redesign something from scratch (for a 3rd party) while discarding conventional wisdom. Conventions usually evolved for a reason.

2022-12-10 02:05:03 This is why some of the best designs come from people building something for their own needs.

2022-12-10 02:04:31 You can't do great design if you don't understand in depth the experience of the person you're designing for.

2022-12-09 17:12:23 Luck is a skill you can develop.

2022-12-09 16:07:54 What's wild here is that even post-democratic dictatorships like to maintain the trappings of democracy (running elections, having a parliament) because it gives them an air of legitimacy. It's rare to openly reject and mock the *concept* of democracy. That's 1930s stuff. https://t.co/ag36EmWvUf

2022-12-09 02:02:51 I see that Twitter is recommending right below my ML tweets some bangers from the likes of TomFitton, ClownWorld_, EndWokeness, etc. Fun app!

2022-12-09 01:59:19 @MelMitchell1 That's what's fun about LLMs: they're right 70% of the time, just enough so you might start trusting them if you don't know any better, then they're horrifically wrong the remaining 30% :)

2022-12-09 00:51:28 What's nice about generative deep learning is that in addition to being extremely valuable, it's really fun

2022-12-08 23:18:46 .@antoniogulli and colleagues have a new edition of their Keras &

2022-12-08 20:43:38 @levie This feels like arbitraging the fact that people are still calibrated to perceive such emails as coming from a place of human attention and thoughtfulness. The moment people start to pattern-match them as AI generated, they will prefer the 2-liner (more respectful of their time).

2022-12-08 20:11:04 Related: I can't be the only one who perceives the carefree and mindless vibe of 90s pop as an expression of the "end of history" geopolitical atmosphere of those times

2022-12-08 20:05:24 It's underappreciated to what extent recent cultural evolution has been influenced by globalized supply chains and low interest rates. Culture is downstream of logistics

2022-12-08 19:06:07 The doom loop: you have an automation system that's almost good enough, so you start relying on it -- it's cheaper and more convenient. Then you lose the ability to do things properly. Finally, your automation, which depended on high-quality examples, starts degrading.

2022-12-08 18:30:41 Abilities you don't use atrophy, so if you're going to outsource a category of tasks to someone else or to a computer, make sure they're not backed by abilities that you want to develop in yourself.

2022-12-08 16:10:49 @VitalikButerin @VovaVili @kacodes Yeah, that response from the devs seemed surprisingly user-hostile. "You're just holding it wrong" type stuff. If that pattern is widespread in user code, then it's worth fixing.

2022-11-15 16:51:40 Firing people for disagreeing is a bit like disabling unit tests because they no longer pass

2022-11-15 16:49:14 Much like you would use continuous integration and have full unit test coverage so you can *make changes with confidence*, you should create an environment of trust where folks can safely voice dissenting opinions (respectfully), so you can *debate ideas with confidence*...

2022-11-15 16:43:55 Children constantly notice similarities between things around them and make analogies. But they don't do so passively -- they seek analogies they can act on. Often, they *are* the analogy -- they imitate everyone around them.

2022-11-15 02:51:33 The third edition of @aureliengeron's book is out -- go get it! https://t.co/vrYQ3twL5U

2022-11-14 23:00:36 RT @fchollet: I'm starting a newsletter. It's called "Sparks in the Wind": mostly ephemeral random thoughts -- but with a small chance of s…

2022-11-14 20:43:17 In the words of Hannah Arendt, "the sad truth is that most evil is done by people who never make up their minds to be good or evil."

2022-11-14 20:41:05 This is the exact same logic that leads to mass ecosystem destruction and catastrophic climate change. "It's justified to destroy everything because shareholder value" is certainly an interesting code of ethics.

2022-11-14 20:39:47 I see some folks argue "actually this is good because the board has a duty to maximize shareholder value." So it is "good" for a company to destroy the thing it spent 15 years building, put all employees out of a job, and harm society in the process, because "shareholder value"?

2022-11-14 20:13:57 As we watch Twitter circle the drain increasingly fast, I hope everyone remembers that this is only happening because the Twitter board and shareholders entered a legal battle to force E. M. to acquire it. Neither users, employees, nor even the acquirer wanted it to happen. https://t.co/yTuGaIyGvj

2022-11-13 22:00:38 I'm starting a newsletter. It's called "Sparks in the Wind": mostly ephemeral random thoughts -- but with a small chance of starting a fire. Posts are going to be similar to my Twitter threads, but longer and more polished. First post is on education: https://t.co/XsxLYBdQ2e

2022-11-13 20:03:45 Not a great time to be one of those clowns -- reality has caught up with the sociopathic conspiratorial fantasies you used to escape it, and all you're left with is a growing sense of being the butt of the joke. https://t.co/rGh1vhW3Gn

2022-11-13 19:42:40 RT @nntaleb: The conspiratorial cluster: 1) Cryptohead, 2) ProPutin (while libertarian), 3) Trumpologist Jan6er Election deniers, 4) Anti-…

2022-11-13 18:45:11 RT @CaseyNewton: You don’t have to treat people this way https://t.co/YLBJdJqvkH

2022-11-13 18:45:08 RT @CaseyNewton: Getting word that a large number of Twitter contractors were just laid off this afternoon with no notice, both i…

2022-11-13 16:34:51 @drgurner Definitely -- being broke is bad for your mental health. No need to speculate.

2022-11-13 16:32:14 Having too much money is a mental health hazard.

2022-11-13 15:28:03 RT @bradyafr: Ukrainian forces have liberated more than 64,000 square kilometers of territory since April, maps from @criticalthreats and @…

2022-11-12 22:18:16 (he knows words in 3 languages, these are some of the English ones)

2022-11-12 22:17:51 My 1.5 year old can only pronounce words with 1 or 2 syllables -- he will abbreviate longer words so they fit, by only keeping the two most accentuated syllables. Dinosaur becomes "dai-so", apple becomes "a-po", but apple pie becomes "a-pai"

2022-11-12 16:21:36 @eriknordeus Also -- never assume the future will be like the past.

2022-11-12 16:05:52 RT @fchollet: You can tell crypto is a great investment because of all the billboards and TV ads telling you to buy it (~$1B ad spend in 20…

2022-11-12 15:34:16 @eriknordeus Doing it in "percent from the top" terms can be misleading -- when you go from -80% to -90%, you're actually crashing by another 50%
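
The arithmetic here is worth spelling out; a quick sketch (the peak value of 100 is an arbitrary illustration):

```python
# Peak-relative drawdowns compress later declines: the move from -80% to
# -90% "from the top" is only 10 points, but it halves the remaining value.
peak = 100.0                      # arbitrary reference peak
at_minus_80 = peak * (1 - 0.80)   # ~20.0 left at -80% from the top
at_minus_90 = peak * (1 - 0.90)   # ~10.0 left at -90% from the top

# The incremental crash, measured from the -80% level:
incremental_drop = 1 - at_minus_90 / at_minus_80
print(incremental_drop)  # 0.5 -> another 50% crash
```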

2022-11-12 15:14:47 Keep in mind that crypto is only midway through its current crash. The FTX meltdown is not where it stops.

2022-11-12 14:01:37 The crypto/web3/metaverse narrative pressure one year ago was really intense. These were weird times -- a total mania. In times like this you need to follow your own personal compass instead of blindly jumping on what everyone else is doing/funding. https://t.co/x1blZJKLIM

2022-11-12 13:44:58 On Google trends, search interest for "NFT" is down 90% from its peak one year ago. https://t.co/wzYWkXd8mB

2022-11-12 13:35:24 It's hard to remember in the current environment, but one year ago publicly saying that crypto was an insane bubble and that "web3" made no sense would trigger very aggressive pushback -- sometimes from powerful VCs.

2022-11-12 13:32:02 One year anniversary of this tweet, posted at the height of the crypto bubble -- which triggered dozens of aggressive quote-tweet reactions at the time. It tracks completely. https://t.co/lht4Rp59hw

2022-11-11 23:46:12 A recurring pattern these past few years: powerful people who were literally *made* by the mainstream media railing against the mainstream media, calling it "the enemy of the people", etc. Frankenstein monsters turning against their creator. There's a lesson for journalists here.

2022-11-11 23:03:53 RT @robbie_andrew: Today the Global Carbon Project releases the 2022 edition of the Global Carbon Budget, a comprehensive assessment of our…

2022-11-11 22:47:41 @casassaez @kovasb If you're a company that uses Keras and you face this use case (large scale adapt() of Keras preprocessing layers), would you consider working with us to implement it? We're a small team and we don't have the resources for this at this time...

2022-11-11 22:25:54 @kovasb Correct, the underlying API / infra is designed to potentially allow Beam-style computation. We have not implemented it and it's currently deprioritized (the current adapt() is serial and single-threaded). But we could if there's demand in the future -- the design is there.

2022-11-11 22:21:43 RT @kovasb: Keras has the best design sense for the 80% use case of any system I've seen since Mathematica. Assuming performant implementa…

2022-11-11 22:20:27 @kovasb The fact that we're able to do everything as part of the TF graph is really nice -- Python slowness is never an issue. There's no need for us to rewrite anything in, like, Cython or Rust.

2022-11-11 22:19:09 @kovasb We could rewrite it for scale if there's demand though. But typically these datasets are small and the state computation part is cheap.

2022-11-11 22:18:09 @kovasb The adapt() implementation won't scale to very large datasets, though. If you have a >

2022-11-11 22:16:10 @kovasb All the work is done in Keras preprocessing layers, which are implemented in TF ops (everything is 100% in-graph!) so it's highly performant. During training (presumably on GPU/TPU) you'd use async preprocessing in TF data to avoid CPU preprocessing being a bottleneck.

2022-11-11 22:06:03 @cwarzel Early 2017 vibes

2022-11-11 17:24:35 If you use FeatureSpace in a Kaggle competition, let me know! I will send a signed copy of Deep Learning with Python 2E to the authors of the first 5 Kaggle notebooks using FeatureSpace that reach 20 upvotes. (Just email me!) https://t.co/vVJelq6auP

2022-11-11 17:20:51 This is "progressive disclosure of complexity" in action: at the highest level, the API is super simple. Here's my feature name, here's its type. But you can dive deeper and configure things further, incrementally. And then deeper still. Power users have full flexibility.

2022-11-11 17:18:51 Going further, you can even specify your own preprocessing layers. Let's say one of your features is a text paragraph, and you want to encode it as a TF-IDF vector to be concatenated with your other features. Easy! Create a TextVectorization layer and pass it to the FeatureSpace. https://t.co/53RF0YzIB1

2022-11-11 17:14:58 The neat thing about FeatureSpace is that it's a whitebox. It's all built on top of Keras preprocessing layers, and you get direct access to them. Want to retrieve the StringLookup layer that was used to encode a string_categorical feature? You got it. https://t.co/SvjjQ9OpXE

2022-11-11 17:10:00 Categorical features and crosses let you specify whether you want to return them as integers or as one-hot vectors (the default).

2022-11-11 17:09:10 By default, the FeatureSpace returns concatenated features (note that categorical features are one-hot encoded before concatenation by default). If you want to do further preprocessing of each feature, you can also return a dict of individual features and feature crosses. https://t.co/Ed74FW9kwp

2022-11-11 17:04:06 You're not limited to single features -- you can also leverage *feature crosses*. A "cross" is a categorical hash of the combined values of a set of categorical features. This enables a small model to consider feature interactions. https://t.co/DCDR5taJJj
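
The idea of a categorical hash cross can be sketched without any framework. A minimal illustrative stand-in (the `cross` helper, hashing scheme, and bin count are hypothetical, not the Keras implementation):

```python
import hashlib

# A "cross" hashes the combined values of several categorical features into
# a small fixed number of bins, so a simple model can learn interactions.
NUM_BINS = 8  # illustrative crossing dimension

def cross(*values, num_bins=NUM_BINS):
    # Combine the categorical values into one string, then hash into a bin.
    combined = "_X_".join(str(v) for v in values)
    # md5 gives a stable hash across runs (unlike Python's builtin hash()).
    digest = hashlib.md5(combined.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_bins

bin_a = cross("male", "age_30s")
bin_b = cross("male", "age_40s")
assert cross("male", "age_30s") == bin_a  # deterministic bin assignment
```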

2022-11-11 16:53:43 To configure the preprocessing of each feature in a more fine-grained way, you can switch from a string type (like "string_categorical") to a method (like `FeatureSpace.string_categorical()`), which exposes useful arguments, like whether to reserve an index for OOV tokens. https://t.co/6OJG2UN4px

2022-11-11 16:50:37 Once that's done, you can call the FeatureSpace on your data to retrieve encoded feature values. You can also `map()` the FeatureSpace into a TF Dataset. And you can incorporate the FeatureSpace in a Keras model. It exposes a dict of Keras Inputs, and corresponding outputs. https://t.co/zzFklLYS9s

2022-11-11 16:47:29 You need to `adapt()` the FeatureSpace to the training data before you start using it. When you do this, the FeatureSpace indexes the set of possible values for categorical features, and computes the mean/variance of features to be normalized. https://t.co/xC3MS0yEbu
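
The state that `adapt()` computes can be mimicked in plain Python. A framework-free sketch of the idea (the `adapt` helper and sample data are hypothetical, not the Keras code):

```python
import statistics

# For each categorical feature, index the set of observed values; for each
# numerical feature to be normalized, record its mean and variance.
train_samples = [
    {"color": "red",  "price": 10.0},
    {"color": "blue", "price": 30.0},
    {"color": "red",  "price": 20.0},
]

def adapt(samples, categorical, numerical):
    state = {"vocab": {}, "norm": {}}
    for name in categorical:
        # Sorted for a deterministic value -> index mapping.
        values = sorted({s[name] for s in samples})
        state["vocab"][name] = {v: i for i, v in enumerate(values)}
    for name in numerical:
        column = [s[name] for s in samples]
        state["norm"][name] = (statistics.fmean(column),
                               statistics.pvariance(column))
    return state

state = adapt(train_samples, categorical=["color"], numerical=["price"])
print(state["vocab"]["color"])  # {'blue': 0, 'red': 1}
```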

2022-11-11 16:44:07 To use a FeatureSpace, just list your data fields and specify how you want to preprocess/encode them. For instance, `"my_feature": "integer_categorical"` means "index the set of possible integer values for 'my_feature' and encode it as a categorical feature" https://t.co/ugBGHBWra8

2022-11-11 16:32:10 New tutorial on https://t.co/3la4cADqcR: using the FeatureSpace utility for tabular data preprocessing https://t.co/xS5j5PjjTL

2022-11-11 12:18:54 RT @mark_dow: Reminder: If you pimped for Trump and you pimped for crypto, your judgment—financial or otherwise—cannot be trusted. And this…

2022-11-11 11:31:54 RT @RisingSayak: This has been in the works for MONTHs now! Finally, it's in a good shape and is ready to be shipped @algo_diver &

2022-11-11 02:39:13 @test_boo For the record, I was pushing back against crypto right at the height of the bubble (and before that), and faced heavy pushback for doing so. https://t.co/pcESITlcpg

2022-11-11 02:32:41 https://t.co/bEit99d2xJ

2022-11-11 02:10:29 Markets have ups and downs. But the exuberance phase of each risk cycle breeds purely speculative "bullshit assets" -- those are the assets that crash the hardest at the end of the cycle, and that, unlike equities, never come back up.

2022-11-11 00:08:29 My intuition here is that a plain Twitter clone won't cut it. You have to address the same needs, in a similar way, but with a new twist that makes it feel fresh and just better.

2022-11-11 00:02:00 @azeem Use LLMs to fake users til you make it. Actually no, please don't do that. Horrible idea.

2022-11-10 23:59:36 Twitter is ripe for disruption. But for most people, Mastodon is clearly not the answer. Given how salient the opportunity is, I assume multiple startups must be booting up a Twitter clone right now.

2022-11-10 23:03:25 @MateiCaleb @Plinz The same way Gmail solved the spam problem. With ML flagging at account creation time and at posting time, and an account reputation system.

2022-11-10 22:00:17 @Plinz If you do napkin math on what is publicly known about Twitter's revenue (and recent loss of ad revenue), opex, and debt interest, you do see that bankruptcy within a couple of years is a possibility. Not saying it will happen, but I certainly can't rule it out.

2022-11-10 21:28:15 It may be time for Twitter power users to start diversifying their social media presence. Just in case... https://t.co/Aa11fPNPfO

2022-11-10 21:26:17 @LuisvonAhn @duolingo Insane growth. Congrats!

2022-11-10 20:38:43 RT @TensorFlow: #TFCommunitySpotlight Winner: Chansung ParkChansung's project shows how to build a Machine Learning pipeline for a visi…

2022-11-10 20:03:09 RT @levie: What if, and I’m just spitballing here, the issue with FTX was the normalization of trading fake assets that anyone can invent o…

2022-11-10 17:39:26 The FeatureSpace utility is now available in tf-nightly: https://t.co/REfAKN6Bww https://t.co/j1JWNtWXet

2022-11-10 16:56:24 RT @petefrasermusic: First account is the verified official one. Second is a $8 fake thanks to the Musk subscription. Easy to tell when y…

2022-11-10 16:54:00 There's no easy way to get to more general AI. But that doesn't mean there's no way.

2022-11-09 22:48:39 @MLStreetTalk @randall_balestr Intuitively I think that the distribution of generalization degrees for real systems is bimodal: some are mostly static information-retrieval type (e.g. curves), some are capable of on-the-fly adaptation. So the distinction is meaningful, but it's not a binary -- it's a spectrum

2022-11-09 22:32:00 @MLStreetTalk I don't think a hashtable has any understanding of what it's doing. But most models are not hashtables -- they can generalize to some degree of novelty. They can recompose what they know.

2022-11-09 22:30:34 @MLStreetTalk IMO there is no binary distinction between a model (understanding) and a model-generation process (intelligence) because any model with non-zero generalization ability will have to "recompose knowledge" to handle new inputs. A "binary semantic map" is just a hashtable.

2022-11-09 22:26:26 As more people rely on your service, stability and dependability become some of your most important features, and you have to change the way you innovate. It's just a normal part of the lifecycle of any product.

2022-11-09 22:24:53 Innovating in production is a great idea when you have low risk and high upside (i.e. you have few users and they don't critically depend on your service, and you still don't have product-market fit).

2022-11-09 22:20:18 Deep learning models have very low, but non-zero intelligence and understanding. They can adapt to a weak degree of novelty (local generalization). They remain limited to inputs close to what they've been prepared for -- their training data.

2022-11-09 22:14:06 As with most cognitive traits, understanding is a matter of degree, not a binary. Your model of X will generalize to some degree -- your degree of understanding of X.

2022-11-09 22:13:05 A system that does not understand will be limited to responses that the system must have been explicitly prepared for -- for instance, an animal's innate reflexes do not reveal "understanding". A system that understands will adapt to unanticipated novelty.

2022-11-09 22:11:18 You can test whether a system is actually understanding or not by probing its ability to adapt to novelty and uncertainty. "Understanding-driven behavior" is opposed to "reactive/reflexive behavior".

2022-11-09 22:10:40 "Understanding X" roughly means having an internal model of X that you can use to generate appropriate behavior with regard to X. The purpose of understanding is the ability to adapt your behavior to X-related situations you may not have seen before (hence the need for a model).

2022-11-09 22:10:10 Maybe it's just semantics, but I don't think you can draw a clear distinction between "understanding" and "intelligence". If you understand a task or a thing, then you possess intelligence with regard to that particular bit of the universe. It's the same concept.

2022-11-09 22:05:09 @mark_dow BREAKING: Bitcoin.

2022-11-09 18:23:04 Fortune does not favor impulsive gamblers running on a diet of FOMO, memes, and conspiracy theories

2022-11-09 04:32:42 When you set the sampling temperature too high on your language model https://t.co/v52MR6fWAc

2022-11-08 21:23:46 RT @aureliengeron: Just received the first copy of my book (3rd edition), woohoo! You can get it at:https://t.co/GCauRyWoCIPlay with…

2022-11-08 21:14:54 RT @MLStreetTalk: New show! Consciousness and the Chinese Room with J. Mark Bishop, @fchollet, @davidchalmers42 , @Plinz https://t.co/F2e

2022-11-07 20:21:27 Much like it is possible to build a successful business on the back of a very strong personal brand, it's possible to poison a business by making your personal brand radioactive.

2022-11-07 15:08:52 To reach your target, you must first build the right process. But to build the right process, you must focus on the target, not on the process. It's too easy for processes to take on a life of their own and forget the goal they were serving.

2022-11-06 22:14:51 Tbh I'd pay to see fewer clowns. On most days my mentions are full of them https://t.co/ttD5v0JxI0

2022-11-06 17:35:05 It can be worth engaging with bad faith arguments, not because you'll convince those who make them (you won't), but because you might help bystanders who would otherwise have fallen for them.

2022-11-06 17:33:31 RT @TimothyDSnyder: I have been hearing the idea from some Republicans that Ukrainian resistance comes at a cost to Americans. Nothing coul…

2022-11-06 01:01:00 RT @ThatSaraGoodman: The thing I really like(d) about Twitter is how it "democratized" academia, amplifying experts outside of top-10 netwo…

2022-11-05 20:51:12 @azeem The notion that they have a "relevance knob" and that it wasn't previously turned to the max... is certainly something. (Of course there's no reality to the claim. They're pitching something they can't deliver.)

2022-11-05 20:42:56 That's $100M ARR, nothing to scoff at. But with Twitter revenue at $5B in 2021, it won't make a dent.
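
The napkin math is easy to reproduce from the figures in this thread (the 15M power-user top-of-funnel, ~1M subscriber count, and $8/month price are the thread's own estimates):

```python
# Back-of-envelope subscription ARR from the thread's own figures.
power_users = 15_000_000        # estimated top-of-funnel of power users
subscribers = 1_000_000         # "most likely ~1M" net subscribers
price_per_month = 8             # $8/month

arr = subscribers * price_per_month * 12
print(arr)  # 96000000, i.e. roughly $100M ARR

# Implied conversion, ~6.7%, sits inside the 5-14% range discussed:
implied_conversion = subscribers / power_users

# Against ~$5B of 2021 revenue, that's about 2% -- "won't make a dent":
share_of_revenue = arr / 5_000_000_000
```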

2022-11-05 20:41:48 Since these 15M users are already power users, I would expect the conversion rate to be high. Maybe ~5-6%, and up to a maximum of 13-14%. So IMO a better Twitter Blue with a compelling value prop at $8/month could get net 500k to 2M subscribers -- most likely ~1M.

2022-11-05 20:37:47 (Note that this is very much unlike YouTube Premium, which creates value for content *consumers*. The value prop is basically "you don't see ads".)

2022-11-05 20:36:58 One last thing. Imagine we're in an alternate reality where the plan is, "we're making a better Twitter subscription and charging more for it". How much money could it make? You need to know the top-of-funnel and you need to estimate the conversion rate. Let's take a look.

2022-11-05 19:57:14 The rest of the pitch is actually more concerning. "Pay for reach", in particular. So from now on my followers won't see my tweets, and the tweets I see will be from people who paid for me to see them (i.e. some form of advertising)? Ok then...

2022-11-05 19:54:27 The pitch is very much "you get a checkmark". Why would you want one? Because it used to be scarce. You know, in the times before. Good luck with the plan, I guess...

2022-11-05 19:51:40 If Twitter wants to sell subscriptions, it should focus on adding useful features to Blue. And if it wants to open up ID verification to anyone for a fee, why not. I don't think most people will want to get their ID verified though. But again -- that's not the current pitch.

2022-11-05 19:49:25 This is made explicit in the description: "get a checkmark, just like the celebrities you follow". But this confusion will last, at best, a few days. You'll be able to scam a few marks for a few days, and that's it. I don't see how that's a sound business model.

2022-11-05 19:48:03 "You can pay a subscription to get a public proof that you're paying the subscription" isn't a value proposition. The pitch seems to be playing on the short-term confusion that people will feel as the meaning of the checkmark switches from status symbol to proof-of-subscription.

2022-11-05 19:46:35 The pitch is, "you pay $8/month, you get a checkmark". Historically, the checkmark has meant 2 things: (1) you are who you say you are; (2) you are notable or a journalist. That last part made it a bit of a status symbol for some people.

2022-11-05 19:42:44 Can't say I understand the value proposition. If Twitter had said, "we're going to make Twitter Blue 2x more useful and start charging 2x more for it", that would have made sense. It wouldn't have made much money, but it would have made sense. But that's not the pitch at all. https://t.co/JwzyJhXPyp

2022-11-05 16:46:16 @RobertLooman1 MOSFETs are far simpler still.

2022-11-05 16:44:39 The more sophisticated the ecosystem, the harder it becomes to replace the substrate it was built upon. See also: deep learning and GPUs

2022-11-05 16:43:51 Side note -- much like the evolution of biological systems is limited by their substrate (organic chemistry), so is the development of human technology limited by the substrates it settled for (such as modern computer architectures)

2022-11-05 16:39:10 Humans as a collective can develop some incredibly sophisticated systems (your computer and the software running on it is proof of that), but those still remain orders of magnitude short of the superhuman sophistication of biological systems. For now, at least.

2022-11-05 15:36:12 @antirez Enjoy the book :)

2022-11-05 15:34:04 RT @antirez: Strongly recommend. Arrived one week ago, read a few chapters in the middle. Outstanding. Great work @fchollet

2022-11-05 03:16:48 I'm preparing a new Keras utility to help encode &

2022-11-05 00:29:56 And if you can't seem to get people to trust you... don't attempt to bully them into submission. It's not likely to work, and even if it might seem to help you in the short term, it makes the underlying fundamental issue worse. Trust, like respect, must be earned.

2022-11-04 20:26:08 Any sufficiently politicized issue morphs into a personality test -- with a bimodal outcome.

2022-11-04 19:52:33 RT @ShannonRSingh: Yesterday was my last day at Twitter: the entire Human Rights team has been cut from the company.I am enormously proud…

2022-11-04 18:28:52 I'm slightly pessimistic as to how this will impact the service. "Surely 3,500 people should be enough to run an app like this", you may say. Probably! But well over 50% of the remaining 50% are currently looking for new jobs. And 100% are completely distracted right now.

2022-11-04 18:25:34 If you were one of the 50% of Twitter employees laid off today: I wish you a quick rebound. You didn't deserve to be treated like this. Thank you for your work -- Twitter has delivered a ton of value for millions of people. Your work mattered.

2022-11-04 15:14:03 When people can't see the road ahead, trust in the driver is all they have. If that's lost, they're going to want to get out.

2022-11-04 15:13:32 Trust is your most important asset. It's easy to destroy, hard to rebuild. In periods of fast change and extreme uncertainty, focus on preserving trust.

2022-11-04 01:45:43 RT @luke_wood_ml: Retweeting for reach! Happy to help anyone who is interested get started!

2022-11-04 01:45:42 RT @LandupDavid: Lots of good stuff brewing here

2022-11-03 23:14:05 @WBreadenMadden The general idea of open-source software is that anyone can use the software for free, and anyone can contribute to it if they so choose. This model has worked extraordinarily well so far and is directly responsible for the fast progress we're seeing in ML today.

2022-11-03 23:06:36 If you're interested in contributing to KerasCV, our new and fast-expanding CV package, we have a number of features that are open for contributions: https://t.co/CmghXSSME1

2022-11-03 17:56:17 RT @divamgupta: Introducing a new version of DiffusionBee - Stable Diffusion app on Mac with all cutting-edge features.- Easy to use and…

2022-11-03 15:01:54 Emotionally maladaptive people who pathologically crave external validation are at risk of becoming an empty shell merely reflecting whatever their current partner/mentor wants them to be. You've got to exist for yourself and figure out what you stand for when no one's looking...

2022-11-03 02:17:42 @FatMoth It's easy to fork the repo -- in this case you just need to change 2 lines in the main file (you'll find it)

2022-11-03 01:49:32 For the record, I have no intent to leave Twitter (as I've previously said). But ultimately I get the feeling that it doesn't matter much if you leave Twitter or not, because Twitter is eventually going to leave you.

2022-11-03 01:47:52 If you work at Twitter, you have my sympathy. Thank you for your work so far -- I have really been enjoying the app and I've derived a ton of value from it over the years. Grateful. And good luck with what's next! https://t.co/bQ9oSiExwt

2022-11-02 23:52:51 I think Keras does a good job at that. Our goal is to be the absolute fastest way to assemble a solution to any applied ML problem. IMO "time to solution" is a great success metric -- it incorporates: docs quality, API intuitiveness / learning curve, and debugging experience.

2022-11-02 23:49:15 The best way to make sure they have a good experience doing it, is to provide them with a language that stays as close as possible to the way they intuitively think about their problems.

2022-11-02 23:49:14 Programming is all about thinking clearly about a problem. The APIs you make are a kind of language that people are going to use to express solutions to their problems.

2022-11-02 23:42:48 The best way to get useful feedback about your library is to sit down with your users and watch them try out new features. This gets you so much more information compared to offline feedback.

2022-11-01 17:34:35 The moderation/filtering system of a social app is one of its top 3 defining features -- alongside its user community and its recommendation algorithm.(Doesn't necessarily have to mean taking down content though...)

2022-11-01 14:10:35 Trust and respect are the foundation of all relationships, personal or professional. And both must be earned.

2022-11-01 03:51:36 The line between "good person" and "bad person" is often blurry. One place where it can be drawn clearly: when one derives gleeful enjoyment from the suffering of others. E.g. mocking assault victims. https://t.co/nP2Uf4DmXS

2022-10-31 18:45:46 RT @RisingSayak: Delighted to release the @TensorFlow port of MAXIM, a single backbone capable of denoising, dehazing, deblurring, and more…

2022-10-31 17:30:45 Dressing up as Human Pose Estimation for Halloween https://t.co/fykzDaH47O

2022-10-31 15:23:26 Given that all the big tech companies are either trimming their workforce or not hiring right now, the next few years are going to be a golden age for startups.

2022-10-31 04:05:55 Right now checkmarks are free, but scarce, and as a result they signal "I'm important", which is what makes them desirable. If they start signaling instead "I'm a tool who pays for status", they're not going to be that desirable anymore. https://t.co/CDzEkDhr0G

2022-10-31 00:26:54 Imagine a Twitter experience that focuses on search and custom algorithmic timelines. Enable users to find the best content, no matter when it was posted.

2022-10-31 00:25:08 Twitter as it is now is too much of a liquid frontier. It has a ton of great evergreen content, but you never see it, because it is buried deep in people's timelines.

2022-10-31 00:23:09 I'd love to have a temporal relevance filter on Twitter. "Show tweets about current affairs", "Show tweets that will be relevant for several years", "Show evergreen tweets", etc. Generally speaking, there's a lot that could be done to enable users to customize their feed.

2022-10-30 20:30:47 RT @fdnieuwveldt: @fchollet The well designed Preprocessing API enabled us to implement Pipelines similar to Sklearn's, but natively as pur…

2022-10-30 20:30:44 @fdnieuwveldt Pretty cool!

2022-10-30 18:26:12 @other_musings @elonmusk I'm not. I'm repeating a point I have made a few times before: the system that makes the software is what matters, not the software itself. Not everything has to be about the Current Thing. https://t.co/S8H09rFa49

2022-10-30 03:41:41 @CastleQueen007 The code is available on GitHub here: https://t.co/QXQOV3gSko Enjoy the book!

2022-10-30 03:36:42 Code is downstream of processes &

2022-10-30 03:26:20 The most important form of capital in any organization is human capital.

2022-10-30 03:25:44 Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component of these processes is people. The code is just a by-product. More of a liability than an asset.

2022-10-28 18:45:09 The silver lining of dire situations is that they nudge you to reprioritize and refocus on what's actually important. Family and health -- the rest is noise.

2022-10-28 15:31:55 We bought a children's picture book about vehicles (marketed as ages 3+), and it turns out it has an explainer about the 5 levels of autonomy. Perfect. https://t.co/o40i508pYK

2022-10-27 19:33:52 Tomato juice is probably one of the top 3 most chaotic things you can order as your in-flight drink

2022-10-27 14:31:27 This is how I explain overfitting now. https://t.co/lHE3p21RgR

2022-10-27 02:04:31 You get more of what you incentivize. So be deliberate about it.

2022-10-27 01:57:05 One thing Twitter is really good at: getting people addicted to being outraged at things

2022-10-26 19:14:27 Very neat project featuring high-level, mid-level, and low-level APIs for computer vision systems across a wide range of use cases: https://t.co/3KdK9FHDsN https://t.co/8Lc3ojHv1i

2022-10-26 18:36:22 @antgoldbloom Happy to answer any Keras questions you might have :)

2022-10-26 12:14:32 Great teams repeatedly ship great features. Your culture, your processes, and most of all your people provide you with a far more durable advantage than any feature or technology.

2022-10-26 12:11:28 On the topic of cheap clones. Any feature of your product can be cloned (and will be, if it's any good). But a great team culture and a finely-tuned design process are things that are very hard to replicate and that provide a long-term differentiating advantage.

2022-10-26 01:22:06 RT @gusthema: . @kaggle made a very cool announcement and enabled:VMs with 2 NVIDIA T4 are now availableYour question now is: How d…

2022-10-25 19:39:15 It's dramatic how much more fancy you feel when you drink coffee out of a nice porcelain cup as opposed to a paper cup (once in a while...)

2022-10-25 17:30:49 Multi-GPU (2x T4) now available with Kaggle Notebooks! For free :) https://t.co/5LRNYzDgQz

2022-10-25 17:07:01 You can tell a lot about someone from who they admire

2022-10-25 15:38:49 PyTorch-Lightning is what you get when you order Keras from AliExpress

2022-10-25 04:02:21 @amasad Arguably also true for image generation: while there are countless images on the web, there aren't that many high-quality ones (what you actually want to generate)... Once you train on the top 1B you're done.

2022-10-25 01:00:24 Very fun NumPy fact: if you have a dict-like NumPy object such as `obj = np.array({"1": 1})`, you can't convert it back to a dict via `dict(obj)`. You need to do, counterintuitively, `obj = obj.tolist()`, which returns a dict.
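
A minimal demonstration of the behavior described (requires NumPy):

```python
import numpy as np

# Wrapping a dict in np.array() produces a 0-dimensional object array.
obj = np.array({"1": 1})
assert obj.shape == () and obj.dtype == object

# dict(obj) fails: a 0-d array is not iterable.
try:
    dict(obj)
except TypeError:
    pass

# .tolist() on a 0-d object array unwraps the scalar -- here, the dict.
restored = obj.tolist()
print(restored)  # {'1': 1}
```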

2022-10-24 21:27:50 @mihaimaruseac I mean, it's also capitalist architecture. It's utilitarian, "I don't care" architecture, more broadly.

2022-10-24 21:19:14 I'm not a fan of clusters of buildings that are carbon copies of one another. It feels like a statement denying the possibility of individuality, a statement that everything is fungible and replaceable.

2022-10-24 19:54:22 @StillTr05207382 I do agree that intelligent reasoning is only a small slice of human cognition overall (though it is distributed and constant, not confined to specific times in the day), but it's the really important slice :)

2022-10-24 19:52:05 @enceladus2000 Yes, there have been multiple efforts, from early attempts leveraging the GPT-3 API, to much more sophisticated efforts with cascades of LLMs to describe the tasks, fed into code-generation models to produce candidate solutions, etc. Unpublished unfortunately (it didn't work!)

2022-10-24 02:21:17 @Plinz The most interesting solutions I've seen so far were not genetic, and some didn't even feature a DSL (direct-to-output)...

2022-10-24 01:56:00 In AI, it's often the case that beating a benchmark says more about the benchmark than about the AI.

2022-10-23 14:02:18 RT @svpino: I spent 5 days reading everything I found about image generation. Stable Diffusion is one of the most impressive systems I’ve…

2022-10-22 23:56:43 The only way to make an informed choice is to do your own hands-on research.

2022-10-22 23:55:54 Do NOT make a choice based on peer pressure or social media chatter. Actually compare equivalent code examples side by side in different frameworks. Actually try to write part of your codebase with each option. See how fast they run. See how elegant the code is (or not).

2022-10-22 23:54:23 Most important factors IMO: (1) how maintainable the framework makes your codebase (concise, simple, extensible); (2) how quickly it enables you to get to a solution (documentation, debuggability); (3) how fast / efficient it makes your models.

2022-10-22 23:50:05 An ML framework that shrinks your codebase from 3,500 lines to 1,000 lines saves you hours of work every week. An ML framework that increases your device utilization by 20% saves you $100k on a $500k training job. It adds up. Pick your tools wisely.
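
One way to read the utilization claim, sketched out (assuming "utilization up 20%" means the same job takes 1/1.2 of the device-hours):

```python
baseline_cost = 500_000          # $ cost of the training job
utilization_gain = 0.20          # 20% better device utilization

# Same total compute, delivered in 1/(1 + gain) of the device-hours:
new_cost = baseline_cost / (1 + utilization_gain)
savings = baseline_cost - new_cost
print(round(savings))  # 83333 -> on the order of $100k
```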

2022-10-22 21:32:37 I'm not that attracted to the idea of building autonomous AI agents. I want to build an on-demand "better brain", through which humans and machines could co-think together.

2022-10-21 21:18:52 The views from the Google office in SF aren't too shabby... https://t.co/BHFSyO9nTC

2022-10-21 21:15:37 I think the general public would gain from understanding both the limitations *and* the benefits of this big wave of change. But arguably, the benefits are already covered by corporate PR-driven journalism, and there is a much greater need for raising awareness of limitations...

2022-10-21 21:13:07 Yoshua and I were interviewed for an @ARTEfr documentary on AI. I thought the film was pretty good -- explaining things clearly for the general public and raising awareness of some of the limitations and potential risks of modern AI. https://t.co/cBbHpgKUJn

2022-10-21 18:33:56 RT @ach3d: Excellent @ARTEfr documentary on AI, with very clear and intelligent explanations from @fchollet and Yoshua Bengio in par…

2022-10-21 15:58:16 Tune in! https://t.co/EqtbrQkOlU

2022-10-21 15:19:24 There are many efficiency-related reasons for building dense and human-centric cities, but IMO life quality is the single best argument.

2022-10-21 14:24:16 RT @fadibadine: Next version @TensorFlow is coming in 2023 with a focus on 4 pillars:- Fast &

2022-10-21 14:24:11 RT @lak_luster: Great roadmap. Loved seeing the explicit statement about backwards compatibility, and norming to numpy api. Excited about X…

2022-10-21 00:23:09 RT @humphd: This is great to see, esp the commitment to stability and improving edge deployments. I’m especially interested in tensorflow…

2022-10-20 23:24:07 (Cover image generated with keras_cv.models.StableDiffusion)

2022-10-20 23:02:03 Over the next decade, deep learning will get deployed to every problem it can help solve. We're setting out to build the foundations of that wave of progress. https://t.co/w5uTpjqEq0

2022-10-20 22:56:56 Today, I'm really excited about what's next for TF and Keras. We are doubling down on the greatest strengths of the frameworks and their ecosystem. To our users and contributors: thank you for being a part of the journey!

2022-10-20 22:55:10 Backwards compatibility in particular is something I care about enormously. I wasn't happy with the 2017 decision of rolling out a new TF that changed the programming model &

2022-10-20 22:52:06 We've started work on the next iteration of TensorFlow. Faster, XLA-compiled, scalable to ultra-large models. Focused on applied ML and production use cases. Fully backwards compatible, no break of continuity... and much more. Read all about it: https://t.co/hNkMoVAJdF

2022-10-20 20:42:20 People overestimate the impact govt policy had on health outcomes during the pandemic. Countries that fared better were mostly those where individuals behaved more responsibly (masking, getting vaccinated), and govts only have limited ability to influence individual behavior...

2022-10-20 17:25:39 Open source isn't just important to modern AI. It's existential. The progress we've seen in the past ten years in applications and research wouldn't have been possible without a strong open source movement.

2022-10-20 17:14:47 There are electric lines above the Caltrain lines now, that seems to be new... Unfortunately the trains themselves are still diesel museum pieces

2022-10-19 22:29:14 @rbhar90 The wall arises in cases where no amount of surrounding symbolic rules can get you to the desired result -- i.e. you need to restart with a different approach. IMO such companies will simply pivot their products to a simpler, more suitable problem instead.

2022-10-19 22:25:16 @MattAlhonte @_inc0_ Cf https://t.co/djNAIUI3J4

2022-10-19 22:17:41 RT @rbhar90: This is an important point. There's a floodgate of money opening for LLM startups, but there will be a long painful road betwe…

2022-10-19 22:17:38 @rbhar90 And will likely involve writing *a lot* of symbolic rules to address the limitations encountered, ending up with a system that's a far cry from just a prompted LLM...

2022-10-19 22:15:04 @Machine01776819 Then you can read https://t.co/djNAIUI3J4

2022-10-19 22:02:52 Your future success may depend on being able to tell the difference...

2022-10-19 22:02:25 But the most important aspects of language and cognition aren't there, and won't emerge with more training data. This means that for many applications there will be an insurmountable wall when going from "here's a cool demo" to "here's a reliable solution".

2022-10-19 22:00:10 In the case of LLMs, it's a broad question, because there are many aspects of language to consider. Some of them are captured by LLMs, which means these systems are useful and can be leveraged to create valuable products.

2022-10-19 21:58:20 A very realistic humaniform robot might look like it's alive (despite not being the real thing). On the other hand, actually useful robots aren't optimized for realism (&

2022-10-19 21:58:19 The actionable question is not "does my attempted copy of X capture the essence of X" (meaning, sentience, life, etc...). Rather it is, "does the copy retain the aspects of the original that are practically valuable and that we care about?"

2022-10-19 21:58:18 Language models don't capture "meaning" in the human sense, nor are they sentient (even a little bit). But do these questions even matter at all, other than as a fun philosophical brainteaser?

2022-10-19 18:38:04 @_inc0_ The more people talk about how close we are to AGI, the less they're able to rigorously define general intelligence, and vice versa.

2022-10-19 18:27:32 In a more rational world we'd be able to talk about new technology in terms of what it is and what it enables, not in terms of a fantasy of what it could become if it were something entirely different

2022-10-19 18:25:42 The AGI discourse is counterproductive in many ways, one being that it forces you to counter-argue, "no, this avenue cannot lead to general intelligence because XYZ", whereas the much more interesting question is "what can you use this tech for?" -- not "will this lead to AGI?"

2022-10-19 17:57:48 @qhardy It's the same template: influencers or politicians milking a relatively fixed group of broke, powerless, frustrated, gullible, low-education folks for purchases or donations

2022-10-19 17:39:58 @qhardy Also: crypto, NFTs, alpha male supplements, most of the self-help industry (PUAs and friends), etc.

2022-10-18 22:46:20 I really believe that urbanism and architecture -- good and bad -- have a very powerful and immediate effect on one's mindset and well-being.

2022-10-18 19:31:03 One thing that's very obvious when looking at toddlers: there's a natural continuity between nonverbal and verbal communication. Language is mostly expressed through words, but language is not made of words.

2022-10-18 03:06:54 There is a large discrepancy between the catchy narratives that people use to make sense of AI and its evolution, and what actually works and produces value. This is an arbitrage opportunity.

2022-10-17 00:56:55 @el_keogh https://t.co/Dnc7rkJbAT

2022-10-16 23:47:48 https://t.co/JtwCUTDQsK

2022-10-16 19:36:30 I wish more people working on developing image generation tools had experience making art, or appreciating art and artists

2022-10-16 19:35:02 Worth repeating: art is not a "problem" to be "solved". New image production tools do not "solve art" or otherwise "kill art" or anything of the sort. https://t.co/gP9OHeuR7J

2022-10-16 19:16:19 So these models do possess a fair bit of common sense -- the kind that can be obtained via statistical mining of web data. (Which is only a subset of human common sense.)

2022-10-16 19:14:47 "salmon in the river" will get you the visual most likely to be associated with that caption on the Internet (i.e. the data the model was trained on). To get something that's a statistical outlier, you have to go with a prompt that's also an outlier. https://t.co/c4iBlVX63v

2022-10-16 19:11:53 There's a meme floating around that says "AI" interprets prompts overly literally, to comedic effect. In reality, these models interpret your prompts *statistically*, not *literally*. They show you the most likely output consistent with your query, in terms of the training data. https://t.co/IdawLpAtex

2022-10-16 16:50:35 @levie What's the use case you're most excited about?

2022-10-16 15:26:34 RT @LandupDavid: One of the reasons I #Keras is just how simple it is to port academic research into actionable code. Plain ViTs aren't…

2022-10-15 23:26:12 @yeahgirlscode Your definition of mainstream consumer use case is "training as a surgeon" and "learning to pilot a plane"? I'm not sure we'll find 2-3 billion people who want to do that for hours a day

2022-10-15 22:02:31 @voidiiiiy So you're saying crypto guys confuse the actual definitions of inflation and money supply, got it

2022-10-15 21:21:18 @mikealfred Sorry for literally reading your tweet and assuming it said what it said. Maybe it was intended as satirical?

2022-10-15 21:09:00 Never ceases to surprise me how many crypto guys don't know the definition of basic terms like inflation, money supply, QE/QT, etc. Bitcoin-denominated inflation has been ~250-260% year on year. E.g. cost of living 1 yr ago: $100/day ~ BTC 0.0016. Today: $110/day ~ BTC 0.0058 https://t.co/CyPeIegsiT
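
Checking the arithmetic in the example above (the tweet's own illustrative figures; the ratio lands just above the quoted ~250-260% range, within rounding):

```python
# Back-of-the-envelope check of the figures quoted in the tweet.
cost_then_usd, btc_then = 100.0, 0.0016   # $100/day ~ BTC 0.0016 a year ago
cost_now_usd, btc_now = 110.0, 0.0058     # $110/day ~ BTC 0.0058 today

btc_inflation = (btc_now / btc_then - 1) * 100  # BTC-denominated, in %
usd_inflation = (cost_now_usd / cost_then_usd - 1) * 100  # USD-denominated, in %
print(f"BTC-denominated: {btc_inflation:.1f}%, USD-denominated: {usd_inflation:.1f}%")
```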

2022-10-15 18:21:17 @w_t_payne Anything you can do in VR you could do on a laptop or phone. The only unique value prop of VR is immersiveness, which is not a desirable feature for the very large majority of computing use cases.

2022-10-15 18:19:23 @conjurial If you made smartglasses even more casual than smartphones / watches, they could become a thing. The tech is still a long way from that though.

2022-10-15 18:16:38 "Make it immersive" is a sensible value proposition for the serious gamer niche. But it is not a sensible paradigm for mainstream consumer computing. Computers will become casually ubiquitous to the point of becoming invisible. Not the other way around.

2022-10-15 18:13:54 The epochal trend in consumer computing has been towards making computing *casual*: increasingly lower-cost, lower-friction interactions with increasingly small and accessible computers. The PC, smartphones, smartwatches, smartglasses fit the trend. VR runs counter to it.

2022-10-15 16:41:17 Toddler reasoning: two or more toy cars joined together become a train. Tchoo tchoo!

2022-10-15 15:26:52 RT @curious_founder: In 2020, Scrubgrass, a power plant in Pennsylvania was about to close. Then Stronghold, a crypto mining company, cam…

2022-10-08 04:58:41 @RisingSayak @kaggle Congrats!

2022-10-08 01:57:52 Another way to phrase this -- every unit of focus not spent on making the product better is a distraction. Excellence is always your strongest leverage.

2022-10-08 01:55:43 I really believe that if you're fully focused on creating a delightful product, then the only remaining concern is whether you're solving a big enough problem. Everything else will get sorted out with a great product. Competition is simply not a concern.

2022-10-08 00:48:30 RT @LandupDavid: #KerasNLP makes creating an autoregressive GPT-style transformer as easy as these 5 lines. Not a black-box architecture re…

2022-10-07 20:41:49 @fadibadine Apparently so!

2022-10-07 20:32:23 I could never get used to the fact that English uses 1-indexing when referring to floors in a building. I much prefer 0-indexing, like in French or in Python.
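
In list terms (a playful illustration, nothing more):

```python
# Floors as a Python list: index 0 is the ground floor, matching French
# convention (rez-de-chaussée = floor 0) -- and Python's own indexing.
floors = ["ground floor", "1st floor", "2nd floor"]
print(floors[0])
```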

2022-10-07 18:40:10 My apologies with regard to the disruption of the call by a troll/hater with distressing imagery. We will invest in moderation tools to prevent this sort of attack from happening next time.

2022-10-07 18:01:26 The Keras community meeting is happening now -- join at https://t.co/7H0RdW35Wz https://t.co/pfZs4Bhy3i

2022-10-06 23:15:47 RT @capetorch: If you want to run Stable Diffusion natively on your Mac, the fastest option is Tf/Keras right now.You can check my analys…

2022-10-06 15:44:06 Programming isn't so much about conveying to machines instructions for how to do a thing. It's more about building clear mental models of the thing. That's the hard part. Writing the code and running it can be a tool to shape your own thinking.

2022-10-06 15:34:21 RT @sundarpichai: Excited to launch Pixel 7 and Pixel 7 Pro with our next generation Google Tensor G2 chip, which brings state-of-the-art #…

2022-10-06 00:43:53 22 months ago I thought "generating photorealistic movies from a script" was "only a matter of years at this point". Now we already have prompt-to-gif prototypes. I feel like we're 6-12 months ahead of schedule, thanks to the recent burst of interest and investment in the area. https://t.co/VVpSWpaWfH

2022-10-05 18:37:26 Text-to-video is getting more refined at a fast pace. Impressive results! https://t.co/9qAGAe56HZ

2022-10-05 16:30:35 RT @haifeng_jin: Keras community meeting is happening this Friday! Anyone can join. We will share the latest updates followed by a Q&

2022-10-05 16:29:16 RT @antoniogulli: 1st in #newreleases in #nlp and now also in #programming #algorithms - So happy to see this - #deeplearning  with #Tensor…

2022-10-05 04:31:00 @kcimc Even though the experience of painting by hand (and the output) is very different, that difference may be lost on young minds that haven't gone through the process, and the devaluation of paintings as artifacts might discourage them from starting their own journey.

2022-10-05 04:28:08 @kcimc I do think there's an important downside here, though. The psychological effect it will have on learners. I am not sure that young folks will be motivated to spend thousands of hours learning to draw if they can generate artistic-looking pictures in an instant.

2022-10-05 04:21:41 @kcimc Now, as a new tool, I do think it changes a lot for digital artists. The impact on future art is at a minimum comparable to the rise of 3D and tablets/PS, and quite possibly comparable to the rise of photography.

2022-10-05 04:20:16 @kcimc The relationship between an artist and their art is their own -- it cannot be changed by external forces, such as new technology, unless they decide so.

2022-10-05 04:16:37 To be clear, picture/video generation will definitely go fully mainstream via a thousand apps. That's a given. But it's what comes after that's particularly interesting to me.

2022-10-05 04:15:12 It's important to note that "generating pretty pictures" is not the real value proposition of this category of technologies. The broader point is to organize information and develop better interfaces between human minds and the ocean of information they're plunged in.

2022-10-05 04:07:53 Generated this one earlier. One of my favorites so far. It has color schemes similar to what I'd use in a painting https://t.co/aTX7iTFBGw

2022-10-05 01:10:31 @MarshallBCodes Link in tweet below...

2022-10-05 01:06:21 @svpino With such a large file size your best bet is to convert the .gif to .mov (there are utilities for that) which Twitter accepts (512MB limit). For a gif under 30MB you can just use lossy compression to get it under 10MB (the Twitter limit). That's what I do, at 50 iterations.

2022-10-05 00:44:04 @mturnshek We're currently adding it to the KerasCV version

2022-10-04 23:25:11 @kelvindotchan You need to train a classifier to evaluate aesthetic quality (yes it works) and use it to guide the generation process. Happy to help out if you want to add this feature to KerasCV
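
A heavily hedged sketch of the idea above. `generate_image` and `aesthetic_score` are hypothetical stand-ins for a real diffusion model and a trained aesthetic classifier; best-of-n selection is the simplest form of classifier guidance, not the actual KerasCV feature discussed, which would steer the diffusion process itself:

```python
import random

random.seed(0)

# Hypothetical stand-ins: in practice these would be a diffusion model and a
# classifier trained to predict aesthetic quality.
def generate_image(seed: int) -> dict:
    return {"seed": seed}          # placeholder for a generated image

def aesthetic_score(image: dict) -> float:
    return random.random()         # placeholder for the classifier's score

def best_of_n(n: int = 8) -> dict:
    # Sample n candidates and keep the one the classifier rates highest.
    candidates = [generate_image(i) for i in range(n)]
    return max(candidates, key=aesthetic_score)

best = best_of_n()
```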

2022-10-04 22:39:33 Stochastic lucid dreams https://t.co/H15ZFeWuIa

2022-10-04 22:36:15 Make your own: https://t.co/yg4Vob77sS

2022-10-04 22:35:44 The Latent-Quest of Unknown Kadath https://t.co/Zre210wwRs

2022-10-01 00:49:32 Is this meant to imply that there was a time when he *wasn't* like this? That's exactly how I remember him from 2015-2016 https://t.co/klJ9j9wYF3

2022-09-30 21:53:46 @rglucks1 The so-called "nationalists" are in fact part of a globalist movement, with Putin as its de facto figurehead. The "nationalists" in the US, in Europe, etc. are ideologically aligned and strategically coordinated.

2022-09-30 20:55:06 A humanoid robot sounds cool and all but what about a humanoid robot *with a jetpack*

2022-09-30 20:24:27 RT @divamgupta: Image2Image is now available in Tensorflow / Keras version of Stable Diffusion.Repo link : https://t.co/oPFfTcEIr7Colab

2022-09-30 17:26:31 5. KerasCV intro to Stable Diffusion https://t.co/a7lFP82znn

2022-09-30 17:25:52 4. Tutorial on diffusion models https://t.co/e2o4BYcZcA

2022-09-30 17:24:28 3. ~450 LOC implementation of Stable Diffusion in KerasCV (inference only for now) https://t.co/H3YU9wEgap

2022-09-30 17:21:48 2. Training a text-to-image model on your own data https://t.co/LPKYigkt3e

2022-09-30 17:21:08 If you're interested in image generation and text-to-image models, check out these codebases: 1. A minimal implementation of diffusion models https://t.co/Vs8smszb8c

2022-09-30 13:25:35 Putin invaded Ukraine, unprovoked, because he wanted to annex it. If you were one of those saying "it's because of NATO", "he doesn't have a choice", or "he wants to denazify Ukraine", you're either a Kremlin propagandist or a fool. https://t.co/yVaaNkM3LL

2022-09-30 13:13:09 If you're looking for a book to learn deep learning, this is your chance. https://t.co/VjpDBcoGut

2022-09-30 04:00:15 Twitter makes you better at writing. But social media might make you worse at thinking.

2022-09-29 21:36:19 You like to "commit to the bit?" How about you commit to the bit of work you're supposed to be doing right now

2022-09-29 19:43:28 @AdamSinger That's what I'm talking about. Let me know when you capitulate completely, I'll buy back in

2022-09-29 19:39:26 @AdamSinger I feel like everyone is in deep doomer territory today, so we've got to be at a local bottom at least

2022-09-29 11:49:47 RT @fchollet: If you just want to start generating your own images in Colab as fast as possible, use this minimal notebook: https://t.co/oL

2022-09-29 03:44:49 RT @TensorFlow: With KerasCV, anyone can use the highest-throughput StableDiffusion pipeline to generate images! Get started in under…

2022-09-28 18:33:25 The deep learning mega-engineering narratives ("we're going to solve every problem with a giant multi-modal 100 trillion parameter model trained on all human data", etc.) remind me of Cyc. But with a lot more funding.

2022-09-27 23:54:24 @JadenGeller It works on my Metal MBP in 512x512. See list of dependencies here: https://t.co/71D9TjSwxn

2022-09-27 18:26:50 @FBuranelli On my 16 core M1 GPU it's 35s/image. Haven't tried on a 32 core.

2022-09-27 16:54:34 If you just want to start generating your own images in Colab as fast as possible, use this minimal notebook: https://t.co/oLqZa3Q6fO https://t.co/D9rmAOXltP

2022-09-27 16:00:39 RT @luke_wood_ml: New https://t.co/vsff1ldJZA tutorial: High-performance image generation using StableDiffusion in KerasCV. In this guide…

2022-09-27 15:33:55 Many thanks to all those who made this implementation possible, in particular @divamgupta @luke_wood_ml and of course the creators of the original Stable Diffusion models!

2022-09-27 15:32:46 And finally, the image generation loop. https://t.co/eQ1lUgbpfR https://t.co/hvm4RNDklk

2022-09-27 15:31:40 This is the final image decoder model. 86 LOC. https://t.co/fhoV9tVvNK https://t.co/PlyftSxR2c

2022-09-27 15:30:26 This is the Diffusion UNet. A bit more hefty: 181 LOC in total. https://t.co/P4AJaF31vj https://t.co/8e60SQXOU6

2022-09-27 15:28:24 This is the text encoder (and its subcomponents): 87 LOC https://t.co/Ia8dioghJR https://t.co/Zkh7I5BZgi

2022-09-27 15:26:19 If you want to learn more about how Stable Diffusion works, I encourage you to check out the implementation. The model itself is only ~350 lines across 3 files. The image generation loop is ~100 lines. It gives you a good idea of what it's like to work with Keras. https://t.co/d2jNJ8k2YU

2022-09-27 15:25:03 Stable Diffusion is now available directly in KerasCV! And it's fast: 30% faster than the PyTorch version for a batch of 3 images on the NVIDIA T4 GPU (which is the GPU you typically get on Colab). Try it out: https://t.co/a7lFP8kIBv https://t.co/EnnRGmUmo1

2022-09-26 19:50:36 If you see bullies being bullies, call them out.

2022-09-26 19:04:24 RT @fchollet: Are you a Keras user? Every year, we run a (very quick!) survey so that we can better understand your needs. Please take it:…

2022-09-26 16:24:08 New tutorial on https://t.co/m6mT8SaHBD: Class Attention Image Transformers. Created by @RisingSayak https://t.co/abl8nuN2Bh

2022-09-26 16:08:05 RT @PyImageSearch: New tutorial A Deep Dive into Transformers with TensorFlow and Keras: Part 2 Connecting wires Positional Embed…

2022-09-26 14:41:22 @LukasPlatinsky We're working on adding fine-tuning to the Stable Diffusion implementation in KerasCV

2022-09-26 02:55:41 RT @haifeng_jin: Take a few seconds to influence the future of Keras. The annual user survey.

2022-09-26 00:57:42 "Codebase to train a CLIP conditioned Text to Image Diffusion model on Colab in Keras"https://t.co/LPKYig3q1eVery concise &

2022-09-25 15:14:00 @jasonbaldridge @GaryMarcus They're definitely compositional, even heavily so. But much simpler models show compositionality as well. It's not a super high bar. Their limitations lie not in a lack of compositionality but in the nature of the concepts they learn and how they compose them.

2022-09-25 15:07:20 RT @divamgupta: Last week I implemented Stable Diffusion using Keras / Tensorflow. Now its almost integrated in KerasCV thanks to @fcholl…

2022-09-25 03:39:34 (If you were wondering how often Stable Diffusion will give you a horse with more than 4 legs (or sometimes fewer) when you ask it for a photo of a horse: in my experience it's about 20-25% of the time.)

2022-09-25 01:17:04 This is the difference between discrete and continuous world models. Between a graph and a differentiable curve.

2022-09-25 01:16:11 On the other hand, a DL model is excellent at reproducing local visual likeness (what it's fitted on), yet it has no understanding of the parts &

2022-09-25 01:16:10 The difference between human-drawn bad bicycles and AI-generated photorealistic 5-6 legged horses is important and insightful. Humans are largely unable to reproduce the visual likeness of something. But they know what the parts are (2 wheels + 2 pedals + handlebar + saddle). https://t.co/6aUbO3XEGz

2022-09-25 01:09:00 @Plinz A 5-year old that draws disproportionate stick figures will still draw horses with 4 legs and 1 head and 2 eyes. On the other hand, a big curve is extremely good at reproducing local visual likeness (what it's fitted on) but has no understanding of the parts &

2022-09-25 01:06:35 @Plinz In fact, the difference between the human-drawn bicycles and the AI-generated photorealistic 5-6 legged horses is deeply insightful. Humans have no idea how to reproduce the visual likeness of something. But they know what the parts are (e.g. 2 wheels + 2 pedals + handlebars, etc)

2022-09-25 00:58:31 @Plinz Humans can't draw anything unless they explicitly practice for it, pretty much. And they most definitely cannot produce 1024x1024 RGB pixel grids of photorealistic images right from their brain.

2022-09-25 00:55:12 Big curves can be quite useful. Image generation is a striking example of that usefulness. They're useful enough to transform every industry out there and change the trajectory of our civilization. Just like computers did.

2022-09-25 00:49:48 This is simply a different kind of thing -- not an artificial brain, but an interpolative database of billions of pictures. Not quite a "database", actually, since it doesn't store the actual instances, only the parameters of the manifold they lie on. It's basically a big curve.

2022-09-25 00:46:15 There is no human that could do what current image-generation AI does -- producing on-demand pictures of absolutely anything in absolutely any style, including photorealistic. On the other hand, a human that has seen one million horses knows that horses have 4 legs. https://t.co/wcM28VGKJI

2022-09-24 22:52:27 @hardmaru https://t.co/f2qVzfF4Sk

2022-09-24 21:21:06 I generally expected that I would enjoy being a dad. But as it turns out I'm enjoying it a lot more than I expected

2022-09-24 19:12:13 Human intelligence is a poor metaphor for what "AI" is doing. AI displays essentially none of the properties of human cognition, and in reverse, most of the useful properties of modern AI are not found in humans.

2022-09-24 04:25:07 @sahiralsaid Is PT API-compatible with NumPy, like TensorFlow is? I'm not aware of any such feature...

2022-09-24 00:03:31 RT @TensorFlow: If you have any type of visual impairment, routine tasks can be difficult. The Lookout app uses #ML to make everyday tas…

2022-09-23 20:51:15 Just hit this: I replaced `np.triu` and `np.inf` with tfnp and `np.ones` with `tf.ones` -- just works https://t.co/QZNk55fsI1

2022-09-23 20:49:38 Tip: if you ever find yourself wondering, "what's the TensorFlow equivalent of `np.something`?" You can just use `tf.experimental.numpy.something`.
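
A quick sketch of the tip above (assuming TF 2.x, where `tf.experimental.numpy` is available), using the same `np.triu`/`np.ones` swap mentioned in the previous tweet:

```python
import numpy as np
import tensorflow.experimental.numpy as tnp

# Drop-in swap: same call, TensorFlow backend. Here, an upper-triangular
# attention-style mask built with NumPy and with its tnp equivalent.
mask_np = np.triu(np.ones((3, 3)), k=1)
mask_tf = tnp.triu(tnp.ones((3, 3)), k=1)

assert np.allclose(mask_np, np.asarray(mask_tf))  # identical results
```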

2022-09-23 15:00:47 RT @ayushthakur0: Even if you are not a frequent Keras users, consider taking this survey. There are some nice questions to provide your su…

2022-09-22 19:43:46 RT @ariG23498: If you are a Keras user, please fill the form. It takes ~5 mins.

2022-09-22 15:20:44 RT @fchollet: Are you a Keras user?Every year, we run a (very quick!) survey so that we can better understand your needs. Please take it:…

2022-09-22 03:49:11 @AdamSinger Seriously, people should charge their phones before posting

2022-09-22 03:46:54 I get stressed out when I see a phone screenshot where the battery is about to run out

2022-09-21 23:39:07 @OkbaLeftHanded

2022-09-21 23:27:03 @RetroMl Thanks!

2022-09-21 23:12:51 @mat_kelcey I used to complain about this (even filed a bug at one point). It is unexpected behavior IMO. You should use stateless_normal: https://t.co/rdXqzjO7Hz Going forward Keras and all of TF are moving towards using stateless random ops everywhere. Implemented in Keras already.
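
A quick sketch of the stateless alternative (`tf.random.stateless_normal` is the real op; the shape and seed values here are arbitrary):

```python
import tensorflow as tf

# Stateless random ops are pure functions of their seed: the same seed
# always reproduces the same draws, unlike the stateful tf.random.normal.
a = tf.random.stateless_normal(shape=(2, 3), seed=[42, 7])
b = tf.random.stateless_normal(shape=(2, 3), seed=[42, 7])

assert bool(tf.reduce_all(a == b))  # identical tensors from identical seeds
```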

2022-09-21 22:57:56 Are you a Keras user? Every year, we run a (very quick!) survey so that we can better understand your needs. Please take it: https://t.co/4hiSCZLeuI

2022-09-21 22:14:58 The surest sign that you understand a system is your ability to modify it and improve it.

2022-09-21 16:40:54 https://t.co/kGd50oUFR1: an open-source toolkit for animal pose tracking. Works with any type/number of animals. Published in Nature Methods. Seems highly useful both for academia and real-world deployment! Built with Python/TF/Keras. https://t.co/pkgvYWyCUT

2022-09-21 15:58:26 RT @NewYorkStateAG: Today, I filed a lawsuit against Donald Trump for engaging in years of financial fraud to enrich himself, his family, a…

2022-09-21 15:25:18 RT @LandupDavid: @fchollet @divamgupta The last lesson lesson covers design and training choices with Keras/TF that can introduce a:- 91%…

2022-09-21 15:25:02 RT @LandupDavid: @fchollet The book also covers some goodies like #KerasCV and #KerasNLP, which are amazing additions to the Keras ecosyste…

2022-09-21 15:24:46 RT @LandupDavid: Writing something, you hope but don't really expect that many people will read your work. Extremely blessed and thankful…

2022-09-21 15:23:30 @LandupDavid Congrats on the release!

2022-09-21 04:25:05 @felipeerias @migueldeicaza For now. We're still very early.

2022-09-21 03:55:27 Is it creative? Yes -- curation in a boundless space of choices is definitely creative, and requires taste if you want to produce something worthwhile.Is it art? Yes -- it's a new medium of expression.Is it in any way similar to the work of a digital illustrator? Not at all. https://t.co/eI2mqMeIVf

2022-09-21 03:50:46 @TechSupportMan2 From Transient Confusion to Stable Diffusion

2022-09-21 03:47:38 A few steps further -- new location, new snap https://t.co/eUsQToZuH6

2022-09-21 03:27:11 @cubemeow The way I'd do it, is intercept each variable as it's being accessed on the PT side to get an ordered list of all variables and their values. Then get the same list on the Keras side, and assign the values (same order). Not sure if you can do that with PT -- with Keras you can
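
A hedged sketch of the Keras half of that recipe. The toy models below stand in for the real PyTorch/Keras pair; `get_weights`/`set_weights` are the actual Keras APIs, the rest is illustrative:

```python
import numpy as np
import tensorflow as tf

# Two toy models with identical architectures, standing in for a PyTorch
# source and a Keras target. On the PT side you would instead collect an
# ordered list of each variable's value as it is accessed.
def make_model():
    inputs = tf.keras.Input(shape=(8,))
    x = tf.keras.layers.Dense(4)(inputs)
    outputs = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inputs, outputs)

source, target = make_model(), make_model()

values = source.get_weights()  # ordered list of variable values...
target.set_weights(values)     # ...assigned to the target in the same order
```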

2022-09-18 21:24:41 So tired of the toxicity on display in certain corners of ML. Fewer trolls, more builders, please.

2022-09-18 19:55:16 @ChrSzegedy @ykilcher Maybe getting regular "anonymous" insult emails from PyTorch devs does that to you. Don't make toxicity your brand.

2022-09-18 19:53:58 @ykilcher It was disrespectful to the creator of the repo. "It's just a joke" is not a universal excuse for toxic behavior. Especially not in this context -- not sure if you're aware, but we've been facing a coordinated harassment/insults campaign going on since 2017. Do better.

2022-09-18 19:04:11 Perhaps a trite thing to say, but by far my favorite part of working with open-source is the talent and energy of the community. It's a privilege for me to be working with your code. Grateful

2022-09-18 10:37:20 @ykilcher In the long run, memes and shitposting don't really work as a replacement for better fundamentals.

2022-09-18 10:31:16 @ykilcher You know, instead of shitposting you should check out the repo. It's faster, it's more concise, the code is elegant. You can add TPU and multi-GPU inference in 1 line. You can export the model to TF.js or TFLite for on-device inference. There's a reason TF/Keras has more users.

2022-09-17 20:51:45 @divamgupta Thanks! I'll take a close look.

2022-09-17 20:38:57 @divamgupta Wow -- super neat! Do you have a GitHub repo? Excited to check it out! Please reach out at fchollet@google.com

2022-09-17 02:57:02 Read this as "how do TF people..." and my first reaction was, well duh it's because they use Keras https://t.co/tICNWNTbJ7

2022-09-17 00:48:31 RT @penstrokes75: It's been a busy and rewarding summer. Had the privilege of contributing to KerasNLP as a GSoC contributor. Here's my "bl…

2022-09-16 23:10:21 @FailTrainS This would be a great topic for a book: how to design good abstractions? 1. Refrain from introducing abstractions that aren't strictly necessary. 2. Make sure the abstractions you introduce generalize broadly. 3. Iterate -- your first attempt won't be perfect.

2022-09-16 22:58:28 The day a machine can achieve a higher degree of conceptual clarity over software problems than an average software engineer, we'll have achieved strong AI. So far we are ~0% of the way there. Generating code that runs is an entirely separate, much easier problem.

2022-09-16 22:55:29 This has deep implications when it comes to AI-assisted software creation. So far, conceptual clarity originates exclusively from humans. The ability to automatically "fill in" code, with less human oversight (and thus less conceptual clarity), is likely to make software worse.

2022-09-16 22:52:39 Good software is as little software as possible -- good software stems from conceptual clarity.

2022-09-16 22:51:41 I've rarely seen a software engineer fail because they couldn't handle a complicated system. But I've often seen software engineers provoke cascading long-term failures because they developed an overly complicated solution.

2022-09-16 19:39:57 RT @LauraJedeed: By far the worst thing in these four terrible minutes is the thing where DHS falsified the immigrants' paperwork to random…

2022-09-16 19:14:40 RT @Sky_Lee_1: Rachel Self, a Boston immigration attorney tells you everything you need to know about immigrants sent to Martha’s Vineyard.…

2022-09-16 02:31:36 Frankly, I hope he can heal and find peace one day. I wouldn't wish this kind of manic hatred and paranoia on my worst enemy. https://t.co/lR21XtS3gZ

2022-09-15 23:20:04 Between two otherwise comparable options, pick the one that's less familiar to you: you will learn more

2022-09-15 19:38:39 It reminds me of how places like Turkey manipulate refugees.

2022-09-15 19:37:37 Lying to a group of vulnerable people (including children) to ship them to a place that was unprepared to host them (but did it anyway) to play political games is shameful. Good to know there are still people on the side of humanity, though. https://t.co/2Jqqdva9i2

2022-09-14 04:11:41 RT @jabuttee: that's all great but did you ever liberate your own mama's village https://t.co/wnZK97Mr72

2022-09-13 19:19:10 @GwogLyt Honestly it's neat that they're teaching Python in high school at all

2022-09-13 02:42:16 Over-engineering is the enemy of correctness and reliability.

2022-09-13 02:41:47 Code that is hard to read is full of dark corners where bugs can hide. When your code is perfectly clear, bugs have nowhere to hide.

2022-09-13 01:27:26 From November last year -- still relevant. https://t.co/gP9OHedO5J

2022-09-12 22:40:52 Most deep learning involves dealing with a guy named Adam and most data science involves dealing with a guy named Jason

2022-09-12 22:15:39 There's no drama like chess drama.

2022-09-12 17:35:30 Do you work with Machine Learning? Make sure to take the ML &

2022-09-12 11:10:10 RT @dimiboeckaerts: I’m currently reading this masterpiece and it really is a goldmine of information! Strong recommend for anyone in ML/DL…

2022-09-12 02:08:42 RT @georgewbarros: Animated GIF showing how a blistering Ukrainian counteroffensive liberated Kharkiv Oblast west of the Oskil River in 6 d…

2022-09-11 02:31:39 @IAtalkspace I think long walks in nature are effective because they encourage forgetting the self and focusing on your surroundings. The ideal self is ego-less.

2022-09-11 02:29:54 @IAtalkspace Long walks in nature?

2022-09-11 02:22:45 Just a thought. What will bring you peace is not reaching the goals you set for yourself, or the pursuit of self-improvement. Rather, it's... letting go of that self-focused mindset. Perhaps the easiest way to learn how, is to have kids. (But there might be many ways.)

2022-09-10 23:58:36 Slowly at first, then suddenly https://t.co/Lb62K6fJlZ

2022-09-10 10:11:33 If you were wondering how the Russian invasion is going: pro-Russia pundits are now at the "trust the plan" stage of cope. The entire frontline is crumbling. It's the beginning of the final chapter of the Russian defeat. https://t.co/K7oBuWOa0X

2022-09-10 02:35:41 The more you lean into what's wonderful, profound, and exciting out there, the thinner your tolerance for bullshit becomes.

2022-09-09 22:19:17 RT @luke_wood_ml: Almost a year in the making… KerasCV has a complete end to end ObjectDetection API complete with in graph COCO metrics, b…

2022-09-09 16:22:50 New guide on https://t.co/m6mT8SaHBD: train a RetinaNet object detection model with KerasCV. https://t.co/zx5vkBWDnx

2022-09-08 20:13:22 Our goal is to turn Machine Learning from a craft into an industry, and make it universally accessible across the developer community. https://t.co/BBgylaUKNl

2022-09-08 19:15:38 @dhritimandas_ We intend to switch it on by default in the future. Right now it doesn't work for 100% of models.

2022-09-08 19:04:54 Keras/TF is fast out of the box (reliably at least 15% faster than alternatives), but if you want to make it even faster, try: 1. XLA compilation via `jit_compile=True` (passed in `compile()`). 2. Running more GD steps on device via `steps_per_execution` (also in `compile()`).
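
A minimal sketch of how those two options look in code, assuming a recent TF/Keras version (the toy model and the values used are illustrative):

```python
import tensorflow as tf

# Toy model; the point is the two speed knobs passed to `compile()`.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(
    optimizer="adam",
    loss="mse",
    jit_compile=True,        # 1. XLA-compile the train/eval/predict functions
    steps_per_execution=8,   # 2. run 8 gradient-descent steps per host call
)
```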

2022-09-08 15:08:49 RT @paul_rietschka: There really isn’t a better deep learning ecosystem than TF

2022-09-07 16:23:50 TensorFlow 2.10 has been released! What's new? Read the announcement here: https://t.co/8lswhgvNRh

2022-09-07 14:04:22 I'll tell you a secret about having more impact. You've got to focus on fewer things, ideally just one -- your most ambitious idea. And you've got to commit and finish it.

2022-09-07 03:05:03 RT @ScottDuncanWX: Another unfathomable heatwave is unfolding in North America right now. This latest heatwave alone would be remarkable...…

2022-09-06 23:06:45 RT @GoogleAI: Today we introduce an ML-generated sensory map that relates thousands of molecules and their perceived odors, enabling the pr…

2022-09-06 22:33:59 @triketora Fake followers, pig butchering scams, etc. There could be 12 different reasons...

2022-09-06 22:28:22 I've never seen LLMs used in this context. Which makes sense, because either: - you care about your content/message, in which case you handcraft it, while using bots to scale up its distribution - you don't, in which case simple template-based generators are way more efficient.

2022-09-06 22:26:30 The only use of ML for spam or propaganda I've ever seen in the wild so far is FaceGAN, used to generate untraceable (but still visually distinctive) profile pictures. Very widespread at this point. https://t.co/I5nflGQOV5

2022-09-06 17:29:09 RT @fchollet: Announcement: we're going to be launching ARC 2, a larger &

2022-09-06 15:12:34 @__mharrison__ Having multiple ways to do something is usually good, as it enables your software to be used for more use cases and by more personas, in different styles. Python is a good example: it's a multi-paradigm language where there's always N ways to do anything.

2022-09-05 18:47:52 When you make art or music, remind yourself: you're not making just one piece. You're making exactly as many unique pieces as there will be viewers/listeners. As many independent, self-contained experiences, weakly synchronized, bringing the piece into existence in a new reality.

2022-09-05 18:37:33 Earth features 7 billion brains, each creating its own independent reality, only weakly overlapping with that of the others. That's a lot of parallel universes. And only a single main character in each one.

2022-09-05 17:20:19 Product managers talking about "well lit paths" implies the existence of many cutthroat dark alleys.

2022-09-05 16:49:46 RT @PyImageSearch: New tutorial: The first part of the Transformer series is out now! https://t.co/gkq0Ecznya The Transformer Archit…

2022-09-05 02:10:54 If you want to make the world a better place... chances are the most effective way to do it is to have kids and raise them well. So that the next generation may be kind, compassionate, and emotionally &

2022-09-04 00:31:36 @BarardoD I will ask the app maintainers

2022-09-03 17:27:11 @lexfridman For the most part, they do. Our cognition is distributed across our environment, our tools, our culture, other people...

2022-09-03 15:50:47 @khademinori We're all 14 billion years old, including AI. The time taken to develop an intelligent system is not a factor in evaluating the capabilities of that system.

2022-09-03 15:44:24 Being around kids is a simple antidote. In the past 1.5 years I've seen my son make more cognitive progress than I've seen in AI in my lifetime -- by a couple orders of magnitude.

2022-09-03 15:42:07 It seems to me that the intellectual current of simplistic hyper-reductionism that is so common in AI today originates in part from a lack of respect for biological intelligence -- or perhaps simply a lack of interest in other human beings.

2022-09-03 04:16:44 @ghostofhellas @archaeologyart @shingworks

2022-09-03 01:00:05 It's not exclusively trolls living in their mom's basement, either. Some of them have jobs in the tech industry. Some of them are even powerful. And yes, it's scary.

2022-09-03 00:55:17 People who have not experienced online harassment campaigns themselves can be skeptical that they exist. "Surely you must be exaggerating", they say. No, it really is a thing. Yes, some people are profoundly malfeasant like that. It's not a joke.

2022-09-02 22:30:56 RT @oneunderscore__: KiwiFarms archives data of political enemies — frequently trans people, often private citizens — then uses it to creat…

2022-09-02 22:30:38 RT @oneunderscore__: I've been covering bad parts of the internet for a long time now. For years, there was one site extremist researchers w…

2022-09-02 21:59:19 The mind has evolved to spot similarities and operationalize them, and it's exceedingly good at it.

2022-09-02 17:24:58 Also true. Good latent space photographers will do great things with the medium. But typing a generic prompt into an AI image generation app isn't any more interesting than pressing a shutter-release button. https://t.co/pJngWp59i5

2022-09-02 17:13:21 RT @Scobleizer: @fchollet Yes! Just like the real world, some people are highly talented at capturing the world in a way that makes it lo…

2022-09-02 17:10:24 Instead of walking around in the real world and taking pictures of it, you can now walk around in a latent space that interpolates past human creations, and take pictures of it. Latent space photography. And just like photography, it's art. It requires the eye of the artist.

2022-09-02 17:08:58 Image generation is a form of photography. Photography in a latent space that interpolates between hundreds of millions of images. When you take a photo, you don't "create" the picture, you take it. You find the scene you want, and you capture it the way you want. It's curation.

2022-09-02 15:48:46 Reports of art's death are greatly exaggerated. New tools don't kill art, they expand it. https://t.co/JcWHxYdGCN

2022-09-02 00:23:20 RT @yoshi_hide_ban: 2019kaggleARC@fchollet ARC2 GPT-3 , DALL-E , Stable Diffusion AGI …

2022-09-01 21:51:08 @mrigankanath_ TBD

2022-09-01 17:52:20 RT @michaelgmadden: This is an important dataset/challenge: a new iteration of the Abstraction and Reasoning Corpus, a sort of "AIQ test" f…

2022-09-01 17:20:07 @timohear I'd expect us to reuse many ARC 1 tasks but not all. And yes, any model that did well on ARC 1 will do well on ARC 2 (unless it was somehow specialized in specific ARC 1 tasks, which would have been a form of cheating)

2022-09-01 17:17:06 Go create some tasks here: https://t.co/O0eJ7gjZCr

2022-09-01 01:12:43 Me catching up on my reading list after I'm dead https://t.co/B5XJ9obURl

2022-08-31 16:48:28 A postdoc position where you will get to enjoy the New Mexico weather and views -- and work on solving ARC at the same time! https://t.co/yqX1eYiIWF

2022-08-30 22:35:08 "We shouldn't make our system more representative of the will of the people because then we couldn't hold on to power" is a remarkable message https://t.co/MRRK1wvJlL

2022-08-30 21:40:42 RT @seanjtaylor: @fchollet I think of this as inductive bias for products. The users are searching parameter space to accomplish their task…

2022-08-30 20:43:26 When we design Keras APIs, our goal is to make simple use cases extremely easy, *while* making sure the most complex use cases remain easier than they would be with a different framework.

2022-08-30 20:42:00 A well-designed product can be more usable *and* more flexible than a poorly designed product that has focused on either ease of use or flexibility.

2022-08-30 20:41:59 When you design a product, you have to navigate a fundamental tradeoff between ease-of-use and flexibility. What people miss, however, is that the area under the curve available for this tradeoff is a function of how well you design the product. https://t.co/sKYTrIsKhy

2022-08-30 02:56:08 Anyway, if you're eagerly waiting for crypto to come back up, don't worry: we are still so early. https://t.co/S13WquDDVA

2022-08-30 02:48:35 Related: that time Europe got plastered with ads for Floki, the token with a "cute" Viking dog mascot. Floki proceeded to fall by >

2022-08-30 02:44:29 You can tell crypto is a great investment because of all the billboards and TV ads telling you to buy it (~$1B ad spend in 2021). Those paying for the ads want to share with you the fabulous profits to be made in crypto. They definitely don't need you to be their exit liquidity. https://t.co/T5tfsnV2AU

2022-08-30 02:15:59 @liron @mark_dow Crypto has a clear value proposition though... It enables you to issue and trade securities while bypassing all existing securities laws. Thanks to blockchain tech you can run boiler rooms selling imaginary tokens to anyone gullible enough to fall for it, and get away with it.

2022-08-29 21:59:27 @Grady_Booch Only 99%? We're still so early.

2022-08-29 15:43:52 New tutorial on https://t.co/m6mT8SaHBD: abstractive summarization with T5 https://t.co/FMLElXGRfS

2022-08-29 15:04:47 If a measure is popular, the more pushback you get passing it, the better for you. Because you want your opponents to be very publicly seen taking a stand against the popular thing that helps people. It doesn't just increase your support, it reduces theirs.

2022-08-29 15:01:11 This may come as a shock, but enacting measures that help people is a good move politically. https://t.co/ihyIV5ta6R

2022-08-28 23:45:14 Every few weeks we take our little one to the children's museum, and every time he plays with the same things in completely different ways

2022-08-28 18:56:30 @mpbontenbal This is fixed now.

2022-08-28 16:38:39 New tutorial on https://t.co/m6mT8SaHBD: audio classification with Wav2Vec2 and Transformers https://t.co/ekcMKqM5pQ

2022-08-28 08:16:08 @Noahpinion "Ineffective egocentrism"?

2022-08-28 03:54:31 Back when I was painting more regularly, it was common for artists to throw a "no refs" when posting a new piece. Now it's "no refs no AI". https://t.co/R6WhWPo5D1

2022-08-28 00:36:47 @Plinz The key part of DL is that you learn X-to-Y vector space transformations that must be differentiable, i.e. continuous and smooth. And you learn them incrementally, via gradient descent. That is severely limiting.

2022-08-28 00:34:51 @Plinz This is how you can start making sense of (e.g. generating, correctly classifying, etc.) points that were not part of the training data -- which is what generalization is.

2022-08-28 00:33:29 @Plinz Rather it means finding a representation space where all input points fit on a low-dimensional manifold such that you can go from any one point to another via a continuous path along which all points are valid input representations. This is what enables generalization.
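
A toy numpy illustration of that point (the unit-circle setup is mine, not from the thread): the circle is a 1-D manifold embedded in 2-D input space, and interpolating in a representation that parameterizes it (the angle) keeps every intermediate point valid, while naive input-space interpolation does not.

```python
import numpy as np

# The unit circle is a 1-D manifold sitting in 2-D "input space".
theta_a, theta_b = 0.0, np.pi / 2
a = np.array([np.cos(theta_a), np.sin(theta_a)])  # point (1, 0)
b = np.array([np.cos(theta_b), np.sin(theta_b)])  # point (0, 1)

# Naive input-space (L2) interpolation: the midpoint falls off the manifold.
mid_input = (a + b) / 2
off_manifold = np.linalg.norm(mid_input)  # ~0.707, not a valid circle point

# Interpolating in a representation that parameterizes the manifold (the
# angle) yields only valid points.
theta_mid = (theta_a + theta_b) / 2
mid_latent = np.array([np.cos(theta_mid), np.sin(theta_mid)])
on_manifold = np.linalg.norm(mid_latent)  # ~1.0, still on the circle
```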

2022-08-27 23:44:18 If such a space cannot be learned, then you have on your hands a problem for which the manifold hypothesis does not apply, and DL is simply *not* a good fit for such problems.

2022-08-27 23:42:18 That's because your choice of encoding space (e.g. RGB pixel space) is completely arbitrary. How you encode your data is a choice you make, not some kind of law of nature! Defaulting to the Euclidean distance is also a choice! Nothing about it is intrinsic to your problem.

2022-08-27 23:42:17 The misconception has gotten so bad that even when you point to models that are explicitly *built to learn interpolative embedding spaces*, like Transformers, some folks are like "oh no that's not interpolation, the model contains nonlinear transforms!"

2022-08-27 23:42:15 The biggest difficulty people (even fairly senior folks) seem to have in grasping that most deep learning models perform interpolation is that they think "interpolation" means "input-space L2 interpolation", i.e. linear regression

2022-08-27 20:05:53 In particular, if you want to master a particular language, don't wait. Do it now. The cost only goes up from here.

2022-08-27 20:04:44 The younger you are the more you should learn, first because it's easier for you now and will get increasingly harder every passing year, and second because you will have more opportunities to use what you've learned throughout the rest of your life

2022-08-27 19:24:49 This is largely orthogonal to the question of compression. You will get compression in your model if you constrain it to compress. Models that are allowed to fully memorize their inputs will do so, which can degrade their generalization capabilities (again, it depends)...

2022-08-26 14:57:30 @Cassius75495871 What are you going to do about it, call the grammar police?

2022-08-26 14:50:05 Technology can obsolete specific business models and specific human skills. But it cannot obsolete human ingenuity and creativity, much like it cannot obsolete economic competition. They'll simply move to new areas.

2022-08-26 13:29:31 @OneDeanBocobo GPUs are useful for deep learning because deep learning models "fold" an input vector space into another vector space, which is done via floating point matrix multiplications, which can be efficiently parallelized on GPUs.
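
A minimal numpy sketch of one such vector-space "fold" (the shapes are arbitrary): a dense layer is a floating-point matrix multiplication plus a pointwise nonlinearity, and the matmul is the part GPUs parallelize so well.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))   # batch of 32 input vectors, 64-dim
W = rng.normal(size=(64, 16))   # learned projection into a 16-dim space
b = np.zeros(16)
h = np.maximum(x @ W + b, 0.0)  # one "fold": matmul + bias + ReLU
```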

2022-08-26 13:24:08 An interpolative search engine. Now, just like Google changed the world twenty years ago, you should expect deep learning to have incredible impact. It doesn't have to be intelligent at all to be tremendously useful when you scale it up to an immense corpus of data.

2022-08-26 13:20:11 Because it is analogous to a database, the usefulness of a deep learning system depends entirely on the data points it was constructed with. You get back what you put in (or interpolations of the same). Similarly to how the usefulness of a search engine depends on what it indexes

2022-08-26 13:15:12 Deep learning takes data points and turns them into a query-able structure that enables retrieval and interpolation between the points. You could think of it as a continuous generalization of database technology.
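
A toy sketch of that framing in numpy (the stored vectors and the query policy are purely illustrative): answer a query by retrieving the nearest stored points and interpolating between them.

```python
import numpy as np

# A tiny "continuous database": stored embedding vectors.
db = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])

def query(q, alpha=0.5):
    dists = np.linalg.norm(db - q, axis=1)      # retrieval: distance to stored points
    i, j = np.argsort(dists)[:2]                # the two nearest neighbors
    return (1 - alpha) * db[i] + alpha * db[j]  # interpolate between them

result = query(np.array([0.9, 0.9]))  # a blend of the two closest stored points
```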

2022-08-26 01:42:32 One thing I really can't get used to is just how fast toddlers learn. Blows my mind every day.

2022-08-26 00:54:39 RT @ScienceInsider: BREAKING: White House issues new policy that will require, by 2026, all federally-funded research results to be freely…

2022-08-25 21:16:46 This can make discussion frustrating, because the arguments become religious, based on dogma. Pushing back against the dogma makes you "evil". After all, we're facing an imminent risk of apocalypse, right?

2022-08-25 21:12:25 There has long been a mythology of AI. But the moment billion dollar fundraising efforts started to get organized around that mythology, it turned into organized religion.

2022-08-25 18:05:22 RT @erikbryn: Congrats to @geoffreyhinton on yet another well-deserved accolade! https://t.co/4dWhhhpQeL

2022-08-25 15:29:17 True story: many years ago when I first saw a Kirkland Signature product I happened to be in Kirkland WA so I assumed it was a local brand

2022-08-25 03:01:56 Those who want change take steps towards it, no matter how small. Those who are only pretending to want change make excuses.

2022-08-24 19:19:19 But that tech is not being developed by those who talk about AI alignment.

2022-08-24 19:18:32 Because to some extent these are technology problems. You can develop infrastructure that makes it easier for others to develop ML systems that are safe and fair by default (though tech will never remove the need for human judgment at the product design stage).

2022-08-24 19:15:15 One example of the mismatch between discourse &

2022-08-24 19:01:49 Things like: What data should we collect, or not collect? What kind of decisions should we automate, or not automate, with that data? Does our collected data capture a fair picture of the world? How does our system affect the human users it interacts with? ... https://t.co/otdGz3uO6Y

2022-08-24 18:49:47 Talking about alignment is good PR because: - It seems to imply progress towards general AI (despite lack of any) - It makes you appear to care about ethics and safety (without needing to be ethical or having to do anything towards safer/fairer deployment of ML systems) https://t.co/fn0CtfhYY0

2022-08-20 00:44:06 The really hard part here is the "future" bit. You're optimizing for an unknown objective -- the end goal is to be able to operate in the situations you'll face *next*, which aren't like the situations you've seen before.

2022-08-20 00:40:25 @SchwabeHenning This is classic selection bias towards only publishing things that work.

2022-08-20 00:39:55 @SchwabeHenning It isn't being ignored, quite a few people are looking at it. A major reason why you don't see many publications about it is because it's *hard*. I've seen multiple sizeable research projects on ARC get started and then shut down due to lack of results.

2022-08-20 00:38:05 To be intelligent is to optimize your existence trajectory to maximize experience while minimizing risk -- and converting this experience into learnings applicable in future situations with maximal efficiency.

2022-08-20 00:35:39 Experience efficiency refers to the conversion ratio between the space of situations you've experienced and the space of situations you've then become able to handle. Underrated concept

2022-08-20 00:32:12 To create mouse-level intelligence doesn't mean writing a program that behaves like a simulated mouse in a limited set of situations. It means creating something that can learn the same range of things (which is enormous) with the same experience efficiency and risk efficiency.

2022-08-19 22:04:41 @_onionesque AI gore is just those "research code" repos on GitHub

2022-08-19 21:56:11 We are now in an age of AI hubris. Folks with a very low understanding of the nature of human intelligence make very big claims very confidently.

2022-08-19 04:19:32 Democracy is an unlikely and unstable system that needs to be continuously willed into existence by everyone participating in it

2022-08-19 04:15:45 To survive, democracy needs more than institutions. It needs a culture of democracy. If a significant bloc of voters is no longer on board with democratic values, the clock is already ticking.

2022-08-18 15:29:26 Integrity over convenience, always.

2022-08-18 15:29:15 You won't find happiness if you settle for less than you deserve. You can compromise on many things, but you can't compromise on your fundamental values and goals.

2022-08-17 21:36:37 A sky so clear and so blue it makes you want to spread your wings and ride the wind https://t.co/vqxKIr25qW

2022-08-17 19:44:32 One thing we should all learn from children: how to use play to create meaning, fun, and learning opportunities wherever we are and whoever we are with.

2022-08-17 18:17:03 If you want to understand clearly how modern image-generation methods work, check out this Keras tutorial: https://t.co/e2o4BYcZcA https://t.co/QwZMOrTCQq

2022-08-17 01:22:26 RT @stevemullis: Quite a time we’re living in when healthcare workers, teachers and librarians are under threat and harassment from far-rig…

2022-08-16 22:59:10 RT @POTUS: The Inflation Reduction Act is now law. Giving Medicare the power to negotiate lower prescription drug prices. Ensuring wealth…

2022-08-16 17:33:53 The yearly "state of machine learning and data science" survey run by @kaggle is now open: https://t.co/wuKXtqQ6kM Go fill it -- these insights benefit the entire industry!

2022-08-15 15:45:25 VCs subsidizing the creation of new Netflix content https://t.co/V9N1jyg7nD

2022-08-14 22:34:03 Anti-pattern: - User makes a mistake. Framework doesn't run any checks. - Deploy. Long deployment time. - After deployment, mistake causes runtime error. - Error message is obscure. You'll need a while to run through another iteration of this loop. Good luck debugging...

2022-08-14 22:30:58 Development is a loop: mental model ->

2022-08-14 22:29:18 If you want to make any dev tool better: 1. Decrease latency of feedback sent to user (e.g. long startup time = bad) 2. Increase informativeness of feedback (e.g. error messages) 3. Make everything easier to test &
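
A minimal Python sketch of point 2 (the helper name and message are hypothetical, not from any real framework): validate eagerly, and make the error message name the bad value and hint at the likely mistake, instead of letting it surface as an obscure runtime error after a long deployment.

```python
def build_layer(units):
    # Fail fast at construction time with an informative message.
    if not isinstance(units, int) or units <= 0:
        raise ValueError(
            f"`units` must be a positive integer, got {units!r}. "
            "Did you accidentally pass a shape tuple or a float?"
        )
    return {"units": units}
```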

2022-08-14 21:35:14 RT @tveitdal: Europe’s rivers run dry as scientists warn drought could be worst in 500 years. Crops, power plants, barge traffic, industry…

2022-08-14 18:20:19 A good framework is a thinking companion. It should create cognitive shortcuts and solve problems for you. It should not clash with your mental models or create additional problems for you.

2022-08-14 01:17:16 @neurobongo Maybe he just absorbs tons of CO2 from his surroundings when he transforms

2022-08-13 20:45:37 Personally I would recommend Build by @tfadell

2022-08-13 20:44:02 Founders and managers: what are the books that you've found most helpful in your careers?

2022-08-13 17:27:05 Had an elaborate dream about East Asian dragons. Turns out they can fly without wings because they're filled with hydrogen, like an old-school zeppelin. This can lead to the occasional Hindenburg event.

2022-08-13 15:56:42 @qhardy @Larimer1 Of all the bad faith claims, the most unbelievable one so far is the idea he "worked" at night

2022-08-13 00:26:13 RT @carrigmat: Over the last year we've put a lot of effort into refreshing and overhauling everything TensorFlow-related at Hugging Face.…

2022-08-12 20:33:36 RT @Weather_West: New work co-led by @xingyhuang and me on the rising risk of a California #megaflood due to #ClimateChange is out today in…

2022-08-12 16:00:59 If you brainstorm 50 business model ideas and then pick the one that sounds best to you, you still only have an average pie-in-the-sky business model idea. But if you iterate on one average idea for 50 iterations in contact with customers, then you have a promising business.

2022-08-12 16:00:58 Lots of folks saying "but the way to have good ideas is to have many ideas and then select the best ones!" Disagree, the way to have good ideas is to iterate many times on the same ideas. https://t.co/AuHsOFec6m

2022-08-12 02:28:26 @Shano901 @SamHarrisOrg I mean it's hard to make an original joke on this site...

2022-08-12 02:26:06 @Shano901 @SamHarrisOrg Damn, I missed this. Have to delete then

2022-08-11 22:25:05 You don't need to have many ideas, but they need to be good, and you need to be able to explain them clearly.

2022-08-11 21:23:50 RT @AlecStapp: New data from @NFAPResearch:Immigrants have started more than half (319 of 582!) of America’s startups valued at $1 billio…

2022-08-11 19:41:48 RT @PyImageSearch: Join us on our LIVE Learning Session tomorrow (12th August 2022): Neural Machine Translation | TensorFlow and Keras |…

2022-08-11 18:50:22 @assadollahi No, we love Hugging Face, and we don't base our roadmap off of what other companies are doing. We're simply building up the functionality our users need the most.

2022-08-11 16:57:18 It includes how to train a tokenizer on your own data, and it covers a range of generation styles such as Beam search and Top-K search.
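
Top-K search itself is simple enough to sketch in a few lines of numpy (a generic illustration, not the KerasNLP implementation): restrict sampling to the k highest-scoring tokens, renormalize, and draw.

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Sample a token id from the k highest-scoring entries of `logits`."""
    logits = np.asarray(logits, dtype=float)
    top = np.argsort(logits)[-k:]                    # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())  # softmax over the top k
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

rng = np.random.default_rng(0)
logits = [0.1, 2.0, -1.0, 1.5, 0.3]
token = top_k_sample(logits, k=2, rng=rng)  # always index 1 or 3
```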

2022-08-11 16:55:46 New tutorial on https://t.co/m6mT8SrKDD: GPT text generation with KerasNLP https://t.co/9TbqUmq1o8 Demonstrates how to build your own scaled-down GPT-style generative model and train it on your own data, using KerasNLP components.

2022-08-11 11:21:06 While the march of posting science continues unabated, one thing will never change: there will always be the same constant fraction of humor-immune people who won't notice there was a joke in the first place. Just impervious to progress. Whoosh. Did you hear that?

2022-08-11 04:47:29 Look at memes from 2010 -- it's scary how much more sophisticated online humor has become since. By the 2030s posting science will have advanced so far that we will need specialized AIs to navigate the layers of irony

2022-08-10 21:48:09 The best infrastructure is the stuff that you never notice or think about, but would leave you utterly stranded if it suddenly disappeared.

2022-08-10 15:11:33 Important to stay open to the possibility that what will come to define you tomorrow could be something that isn't even on your radar today. Explore and learn

2022-08-10 03:37:32 When developing a framework, you can never value API stability enough.

2022-08-08 21:08:29 I've consistently found that you get more energy by... spending more energy. A bit like a dam where a small crack gets a lot bigger as water rushes in.

2022-08-08 19:31:03 RT @peterbakernyt: @sbg1 @NewYorker Trump soured on generals who he thought should be loyal like he thought Hitler's officers were.“You f…

2022-08-07 20:15:39 RT @ayushthakur0: Clean implementation of SimCLR in @TensorFlow 2.x: https://t.co/1PYabwAgIu Repo: "We use tf.keras layers for building th…

2022-08-06 19:52:02 To achieve something big, you need to care about achieving it more than you care about your pet theories being right, and more than you care about personally getting the credit. Organizations with a culture that gets this right are much better positioned to achieve big things.

2022-08-06 03:03:34 The weirdest thing about life: you start out young, but before you know it, you're old. No warning, no nothing.

2022-08-05 21:55:54 RT @oneunderscore__: Verdict: Alex Jones owes $45.2 million more to Sandy Hook in punitive damages. That's on top of $4.1 million in compen…

2022-08-05 21:50:15 @willdepue Of course you could argue that this is true of falling in love in general, and that falling in love with a chatbot is simply an extreme manifestation of this pattern. Falling in love is something that your brain inflicts on itself rather than something that the world inflicts on you

2022-08-05 21:48:07 @willdepue They're not "gullible" in the sense they aren't being deceived by an external force. Rather, they're actively deceiving themselves -- they want to be deceived. The chatbot is just a mirror where they're staring at something they've created.

2022-08-05 21:41:42 @jingle__belle Only if you want to learn through pretend play (actual social interaction would actually be more effective here). But it's true that pretend play can be a learning medium!

2022-08-05 21:38:02 TL

2022-08-05 21:33:45 You could think of chatbots as a video game genre -- The Sims for interpersonal interaction. If you're not interested in playing that game, then you're unlikely to find any utility in a chatbot.

2022-08-05 21:30:32 The good news for chatbot makers is that there are lots of people out there interested in *roleplaying social interactions* (more than in actual social interactions). In large part because our extremely online society does a good job at churning out socially alienated people.

2022-08-05 21:27:21 Chatbots are a bit like a mirror. They reflect back to you what you give them. As a user, if you do everything you can in your dialogue to make the chatbot behave in a human-like way, it will. But if you break character just a little bit, so will the bot.

2022-08-05 21:24:39 Generally speaking I think chatbots can be useful as a roleplay medium, but quickly break down when you step out of that specific use case. For a chatbot experience to be convincing, the human user needs to be actively working to make it convincing. You need to suspend disbelief.

2022-08-05 21:18:47 Facebook's Blenderbot might be slightly lacking in self-awareness. https://t.co/WV6AeKtLwc

2022-08-05 18:03:31 Join at https://t.co/7H0RdVKWIr

2022-08-05 18:02:58 Happening now! https://t.co/qJWrd0N0JX

2022-08-04 21:26:01 RT @luke_wood_ml: Excited and proud to share an API sneak peek of the KerasCV object detection API featuring a fit() friendly RetinaNet, Pa…

2022-07-27 06:17:19 RT @ArnaudMarechal: Amazon enables the development of counterfeiting in the book sector. An illustration with 2 bestsellers in the…

2022-07-27 04:41:57 To use a metaphor -- it's not as if some guy on the street were selling bootleg items next to a massive supermarket that sold genuine ones. It's as if this guy were empowered by the supermarket to systematically replace the genuine items on the shelves with his own fakes.

2022-07-27 04:41:32 This is hijacking a large fraction of total book sales -- for some books, a majority. This is theft of purchase intent (and that purchase intent typically originates outside of Amazon). For authors and publishers, this represents a massive loss of revenue.

2022-07-27 04:40:48 If someone wants to buy my book or @aureliengeron's book (etc.), they will search for it on Amazon, find the book's official page, and click "buy". And Amazon will be routing this purchase intent *by default* towards a seller of counterfeits.

2022-07-27 04:40:23 This goes far beyond "a 3rd party seller on Amazon is selling counterfeit copies." The gist of the issue is that for many bestselling books, Amazon is routing people towards counterfeits *by default*. Which is a big deal because Amazon is the default online bookstore for most people.

2022-07-27 04:13:01 Exact same situation for @aureliengeron's ML bestseller right now: a fraudulent seller is the default. https://t.co/qrrZOtL9hF The level of mismanagement here is staggering.

2022-07-27 04:08:16 I spoke way too soon when I said the problem was resolved for my book -- 24 hours later a fraudulent seller is now back as the default buying option for both editions of my book. Sigh...

2022-07-27 03:23:57 Very cool! Today we're just scratching the surface of the space of things that could be done with computer vision. https://t.co/uPko0u00t8

2022-07-27 01:06:43 @guysnovelutumba I hope you can one day find your way back to sanity.

2022-07-27 00:54:46 Wherever there are people, we allocate most of our attention towards them. It's only when we are truly alone that we regain the ability to fully perceive -- and enjoy -- the inanimate world around us.

2022-07-26 21:19:58 RT @aureliengeron: Yes, sadly I can confirm that I've had several months with almost no royalties despite my book being in Amazon's best se…

2022-07-26 21:19:12 @choc_eclair Get it refunded and buy from Manning...

2022-07-26 20:51:41 @he_negash They obviously don't, otherwise those sellers would get banned after a few days -- in practice they stick around for years and sell thousands of copies

2022-07-26 20:47:47 It may not be entirely obvious at first that a given seller is selling exclusively counterfeit items, because that seller may appear to have thousands of ratings, 99% positive.An important reason why is that Amazon takes down negative reviews related to counterfeits. https://t.co/1ihQ9Rq3Sn

2022-07-26 16:30:02 RT @__DavidFlanagan: I ordered a copy of my book from Amazon third-party seller "Your Toy Mart", and indeed, as this thread says, it was co…

2022-07-26 16:29:40 @__DavidFlanagan If there's no PDF of your book out there, they may have cloned the content via a high quality scan of the print book, in which case they would have had to recreate the spine graphics on their own (with mistakes)

2022-07-26 03:12:18 If it's impossible for Amazon to ensure the trustworthiness of 3rd party sellers, then perhaps there should be an option for publishers/authors to prevent any 3rd party seller from being listed as selling their book (esp. as the default option for people landing on the page).

2022-07-26 03:09:38 Specifically, the default buying option for the 2nd edition of my book is now Amazon itself, rather than any third party seller. For the 1st edition, the default option is still a counterfeit seller, though. Perhaps this widespread problem needs more than a special-case fix.

2022-07-26 03:03:16 @__mharrison__ Step 1: find a whiteboard. Step 2: draw a Pikachu from memory. You only get 1 try

2022-07-26 02:11:36 RT @ManningBooks: Now in print!Put #deeplearning into action using R and the powerful #Keras library: https://t.co/9rpU8yVSwK @RStudio…

2022-07-22 14:44:41 RT @chrisalbon: "Grok" isn't that you know something really well, it is that you know something so well that it is the prism through which…

2022-07-22 14:39:12 It's hard for a longtime software engineer to explain something without relying on programming analogies

2022-07-21 18:58:30 Artists know this well -- it's far easier to draw fantasy monsters than to draw human hands in a specific position. The former features many degrees of creative freedom, but the latter needs to follow a precise spec. https://t.co/lQsNvfcGvQ

2022-07-21 18:16:10 I've also been pointing out what would soon become possible.

2022-07-21 18:15:58 I haven't just been calling out the limitations of DL, like pointing out in 2016 that DL would be fundamentally unable to tackle abstract reasoning (vindicated so far) or that the commonly accepted timeline for full self-driving was overoptimistic by 5 years (also vindicated)

2022-07-21 18:15:37 Sometimes people will say I'm a pure skeptic, a prophet of bad news. Absolutely not!

2022-07-21 18:14:26 Now, this inaccurate statement that "no one predicted this!" is being used to implicitly dismiss AGI skeptics. Hi, it's me, I'm an AGI skeptic. And I was predicting the near-term rise of human-level creative AI back when the deep learning community had fewer than 5k people.

2022-07-21 18:13:35 In addition, while it seemed extremely far-fetched in 2014, by 2017 creative AI was starting to look within reach. We didn't achieve the current text-to-image results overnight; they have been the result of a series of incremental steps over the past 8 years.

2022-07-21 18:12:14 It was certainly a very niche position at the time, and it met a lot of pushback and disbelief. But saying "no one saw it coming" is inaccurate. I was telling people about it -- and some folks have already been working with generative AI for many more years!

2022-07-21 18:12:13 Eight years ago, I was telling people that in a not-so-distant future, they'd be consuming music, art, movies created with the help of deep learning. (Image: extract from my 2017 book.) Back then the state-of-the-art in image generation was 28x28 MNIST digits (June 2014). https://t.co/xTqdHesNbd https://t.co/s24olPmScH

2022-07-21 16:54:25 The biggest lesson here: don't attempt to fix branding/narrative issues with technical solutions. Technical solutions must be in service of technical problems. This extends to many other contexts... https://t.co/TlYsIy5kfY

2022-07-21 04:47:37 @togelius @sama I remember talking about this at length in 2014 (in fact, in my book, written in 2016-2017, I mention talking about it in 2014). I also have 7+ years of tweets about the possibilities of image generation, music generation, game content generation. https://t.co/fCEN3UQdit

2022-07-20 21:45:36 RT @fchollet: Compilation thread of various Keras tips

2022-07-20 03:29:54 My general feeling is that in the near term it will settle down as a meme template -- one longer-lived than average, but nevertheless perishable. However, in the fullness of time, it will be reborn as a brave new medium.

2022-07-20 03:25:49 Right now the role of image generation technology in our culture is somewhere between a meme template and a new medium of expression.

2022-07-19 20:27:44 The first step to fixing any problem is realizing that you have the ability to fix it. That's the step at which the most possibilities die.

2022-07-19 19:08:35 RT @RisingSayak: There's a new kid on the open-source block named KerasCV. It aims at resolving some of the most burning pain points we,…

2022-07-19 16:27:25 RT @erikbryn: It's 104F in Paris now. Here's what the IPCC says about how extreme weather relates to climate change:“even relatively sm…

2022-07-18 20:22:45 @AdamSinger Cool LA pad man

2022-07-18 02:25:29 @babiejenks Hyper-individualism, fear/hatred of others, in particular the collective

2022-07-18 02:19:00 2. These belief systems have all "evolved" to prey on easily influenceable, low-information types (typically to extract money from them), hence they all end up with the same believer base despite not otherwise sharing any theme / content.

2022-07-18 02:17:30 I can see at least two hypotheses: 1. A set of correlated personality traits (including sociopathy and paranoia) that makes one more attracted to all of the above

2022-07-18 02:13:33 It's truly perplexing how much overlap there is between far-right political beliefs, antivax beliefs (and other conspiracy theories), and crypto gambling. What's the latent variable?

2022-07-18 01:21:37 Please don't show `from xyz import *` in your official docs / code examples
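[Editor's illustration, not part of the original tweet: the core hazard of star imports is that they dump unknown names into the current namespace and can silently shadow local definitions. A minimal sketch using only the standard library -- the `join` helper is a made-up example:]

```python
# Why `from xyz import *` is risky in docs and examples:
# it can silently replace names the reader already defined.

def join(parts):
    """A local helper that joins strings with dashes."""
    return "-".join(parts)

assert join(["a", "b"]) == "a-b"   # works as intended

from os.path import *  # noqa: F403 -- pulls in join, basename, split, ...

import os.path
assert join is os.path.join        # our local helper was silently shadowed!

# Prefer explicit imports, which keep the provenance of every name obvious:
from os.path import basename
assert basename("/tmp/file.txt") == "file.txt"
```

With explicit imports (or `import os.path as osp`), the shadowing above would be impossible and the reader could tell at a glance where each name comes from.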

2022-07-17 22:08:46 @ChrSzegedy I appreciate you saying you don't understand it, as opposed to many folks who make nonsensical claims with absolute confidence (e.g. denying the existence of reasoning capabilities in humans)For my own thoughts on what reasoning &

2022-07-17 15:25:57 RT @ClimateHuman: I wish everyone on Earth knew how genuinely "off the charts" key planetary trends are right now, and how abnormal and cri…

2022-07-17 04:03:04 Really great books don't have a specific audience. You can read them at 12, at 24, at 48, at 96, and each time you will derive from the experience something different yet deeply engaging.

2022-07-17 02:02:37 @JoeGochal Deep Learning with Python 2nd edition, chapter 14 https://t.co/LvbEy5A0k8

2022-07-17 01:08:31 In general there is far too much focus on *what* to learn and too little on *how* to learn -- what learning engine to use. Most researchers seem to assume gradient descent is always the way to go. To the extent that some folks are unable to imagine there could exist alternatives.
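[Editor's illustration, not part of the original tweet: to make the "learning engine" point concrete, here is a toy sketch showing two different engines -- gradient descent and gradient-free random search -- minimizing the same objective. The objective and all parameter values are arbitrary choices for the demo:]

```python
import random

# Objective: f(x) = (x - 3)^2, minimized at x = 3.
def f(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

# Engine 1: gradient descent (requires access to gradients).
x = 0.0
for _ in range(100):
    x -= 0.1 * grad(x)

# Engine 2: random-search hill climbing (needs only function evaluations).
rng = random.Random(0)
best = 0.0
for _ in range(2000):
    candidate = best + rng.gauss(0.0, 0.5)
    if f(candidate) < f(best):
        best = candidate

# Both engines end up near the optimum x = 3, via very different mechanisms.
```

The point is not that random search is competitive at scale -- it isn't -- but that "how to learn" is a design axis in its own right, separate from "what to learn".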

2022-07-17 01:03:57 A lot of nonsense could be avoided if only more folks in AI understood the difference between reasoning and pattern recognition -- and could see the spectrum in between them.

2022-07-16 03:30:55 There's an extremely high correlation between blaming others for team failures and being a poor contributor.

2022-07-15 23:02:30 Bit of a missed opportunity not to have named the Chrome Web Store "Chrome Depot"

2022-07-15 22:57:32 RT @fadibadine: If you’re into #MachineLearning and #Keras, check out the below 10 Keras ecosystem packages that cover #NLP #ComputerVision…

2022-07-15 19:38:35 10. TF Quantum. Experiment with hybrid quantum-classical Keras models. https://t.co/B0ir9FoyDV

2022-07-15 19:37:31 9. TF Model Optimization Toolkit. Optimize your Keras models prior to deployment, via quantization, weight clustering, etc. https://t.co/h3qichyArj

2022-07-15 19:36:02 8. Keras-OCR. An end-to-end pipeline for detecting and decoding text in arbitrary images. https://t.co/q7rN49DI73

2022-07-15 19:34:35 7. TF-Recommenders. Build powerful recommender systems with Keras &

2022-07-15 19:33:38 6. Spektral: Deep Learning on graph data. https://t.co/C56crYfALo

2022-07-15 19:32:24 5. AutoKeras. Use AutoML to quickly create a Keras model for a given task. https://t.co/lJMuVlsfdQ

2022-07-15 19:31:16 4. Mask R-CNN. An implementation of a popular object detection &

2022-07-15 19:30:12 3. KerasNLP: Building blocks for natural-language processing workflows. https://t.co/avgzTOIe38

2022-07-15 19:29:39 2. KerasCV: Building blocks for computer vision workflows. https://t.co/IVC0DNHyFP

2022-07-15 19:28:52 10 Keras ecosystem packages to check out: a thread. 1. KerasTuner: Hyperparameter tuning for humans. https://t.co/qkmMkmjLsT

2022-07-15 01:14:20 A Myst age for sale. Hope the travel book is included https://t.co/Ju9oeJyPzT

2022-07-14 19:51:22 If a hobby results in denial of life insurance coverage or otherwise much higher life insurance premiums, that's the market's way of telling you that you shouldn't be doing it.

2022-07-14 17:49:39 @10x_er Homebrew is good?

2022-07-14 17:48:55 Also macOS still shipped with a Python 2.7 installation by default in 2020, *after* the py2 end-of-life deadline. Not sure if that's still the case today

2022-07-14 17:45:28 Given that a large fraction of developers use MacBooks, it's pretty weird that macOS doesn't have an official package manager for developers.

2022-07-13 23:43:44 Young children pay far more attention to the world around them than teenagers or adults do.

2022-07-13 22:00:06 @karpathy Best of luck in the next leg of your journey, Andrej!

2022-07-13 19:13:24 @hardmaru I know peer review and conferences/journals are annoying, but they remain one notch above amateur creationist "science" websites and cult content.

2022-07-13 18:32:19 RT @petewarden: I've always dreamed of seeing @TensorFlow Lite on a Commodore 64! https://t.co/0l7tQV233V

2022-07-12 19:03:17 RT @TensorFlow: New to #ML, but have an intermediate programming background? We have tools that can help you learn. Explore the Machin…

2022-07-12 17:58:12 https://t.co/0wyo0ML3wF

2022-07-12 16:29:50 RT @simonw: This thread offers an unsurprising but interesting glimpse behind the scenes of a crypto-token pump-and-dump - including how $3…

2022-07-12 08:03:47 A big part of the reason some folks are simultaneously so clueless and so hubristic about intelligence and its artificial versions, is that they've picked bad measures of progress.

2022-07-12 08:01:56 Developing a measure of progress towards your goal should make you better understand your goal and the problem you're solving. But picking the wrong measure makes you understand your problem *less*. It makes you less likely to achieve your initial goal.

2022-07-11 21:10:10 PyPI starting to require 2FA for high-download packages is unquestionably a good thing -- and it was rolled out in a way that minimized hassle. The fact that some people decided they had a huge problem with it anyway simply illustrates that programmers love complaining.

2022-07-11 17:51:52 This is a joke, but it's also not. You know there *will* be benchmarks for joke understanding, and LLMs *will* get superhuman on them. This in itself will be a masterpiece of absurdist humor -- like many other facets of modern deep learning. https://t.co/Psh29PMsRX

2022-07-11 17:49:44 By 2026 AI humor levels will be 20x that of the funniest humans. Humanity will be stuck in a state of perpetual laughter.

2022-07-11 17:48:26 Its generative version will take a prompt and produce matching jokes that will have you in tears. Enjoy endless humor by querying the API -- only $0.02 per joke. Comedians will be out of jobs.

2022-07-11 17:45:51 Soon, the field of ML will come up with a benchmark for joke understanding. "If an LLM can explain the following 2,500 jokes, then it has solved the problem of humor." Then by 2024 a 300B parameter model will achieve superhuman humor levels.

2022-07-11 03:56:01 Meanwhile the above is about how the product of multiple features may be (and often is) more informative than the features in isolation (considering the whole of the data).

2022-07-11 03:52:42 A couple of folks are bringing up Simpson's paradox, so to be clear: this is absolutely not Simpson's paradox. Simpson's paradox is about how the relation between a feature and a target varies based on the subset of the data you look at (it may disappear or reverse). https://t.co/soXQK8AWUH

2022-07-11 03:05:33 This is why feature selection based on some measure of mutual information between the feature and the target will only work in the simplest of cases. Avoid!

2022-07-11 03:02:05 Almost every problem features some form of this phenomenon. For instance a single pixel in an image has near-zero information about image labels. If you train individual ImageNet classifiers on each pixel and ensemble them you won't do much better than random.

2022-07-11 02:59:48 There are many examples of this. A striking one: the stock market. Past prices on their own hold no information about future prices, yet they're a must-have feature -- they just need to be combined with other sources of information.

2022-07-11 02:27:46 A fun phenomenon in ML is when a feature on its own holds absolutely no information with regard to the task, but turns out to be very useful when combined with other features.
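[Editor's illustration, not part of the original thread: the XOR pattern is the textbook case of this phenomenon -- each feature alone has essentially zero mutual information with the target, while the pair determines it exactly. A self-contained check using empirical mutual information computed from counts; all names are illustrative:]

```python
import math
import random
from collections import Counter

random.seed(0)
# Two independent binary features; the label is their XOR.
samples = [(a, b, a ^ b)
           for a, b in ((random.randint(0, 1), random.randint(0, 1))
                        for _ in range(10000))]

def mutual_info(pairs):
    """Empirical mutual information (in bits) between the two elements of each pair."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

mi_a  = mutual_info([(a, y) for a, b, y in samples])       # ~0 bits: useless alone
mi_b  = mutual_info([(b, y) for a, b, y in samples])       # ~0 bits: useless alone
mi_ab = mutual_info([((a, b), y) for a, b, y in samples])  # ~1 bit: fully decisive
```

Any per-feature filter (including mutual-information ranking) would discard both features here, even though together they solve the task perfectly -- which is exactly why such feature selection only works in the simplest of cases.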

2022-07-10 20:59:16 Believing what everyone else believes and doing what everyone else does is the surest way to achieve average outcomes.

2022-07-10 05:23:11 Summer https://t.co/xZItPdvrtH

2022-07-08 22:12:43 @AdamSinger @gaberivera @btaylor I'm willing to contribute $3

2022-07-08 21:54:57 @triketora @Twitter All evidence indicates that they don't

2022-07-08 21:18:15 RT @fchollet: "No code" is just the beginning. I'm looking forward to no software, no computer, just living in the woods

2022-07-08 20:25:12 RT @simonw: If you maintain a project on PyPI that's in the top 1% of downloads over the past 6 months you qualify for a free Titan securit…

2022-07-08 20:07:37 In all likelihood there have been thousands of fake copies sold. It has been going on for months. https://t.co/jkWAB1iD9N

2022-07-08 18:57:57 I was shocked to find out this was the case for my book, too. In retrospect it makes sense -- a book is one of the easier things to counterfeit, provided that you have the PDF version.

2022-07-08 18:56:34 This is accurate -- the same problem (ordering a product and receiving a counterfeit made by a fraudulent 3rd party seller) exists for every product category. Toys, vitamins, etc. Knockoffs are sold interchangeably with the real thing, from the "official" page of the real thing. https://t.co/toFuEKvVSV

2022-07-08 18:51:15 Besides buying from the publisher's website, you can also ask your local bookstore to order the book from the publisher (they get a better price) and then buy it from the bookstore. You get the actual book, and the added benefit of supporting your local bookstore.

2022-07-08 18:43:28 You can tell you have a counterfeit copy if you aren't able to register it on the Manning website to get the ebook version.

2022-07-08 18:42:51 The counterfeit copy is pretty similar to the original, except it uses an early version of the content (it even has pre-production colors on the cover). It also uses lower quality paper (much less thick) and is cut slightly smaller. It also makes no money for Manning or me.

2022-07-08 18:40:34 We have reported this multiple times. There has been no action. Please *do not buy my books on Amazon*. You aren't buying the actual book. Buy from the publisher Manning instead. Here: https://t.co/LvbEy5A0k8

2022-07-08 18:39:34 For instance, if you go to the page of DLwP2 on Amazon, you see that it's being sold by a 3rd party seller named "Sacred Gamez". If you click "buy", you won't get the actual book from Manning. You get a low-quality counterfeit printed by the fraudulent seller (from the book PDF). https://t.co/qAD2rg5a00

2022-07-08 18:37:48 Amazon has a book piracy problem. Besides the issue below (book content getting repackaged as a different book), there is an even bigger issue: sellers of counterfeit books. Amazon lets anyone say they're selling a particular book, and proceeds to route orders to their inventory. https://t.co/00bs6FGWQM

2022-07-07 21:16:46 KerasNLP has been really shaping up lately.

2022-07-07 21:14:41 Written by @penstrokes75

2022-07-07 21:14:18 New tutorial on https://t.co/m6mT8SrKDD: an adaptation of the English-to-Spanish translation example from DL with Python using KerasNLP. Features the KerasNLP WordPieceTokenizer, TransformerEncoder/Decoder layers, and evaluation with Rouge metrics. https://t.co/NmjHGVV9ug https://t.co/fjOV51QxA6

2022-07-07 10:52:36 RT @daigo_hirooka: Keras(DDIM)CVPR'22

2022-07-07 04:26:28 RT @lmoroney: A really cool application of Keras and TensorFlow on mobile -- to help detect eye disease.https://t.co/iZt5UcSn7z

2022-07-06 19:30:39 In ML dev tooling, I feel like we're just starting to figure out the canon of how things should be done. Which means that most of the major advances will happen in the next 5-10 years

2022-07-06 19:26:49 It's easy to feel like all the cool stuff has already been done. In reality, there's never a shortage of value to create. The high-potential areas just move to a different part of the landscape.

2022-07-06 17:14:26 RT @davidADSP: Love this. Diffusion models are truly beautiful and the future of generative modelling - this is a fantastic tutorial to lea…

2022-07-06 16:37:10 RT @A_K_Nain: There is a reason why I consider https://t.co/qqPsB7ONAo as one of the best resource. Amazing tutorial!

2022-07-06 16:37:03 It was a pleasure to review this PR -- highly readable and well factored code.

2022-07-06 16:33:13 New tutorial on https://t.co/m6mT8SrKDD: generating images using denoising diffusion implicit models. Start experimenting with your own image generation models! https://t.co/e2o4BYcZcA https://t.co/wOlioLbqYj

2022-07-04 21:47:38 RT @Grady_Booch: Very true of software development

2022-07-04 21:33:20 If you want to go fast, stay focused on one thing, but if you want to go far, run as many experiments as possible

2022-07-04 19:20:47 Happy 4th to those of you in

2022-07-04 17:20:06 Blindly following rules without minding context is a surefire way to end up going against the intended purpose of those rules.

2022-07-04 17:16:40 @abhi74k @RisingSayak Trying to follow specific dogma in a context-free manner is a sure way to write bad code (and comments). I'll tell you what comments should convey: information that's useful to the reader. Describing what the code does is extremely useful when that isn't obvious to the reader.

2022-07-04 16:47:18 RT @RisingSayak: I'm sure the attached code snippet (courtesy: @fchollet) is quite known in the #keras community but doesn't hurt to talk a…

2022-07-03 20:30:37 @tdietterich @Sobheomiid Most researchers use the test (except in RL where it is customary to test on the training data). But even if they don't, at the meta level results get selected &

2022-07-03 18:40:07 This is pretty different from "concise code" or "clever code".

2022-07-03 18:38:43 Good code is code that minimizes system complexity. And system complexity is dominated by the complexity of the relationships between your system and everything it interacts with. Including people -- the system users and the system developers.

2022-07-03 01:19:38 @data_ev The sophistication of your validation method is an absolutely critical factor in winning Kaggle competitions. Iterated k-fold with shuffling is the way to go...
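[Editor's illustration, not part of the original tweet: "iterated k-fold with shuffling" means repeating a k-fold split several times, reshuffling the data between iterations, and averaging the scores across all splits. A minimal dependency-free sketch -- the function name and defaults are illustrative:]

```python
import random

def iterated_kfold(n_samples, k=5, iterations=3, seed=0):
    """Yield (train_idx, val_idx) pairs: k-fold CV repeated with reshuffling.

    Scores from all k * iterations splits are then averaged, giving a
    lower-variance estimate than a single k-fold run.
    """
    rng = random.Random(seed)
    for _ in range(iterations):
        indices = list(range(n_samples))
        rng.shuffle(indices)           # fresh shuffle per iteration
        fold_size = n_samples // k
        for i in range(k):
            val = indices[i * fold_size:(i + 1) * fold_size]
            train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
            yield train, val

splits = list(iterated_kfold(100, k=5, iterations=3))
# 15 splits total; each split partitions the data into disjoint train/val sets.
```

In practice scikit-learn's `RepeatedKFold` implements the same idea; either way, the test set you report on must stay entirely outside this loop.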

2022-07-03 01:02:04 The goal being to ship the best possible model to production, as opposed to shipping a table of numbers

2022-07-03 01:01:27 In applied ML, you want to use the most accurate test method possible -- carefully crafting (and regularly updating) a separate test as close as possible to the prod data, and entirely separate from the validation sets used to tune the model

2022-07-03 00:59:43 One of the biggest differences I've seen between research and applied ML: in research, most people tune their hyperparameters on the test set to achieve the highest possible score vs. other approaches in the paper's results table

2022-07-02 21:22:05 @ChrSzegedy Wait, humans have a training set? What are the targets and the loss?

2022-07-02 20:19:05 RT @mhviraf: Deep Learning with Python by @fchollet. The third book I thoroughly read this year out of the many I started but never finishe…

2022-07-02 16:52:46 Training a self-supervised model on a dataset is a bit like populating the index of a database. It's the data that's doing the work. The model makes its structure query-able.

2022-07-02 16:50:27 If new advances can be made by training the same architecture on a new dataset, then surely the goal of ML research should now be to craft new datasets?

2022-07-02 03:26:32 I still think the best things to build are things that enable other people to build more/better things. At scale.

2022-07-01 23:55:35 *opening terminal* it's time to BUILD

2022-07-01 22:11:35 @amcafee In many ways fighting climate change requires strong economic growth

2022-07-01 20:22:22 New KerasNLP release! Check out the features list. https://t.co/tIbQaNYsX1

2022-07-01 12:22:16 @stanislavfort Also, if you do understand what curve-fitting implies and what the alternatives are, then you realize that this fact is very much the crux of the problem of where AI is going.

2022-07-01 12:20:41 @stanislavfort Your analogy doesn't work at all: ML is engineering, not a field of science, and the corresponding analogy would simply be "ML is just math".This is actually like saying, "pottery has become really good at shaping clay rotating on a turntable".

2022-07-01 05:36:49 Our field has become exceedingly good at fitting curves to beat benchmarks.

2022-06-30 20:14:02 New tutorial on https://t.co/m6mT8SrKDD: text classification using FNet. Another example featuring the new KerasNLP package! Created by @penstrokes75 https://t.co/tlVKVxKjhj

2022-06-30 16:40:12 Nature Scientific Reports: a deep learning model for analysis of ophthalmic images, deployed to a mobile app to enable field diagnosis. Built with Keras and TensorFlow. https://t.co/T4UCDE1TZ7

2022-06-30 05:05:32 RT @penstrokes75: KerasNLP 0.3.0 is out! It's been one heck of a ride contributing to KerasNLP. I'm particularly excited about two major fe…

2022-06-30 04:46:18 RT @penstrokes75: On the KerasIO front, we've added a couple of interesting examples using KerasNLP: Text Classification with FNet (https:/…

2022-06-30 01:51:40 RT @fchollet: In MLPerf 2.0, TPU v4 set performance records on 5 benchmarks. This was achieved on publicly-available Google Cloud VMs. The…

2022-06-30 01:51:32 RT @AGRamesh13: @fchollet Intel Habana submissions also used TensorFlowhttps://t.co/9PJFQe2xQo

2022-06-29 21:56:20 In MLPerf 2.0, TPU v4 set performance records on 5 benchmarks. This was achieved on publicly-available Google Cloud VMs. The models all used TensorFlow. https://t.co/dnt3jGOrl8 https://t.co/VwV0v7HZfk

2022-06-29 04:42:08 In the particular timeline I come from, the tip of Pikachu's tail is black (hence why it's drawn like this in my profile picture).In *this* timeline however, it's just plain yellow.Who else remembers the black tip? https://t.co/6wJTQ9F2rq

2022-06-29 01:01:13 @cypher_text Well many researchers do use Keras. It's popular especially for DL applied to science.

2022-06-28 18:29:16 @runT1ME It's the feature set and the UX. If JAX had the same feature set / UX as TF, then it would not be a good research framework, and researchers would move elsewhere.

2022-06-28 18:24:35 IMO it's a key strategic advantage to have framework differentiation between research and production. It enables each side to be more focused and to do the best possible job for its use case and user base.Trying to be everything for everyone is not a great strategy.

2022-06-28 18:23:16 - Google is investing in both frameworks. - Google is making them increasingly interoperable, so you can bring research to production more easily

2022-06-28 18:22:15 If you're interested in the relationship between TensorFlow and JAX, check out this post. In summary: - JAX fits the needs of researchers (which are in most ways opposite to the needs of applied ML, especially in production). - TF/Keras fits the needs of applied ML, prod, mobile... https://t.co/kvyHwXO2X6

2022-06-28 16:20:44 RT @TensorFlow: We're thrilled to see that TensorFlow is the most used and wanted ML tool in the recent Stack Overflow Developer Survey!…

2022-06-28 00:33:52 It's certainly better for companies to do this than not do it, but ultimately, large corporations providing access to fundamental rights as a "perk" (including healthcare) is not a viable long-term solution. It also increases people's dependency on large corporate employers. https://t.co/sIhn4ZfUZ1

2022-06-26 21:57:24 RT @imbernomics: The COVID-19 Vaccine was one of mankind’s greatest accomplishments. It saved a World War’s equivalent of lives.

2022-06-26 18:35:03 One reason why children start understanding language long before they can use it is that speech is extraordinarily difficult to produce. It requires remarkable motor coordination and precision. https://t.co/2zcRNy4eTJ

2022-06-25 23:41:57 The human ability to perform extreme generalization is backed in part by a drive to experiment. When my 1 year old is presented with two similar new objects A &

2022-06-25 22:46:44 RT @tzimmer_history: It’s terrifying to live in a country where any revelation about how the former president tried to abolish democracy ca…

2022-06-25 16:28:04 @kakarottoETH @AdamSinger @levie Google users gain value from Google products without buying any stock. And they gain the same value as all other users. The stock price actually reflects business success...

2022-06-25 16:24:49 @bheshaj @levie And you could achieve it with a DB and an API if there were any interest from companies in implementing it (but why would there be?)

2022-06-25 16:23:52 @bheshaj @levie That... entirely defeats the point of loyalty programs (which is lock-in)

2022-06-25 16:22:09 @AdamSinger @levie You can't build a mainstream foundation on top of something that exponentially rewards early adopters and that gets increasingly expensive to join as adoption grows. It's self-defeating and inevitably ends up deflating.

2022-06-25 15:51:24 RT @drgurner: In your company, there are passengers and there are drivers. Nothing better than finding great drivers.

2022-06-25 04:13:35 Nothing is static, everything changes. But some things change faster than others. Culture changes slowly. Human nature is even slower.

2022-06-25 00:04:20 RT @FaisalAlsrheed: KerasNLP: KerasNLP is a simple and powerful API for building Natural Language Processing (NLP) models within the Keras…

2022-06-24 22:34:27 Many people believe that anti-abortion activism originates from christian religious beliefs. That's a complete smokescreen -- like believing pro-gun activism is rooted in constitutional originalism. It's 100% about control. In the US, Saudi Arabia, Poland, Japan, everywhere. https://t.co/nHrRPSoryj

2022-06-24 17:40:13 Buff that limbic system. https://t.co/dKtDWVQts6

2022-06-24 15:50:14 Your rights are never definitively acquired, never clearly safe. They need constant defense.

2022-06-24 15:12:39 RT @random_walker: I've been tweeting for over a decade yet I have to remind myself of this every day and I still can't bring myself to hit…

2022-06-24 02:18:40 Language isn't just words per se. It's how we use our affordances -- all of them -- to communicate with each other and influence each other's behavior. As AI gives us new affordances, it will also alter and expand our language.

2022-06-24 02:15:34 Now, everyone will soon be able to use AI to generate images on command. Inevitably, this new capability will also give rise to a new mode of expression, a powerful new language extension. And this won't be done by today's AI researchers. It will be the teens. As always.

2022-06-24 02:15:33 When we started being able to exchange electronic text messages, emojis quickly emerged as a new mode of expression -- an extension of natural language

2022-06-24 00:38:45 People with expert knowledge are often reticent to talk/blog about it, either because they think it's obvious stuff (but for most people it isn't!) or because they think it's boring (but many people would love to hear about it!). Share your knowledge!

2022-06-23 23:03:44 @GreatFate4 Fact: In every user test we ever ran, Keras scored far higher than every other alternative in developer satisfaction and development velocity. Stop the propaganda. Look at the data.

2022-06-23 21:24:23 @simonw At the time this comic was published, it was already feasible for a single specialist engineer in a couple of weeks. It was far from easy though.

2022-06-23 20:49:53 You can imagine that frameworks being adopted by learners, or frameworks that devs say they want to adopt, will keep growing going forward. This data shows TensorFlow as the highest-momentum tool in the ML space today.

2022-06-23 20:48:48 1. The "wanted" section, showing what technologies developers *want to work with next* (when they aren't already). TensorFlow tops the list. 2. The "popular among people learning to code" section. TensorFlow is also ranked highly there. https://t.co/23lGeOJsv2

2022-06-23 20:46:48 This survey tells you what framework adoption looks like *today* -- not where it's headed. But it does have a couple of data points that may be predictors of the future... https://t.co/fqvzt2teRc

2022-06-23 19:45:30 You can tell you're used to Twitter (or other text-based social apps) when you tend to qualify every statement you make (not literally every one, but many of them) to preempt the various ways in which you'll get misinterpreted by your audience (actually only a small subset of it)

2022-06-23 12:16:13 "Imagine walking down the beach, not picking up your phone to see a notification, just seeing it in the corner of your eye" The notification just says, "welcome to hell" https://t.co/ZLJgLuvCHV

2022-06-23 11:55:46 @PiotrCzapla TensorFlow has a user support forum, a mailing list for user discussions, and triages a lot of support questions on GitHub.

2022-06-23 11:54:21 @PiotrCzapla This is a large-sample-size industry survey; it had no relation to what kind of questions people ask on StackOverflow.

2022-06-22 23:52:15 RT @Capofwesh: As a newbie to the field of ML/DL, Deep learning with python (2nd edition) by @fchollet proved to be the best book to start…

2022-06-22 21:13:04 Results link: https://t.co/qONS0qJQhI

2022-06-22 21:12:19 IMO the fact that "TensorFlow" and "Keras" are treated as two separate entries instead of a single TF/Keras entry is diluting scores for both, because many people who use both click on one but not the other. Today ~90% of TF workflows are really Keras-based...

2022-06-22 21:10:51 Today, the results of the 2022 global developer survey (run by StackOverflow) just went out.TensorFlow is used more than any other ML tool -- 12.95% of all developers use TensorFlow (slightly more than sklearn at 12.59%, and 1.5x more than PyTorch at 8.61%). https://t.co/znrcSHb8Zl https://t.co/hn3gMuVtCG

2022-06-22 16:01:44 RT @lizoratech: The ecosystem of on-edge learning is important too. For us, tf.js provides a clearer path to merge with CoreML on IOS envi…

2022-06-22 16:01:41 RT @gusthema: I liked François' thread because he phrased what I think (way better then I would)And he didn't even mention on-device ML a…

2022-06-22 15:19:36 RT @quiteconfused: @fchollet Honestly, I have to agree with @fchollet . Pytorch and jax ( and tfv1 ) just don't do it for me ( as a lead in…

2022-06-22 06:20:37 RT @jejjohnson: I really liked the slides by @fchollet talking about designing DL frameworks. It opened my eyes to some of the difficulties…

2022-06-22 06:20:24 RT @jejjohnson: A good take. The requirements of production and research are fundamentally different. Something to keep in mind when doing…

2022-06-22 05:09:50 @erfannoury Yeah I personally disagreed with the direction (and breakages) at the time. Anyway nowadays TF can be easily compiled to XLA and is much more performant.

2022-06-22 05:02:19 RT @luke_wood_ml: ReefNet is a RetinaNet implementation written in pure Keras developed to detect Crown-of-Thorns Starfish on the Great Bar…

2022-06-22 04:58:19 @erfannoury TF has more users (and more team members) than at any point in its history. People have been saying TF was doomed since 2017. User base has grown ~10x since. Chill.

2022-06-22 03:29:35 @abhi74k Give it a try! Keras in particular. Hard to go back once you've used it.

2022-06-22 02:43:42 RT @yisongyue: My work at Argo AI uses TF for autonomous driving. My work with JPL uses TF for rover autonomy. PyTorch has been great for…

2022-06-22 01:26:45 @kosigz Either sounds fine -- that's big either way.

2022-06-22 01:09:48 @cj_battey So then Keras is Scala?

2022-06-21 18:46:32 RT @ecsquendor: Useful distinction between shortcut rule and Goodhart's law. The former is about the metric being gamed and becoming useles…

2022-06-21 18:38:42 @ecsquendor Exactly -- one is about the effect of metric-driven system design on the metric, the other is about the effect on the system. Both matter. If you're not careful about how you leverage your metrics, you end up with meaningless metrics *and* a flawed system that misses the mark.

2022-06-21 18:22:31 @RisingSayak @carted Congrats!

2022-06-21 03:23:38 @jackclarkSF Take care, hope you get well soon!

2022-06-21 03:11:29 RT @RisingSayak: How can we train a model to minimize the number of bits needed to represent input samples? What are the concepts involved…

2022-06-21 01:13:21 @idavidrein Google Cardboard with an old Android phone is the only time I actually had fun in VR

2022-06-21 01:08:03 A big issue with this idea is that human "reality" is highly multimodal. Vision alone is not enough to sustain the illusion. https://t.co/lmRGA8GbCk

2022-06-20 23:58:02 RT @karlhigley: This is both why “making the number go up” isn’t sufficient in recommender systems and why reward function design in RecSys…

2022-06-20 23:11:41 An important thing I learned in the past few years is that the best way to deal with online harassment (either harassment campaigns or individual incidents) is not to give it any oxygen. Just ignore them completely.

2022-06-20 23:04:50 Don't compute what you can measure.

2022-06-20 23:03:46 @mwfulk Definitely. Even if you end up creating a system that learns everything from the data, having domain knowledge still helps you ensure you're making the right choices. So if you have access to domain experts, ask them as many questions as possible before you get started.

2022-06-20 22:57:22 The ideal ML model is as lightweight and unsophisticated as possible (i.e. easy to develop, reliable, performant). So when feasible, offloading work to the features themselves is always a great idea.

2022-06-20 22:55:29 One of the ML mistakes that will hurt you the most: not collecting the right features. If the information you need isn't in your data, you won't recover it with a better model.
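
A concrete (hypothetical, not from the tweets) illustration of "offloading work to the features themselves": encoding hour-of-day as a periodic feature, so the model doesn't have to learn that 23:00 and 00:00 are adjacent. A minimal Python sketch:

```python
import math

# Hypothetical example: a raw hour-of-day feature hides the fact that
# 23:00 and 01:00 are close. Encoding the hour on a circle offloads
# that knowledge into the feature itself.
def cyclical_hour(hour):
    angle = 2 * math.pi * hour / 24
    return (math.sin(angle), math.cos(angle))

print(cyclical_hour(23))  # near (-0.26, 0.97)
print(cyclical_hour(1))   # near (0.26, 0.97) -- adjacent on the circle
```

With this encoding, even a simple distance-based model sees midnight-adjacent hours as similar, without having to learn the wrap-around.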

2022-06-20 22:31:44 @robertskmiles Goodhart's Law is about measurement and bias. It says that if you start optimizing for a metric, then it becomes a biased metric. The shortcut rule says that if you follow a narrow metric, you get a system that achieves the goal but misses all other aspects of the problem.

2022-06-20 22:20:54 "An effect you see constantly in systems design is the *shortcut rule*: if you focus on optimizing one success metric, you will achieve your goal, but at the expense of everything in the system that wasn’t covered by your success metric."

2022-06-19 22:31:19 @lwhittle7 Definitely not. JAX is for researchers, TF/Keras is for engineers. Though we want both to be compatible with each other, for research-to-production flow. At Google at least the set of people attracted by Flax and those that use Keras are disjoint.

2022-06-19 21:04:28 @BehavioralMacro What are your odds that last Thursday was the lows of the year?

2022-06-19 20:31:10 @PhilsburyDoboy It is hard to estimate, but we can approximate it from docs traffic and industry surveys. It passed 2M last year and is probably around 2.2-2.3M today. Definitely much lower than 10M, but still about half of the total addressable market.

2022-06-19 20:20:05 Over May/June, Keras has crossed 10M monthly downloads for the first time, at the same time as https://t.co/m6mT8SrKDD traffic reached a new all-time high. https://t.co/rN2mMJ07k5 Congrats to the team and the Keras community :)

2022-06-17 21:16:22 @dr_becker Exactly.

2022-06-17 21:10:58 @Smark_phd Yes.

2022-06-17 21:09:55 In a way, we aren't far from human-level AI, because the *distance* separating our current state from our goal is short. But to close that distance, you have to move in the right *direction*. We haven't yet started.

2022-06-17 21:07:58 The scale we're talking about here is not really massive either. There are only about 150,000 cortical columns.

2022-06-17 21:06:51 I believe that most of the magic of the brain comes from a set of relatively simple principles, operating on a very large scale. Scale is critical. But you have to scale the right thing. We have not found said principles yet.

2022-06-17 19:49:42 @A_K_Nain @SingularMattrix Are you using Flax or Haiku?

2022-06-17 05:47:01 @prem_k @dmonett As a matter of fact, yes they would. Neither crawling nor walking are culturally acquired.

2022-06-17 05:20:51 @dmonett There is absolutely a third option, which is the case where the acquisition mechanism is hardcoded but the skill itself is learned (via this mechanism). E.g. we are hardcoded to learn to walk on two legs. You still have to learn to do it (and it takes a while).

2022-06-17 02:01:07 @ilyasut Time for a 175 trillion parameters Transformer

2022-06-16 16:05:51 IMO this is the 2022 bottom, or not too far from it. (This is not financial advice.)

2022-06-15 19:51:24 I am become 300k bluecheck, the poster of takes

2022-06-15 18:36:03 The authors take a CNN (a Keras model, always nice to see!) and encode it on a photonic chip. It can classify images through direct processing of optical waves as they propagate through the chip's neural layers. It achieves an inference speed of 1.7 billion images per second! https://t.co/a7YVjfUJ8A

2022-06-15 18:33:43 To gain orders of magnitude in speed and efficiency for deep learning models, we need to investigate computing paradigms beyond traditional binary transistors. One approach I'm optimistic about (for inference specifically) is photonics. Here's an example: https://t.co/SqjFZ9p4Ws

2022-06-15 01:12:29 Conversely, the higher the price rises, the more electricity the network will consume.

2022-06-15 01:11:57 Bitcoin mining is a market, so the cost of the electricity consumed by the Bitcoin network is roughly equal to the value of the rewards paid out to miners. When the price of BTC goes down, the network adapts (i.e. miners take hardware offline) to maintain positive margins. https://t.co/mVepYHBefB
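
The equilibrium described in the tweet can be sketched with back-of-the-envelope arithmetic (all figures below are invented for illustration):

```python
# Invented figures for illustration only.
block_reward_btc = 6.25      # assumed block subsidy
blocks_per_day = 144         # ~one block every 10 minutes
btc_price_usd = 20_000       # assumed BTC price
power_share = 0.9            # assumed fraction of revenue miners spend on electricity

daily_rewards_usd = block_reward_btc * blocks_per_day * btc_price_usd
daily_power_usd = daily_rewards_usd * power_share
print(daily_rewards_usd)  # 18000000.0
print(daily_power_usd)    # ~16200000.0
```

Halve the price and the sustainable electricity spend halves with it; that is the adaptation mechanism (miners taking hardware offline) the tweet describes.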

2022-06-15 00:47:15 The future = having to constantly question whether any image, article, video you see originates from the latent space of an AI

2022-06-14 23:42:59 @elonmusk @karpathy Andrej you up for showing me the tech next time I'm in Palo Alto?

2022-06-14 19:36:56 The greatest gap in the universe is between reality and our ability to make sense of it.

2022-06-14 18:17:35 It's easy to dunk on greed-driven pyramid schemes when they're down 70+%. Which is why I haven't been doing it lately. I was doing it when they were at all-time high. Because that's the exact time this kind of pushback is needed. https://t.co/x1blZK1OKM

2022-06-14 18:01:55 Mass deployment of self-driving cars was less than 5 years away in 2016. Then in 2021 it was 2-3 years away. In 2024 it will be 1 year away. Zeno's AI milestone.

2022-06-14 05:34:20 @divideconcept It makes perfect sense: the skills or abstractions they develop are very general, so every time they gain a new one it significantly increases their capabilities across a broad range of actions/situations and increases the scope of things they can learn next

2022-06-14 04:12:22 "In 1959, during the Cod War...""The Cold War. You mean the Cold War.""I meant what I said" https://t.co/EqSdEnfeIH

2022-06-14 01:43:10 Tried out an Amazon Go no-checkout store. Walk in, scan your code, grab stuff (under the watchful eye of dozens of cameras), walk out. Pretty smooth!The fact that it takes >

2022-06-14 01:25:55 And this, interestingly, relates to the phrase (I think from @AndrewYNg) that "if your brain can do it in less than a second, a DNN can probably do it" (paraphrasing). You can look at the number of firings in 1 second (bits) and compare it to DNN memory consumption (also bits).

2022-06-14 01:22:32 For a DNN, that's all the intermediate activations computed for a given input sample. And this is closely related to parameter count! For a brain, you could use the approximation "spike = binary event" and integrate the number of spikes over the duration of task solving.

2022-06-14 01:21:10 All computation works by turning input information into an intermediate representation, repeatedly, until you get the output. So a good way to quantify a computation is to measure the memory needed to store all intermediate representations involved in it (compressed).

2022-06-14 01:17:58 The parameter count in a deep learning model doesn't relate in any way to neuron count in a brain -- there is no analogy to be established there. At all. Yet I do think parameter count is actually a good benchmark. You just need to relate it to the right biological quantity.
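
A toy sketch of the comparison the thread suggests: counting parameters vs. the size of the intermediate representations for one input sample in a small MLP (layer sizes are invented):

```python
# Invented layer sizes for a small MLP: input -> two hidden layers -> output.
layer_sizes = [784, 512, 256, 10]

# Parameter count: a weight matrix plus a bias vector between consecutive layers.
params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Intermediate representations for one sample: every layer's activations.
activations_per_sample = sum(layer_sizes)

print(params)                  # 535818
print(activations_per_sample)  # 1562
```

Activation memory scales with the layer widths, the same quantities that determine parameter count, which is the "closely related" link the thread points at.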

2022-06-14 01:06:14 This is correct -- the No Free Lunch theorem is sometimes misused to make contrarian statements about the capabilities of certain algorithms or even that of human intelligence. In reality it is not applicable in these contexts. https://t.co/p3JkL17qDY

2022-06-14 00:59:33 Another thing to note is that the large majority of brain activity is unconscious and related to interfacing with the world (perception and motor control). The conscious reasoning part is minuscule, the tip of the iceberg. And yet it is incredibly powerful.

2022-06-13 23:22:14 A program synthesis approach (most effective kind of ARC solver developed so far) has to look at millions of different programs to find candidate solutions. The brain doesn't do that -- it just doesn't have the resources to consider so many possibilities, much less simulate them.

2022-06-13 23:20:08 And a lot of that would just be about turning the visual signal into a symbolic representation. The amount of symbolic processing going on is minimal. Yet you can do it. While no machine can.

2022-06-13 23:17:16 The number of neurons involved in solving an average-difficulty ARC task (out of reach for any LLM today) is probably a couple of billions, over a period of a few seconds. You're looking at hundreds of billions of firing events at most. At the cost of a fraction of a calorie.

2022-06-13 23:14:38 The brain's energy consumption is similar to that of a very bright LED bulb -- or 1/5th of that of a standard incandescent bulb. It's not exactly a data center.

2022-06-13 23:13:27 To put the "scale" narrative into perspective... The brain runs on 15 watts, at 8-35 hertz. And while we have ~90B neurons, usually only ~1B are active at any given time. The brain is very slow and does a lot with very little.
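
A quick arithmetic check of the figures in this thread: 15 W for a few seconds really is a fraction of a dietary calorie (the 5-second task duration is an assumption):

```python
# Figures from the thread: ~15 W whole-brain power draw, a few seconds per task.
power_watts = 15
task_seconds = 5          # assumed "a few seconds"

joules = power_watts * task_seconds   # 75 J
kcal = joules / 4184                  # 1 dietary calorie (kcal) = 4184 J
print(round(kcal, 4))                 # 0.0179
```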

2022-06-13 19:00:09 Ex Machina (2015) https://t.co/4Ps8d7EWaV

2022-06-13 15:56:21 Just released by @nathanbenaich: a crowdsourced database cataloging 143 university spinouts. If you're in academia and thinking of starting your company, this is a must-read. Lots of great insights. https://t.co/HCeWepF0Ad

2022-06-12 21:34:20 As they say, if you don't schedule maintenance for your car, it will schedule it for you. Applies to people too. Discipline your life or your life will discipline you.

2022-06-12 20:45:44 "But wouldn't that distract from the research focus?" -- not at all, if it's actually general, then you don't need to spend resources on special-purpose integrations. Just make a general-purpose API available to serve your models. Easy to maintain with 10% of your staff.

2022-06-12 20:43:43 I think it's a pretty big red flag if someone tells you that they're halfway to human-level AI but need to raise $50B from you to really get there. If they had a half-AGI tech they could just print money by automating processes across every industry.

2022-06-12 20:42:11 You will tell the difference by looking at the real-world economic impact of the tech. But do note: this is *very different* from looking at valuations or amount of money raised.

2022-06-11 19:28:21 @JoshMiller656 It's definitely worth learning to use https://t.co/oiMJsLvdrt!

2022-06-11 19:27:40 @wandedob Couple of months

2022-06-11 18:33:57 What's the largest dataset you trained a Keras model on? If you didn't use https://t.co/oiMJsLvdrt and tf.distribute, what was your data pipeline &

2022-06-11 01:14:46 When you ask a question on Twitter, you might get better answers if you append "let's think step by step" to your tweet.

2022-06-10 23:27:00 Tech should serve user needs. It should not serve its own needs. Even if you have a CS PhD, you should design systems not just for the sake of system design, but with user needs in mind.

2022-06-10 21:33:13 webℵω

2022-06-10 21:29:27 Why stop at 5 or 6 or 10. Go straight to web∞

2022-06-10 20:29:46 From my publisher, 45% off Deep Learning with Python today: https://t.co/LvbEy5A0k8 https://t.co/Zr7NMLOgry

2022-06-10 16:15:24 You could say he solved the alignment problem

2022-06-10 14:49:55 My 1 year old (13 months) is now able to reliably plug in a USB-C charger cable, I am impressed

2022-06-10 04:45:10 @shingworks 100%

2022-06-10 04:19:57 RT @jaschasd: After 2 years of work by 442 contributors across 132 institutions, I am thrilled to announce that the https://t.co/wezEGzDEHt

2022-06-09 20:55:24 You are 100% correct. You only perceive the goalposts as moving if you misinterpreted what task-specific benchmarks were supposed to measure in the first place. Paragraph on chess: https://t.co/CIBrZKdDik https://t.co/6hbfHhyD1Y

2022-06-09 20:37:01 @Plinz Asimov did it first, though.

2022-06-09 02:20:28 @Plinz Joscha's basilisk

2022-06-09 01:27:20 We're hardwired to think like this. It's our theory of mind. But painting a smiley on a rock does not make it "happy".

2022-06-09 01:24:08 This is the origin of the "AI effect", the claim that goalposts are moving when folks point out that achieving task-specific skill on more tasks did not move us any closer to generality. https://t.co/9w9XeXOgRD

2022-06-09 01:21:03 A pretty common fallacy in AI is cognitive anthropomorphism: "as a human, I can use my understanding of X to perform Y, so if an AI can perform Y, then it must have a similar understanding of X".

2022-06-08 04:55:13 @ChrSzegedy @hardmaru @slatestarcodex If you can't describe what output you'd expect the AI to produce for a given input, then you don't have a "task" in the sense where I'm using the term. Only FSD qualifies as a task here.

2022-06-08 04:49:04 @scikud @hardmaru @slatestarcodex If you want a formal definition, I have a paper about it. It is not exactly data efficiency.

2022-06-08 04:37:20 @hardmaru @slatestarcodex You're getting the definition of "task" wrong -- you mean it to say "anything I want", but in AI a "task" is something you can precisely define. And if you can define the task then you can automate it. "Solving unanticipable tasks" can't be defined, so you can't call that a task.

2022-06-08 04:26:47 @hardmaru @slatestarcodex In that sense ARC is not a "task". It is a task generator. A non-predictable one. That's exactly what makes it a benchmark of intelligence.

2022-06-08 04:23:02 @hardmaru @slatestarcodex The definition of ARC is that you get tested on tasks that you cannot anticipate. Every task in ARC is new. It is a game that you cannot practice for. It is not about being "difficult". It is about producing novelty.

2022-06-08 04:19:25 @hardmaru @slatestarcodex As long as you seek to achieve task-specific skill, then you will find ways to achieve it for your tasks of choice, without demonstrating *any* intelligence (or generalizable cognitive abilities). For the most part this has been the history of AI as a field.

2022-06-08 04:17:36 @hardmaru @slatestarcodex Intelligence is the ability to efficiently pick up skills at new tasks you have not been prepared for. That's what's hard, that's what machines can't do. And of course no specific task can represent a "bar" for intelligence. By definition. If you fix the task then you can take shortcuts.

2022-06-08 04:14:58 @hardmaru @slatestarcodex This is an extraordinarily obtuse take. Of course you can make a machine do any task you want -- as long as the task is defined in advance. It is "a simple matter of programming" (or training, nowadays). And it is 100% orthogonal to intelligence.

2022-06-08 00:59:55 @miguelisolano @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis Agreed. I've seen a paper claiming to solve ARC at 80% despite actual test set performance of ~0. Cherrypicking test samples can make your model achieve whatever you want it to. We need to flesh out the success criterion more... formally.

2022-06-07 23:53:39 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis Also I'll bet you a bottle of Hibiki Harmony >

2022-06-07 23:45:50 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis Ok let's go with that date :) 4 years, it's still a long term bet.

2022-06-07 23:38:05 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis Oh actually that's V2. We should go with June 2026 for the 10 year anniversary.

2022-06-07 23:37:16 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis January 26, 2027. Picked because it's exactly 10 years after DeepMath hit arXiv.

2022-06-07 23:33:56 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis With a deadline of 2027 or sooner I would bet against it. 2029 is outside my risk threshold.

2022-06-07 23:30:59 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis What's the criterion?

2022-06-07 23:25:02 @ChrSzegedy @GaryMarcus @MelMitchell1 @ErnestSDavis Like all AI problems this is not a binary. Different inputs will be more or less difficult, and the system will have a non zero error rate no matter how advanced. You need to define the bar for success clearly.

2022-06-07 19:32:15 @ben_davidson8 @DynamicWebPaige Longer than many people think. But less than 100 years. A few decades.

2022-06-07 19:13:58 @DynamicWebPaige If you want computers to *think with you*, then you're going to need computers that think. That will take a little while.

2022-06-07 17:31:46 The debugging strategy formerly known as "prints"

2022-06-07 03:59:40 We like agent-centric, action-filled descriptions of the world. It's how we're wired to parse information.

2022-06-07 03:58:00 Some say that most devs only find imperative languages intuitive because that's what everybody learns, and find Prolog weird because they're not used to it. I think it's the reverse: imperative languages became mainstream because they better fit our natural thinking processes.

2022-06-07 02:07:43 Work work work

2022-06-06 15:45:14 End of an era. Thank you Anthony and Ben for everything you've done for the ML community! Kaggle has had an immense influence on the ML world. Excited to see what you build next! https://t.co/gSdiHkyJst

2022-06-06 01:23:12 Sandwiched between the meme "me and the boys at 2am looking for beans" and something about a generally intelligent humanoid robot going on sale by the end of the year

2022-06-06 00:48:41 If you plot EM's tweets on a graph and extrapolate from the trend, he's 47 days away from tweeting that global warming is a liberal hoax

2022-06-05 21:13:46 There are still so many library APIs that assume end users are aware of how the library is implemented... remember, APIs should be about what the user wants to do, not about the library's internals. Only the library implementer cares about that.

2022-06-05 20:29:54 If you want change in your life, you need to start by changing one or the other. And as it turns out it's much easier to change the things you do. Always start with the grounding layer.

2022-06-05 20:28:33 What you think shapes what you do, and what you do shapes what you think. Intellectually-oriented people are highly aware of the former but often underestimate the latter.

2022-06-05 19:25:26 @nbukhari96 These models have clear applications IMO

2022-06-05 18:50:29 @MathesonZander It's an incentive structure issue.

2022-06-05 18:49:25 This phenomenon is a big reason why the field has been thoroughly avoiding addressing hard, important problems. There are easier things you could do... and these may lead to shiny demos.

2022-06-05 18:47:16 ML research, even at big tech companies, is weirdly disconnected from high-value applications. It feels like a local optimization process that moves towards "what's the easiest thing we can do next?" rather than "what's the highest-impact thing we can do next?"

2022-06-04 18:54:32 @A_K_Nain Glad you enjoy the book :)

2022-06-04 18:54:13 RT @A_K_Nain: No matter for how long you have been doing ML, go read @fchollet Deep Learning with Python, 2e. Refreshing content all the wa…

2022-06-04 16:35:17 Untitled118.ipynb https://t.co/uNbY1fIitD

2022-06-04 00:31:34 @levie 35% of all fraud (by $ amount?) is an incredible number. Crypto has truly found its product-market fit and disrupted the online scam space.

2022-06-03 18:38:16 We're recruiting participants for the 1st European @GoogleDevEurope ML bootcamp! If you're an aspiring ML dev, you can apply here before June 20: https://t.co/awHCAX4Jgt

2022-06-03 18:02:35 This is happening now! https://t.co/UK7PQXKwmP

2022-06-03 16:21:01 No algorithm will save you if all content creators are following individual incentives that run opposite to our collective goals.

2022-06-03 16:20:44 Over a short time scale, the problem of surfacing great content is an algorithmic problem (or a curation problem). But over a long time scale, it's an incentive engineering problem. You see this with search engines and virtually all content platforms.

2022-06-02 03:25:53 Reasoning enables you to solve problems without requiring deep experience. If you understand the rules explicitly, you can generate language without having thousands of hours of practice. But of course it's much more convenient (lower energy expenditure) to do things intuitively.

2022-06-02 03:23:22 This also serves to illustrate the usual tradeoff of doing something via intuition / pattern recognition: you need a lot of experience to get it right. A speaker with low experience will need to think about the rules they're applying. They don't have the intuition.

2022-06-02 03:19:30 You can produce structurally correct language 100% intuitively, unconsciously -- it doesn't require any reasoning. The part you have to think about is *what you're trying to say*, the meaning of your speech. "Where your sentences are going", in a sense.

2022-06-02 03:17:08 I always thought that you should be able to manipulate the structure of language via a pattern recognition type system -- based on the observation that fluent speakers of a language never have to think about the grammatical rules they're applying or the words they're picking.

2022-06-01 21:54:38 RT @haifeng_jin: Keras community meeting is happening this Friday! Anyone can join. We will share the latest news in the meeting. Meet…

2022-06-01 18:55:41 Also, IMO all of our relationships are built on the basis of mutual respect and mutual interest. This includes CEO/employee relationships. Avoid bosses that use disrespectful, one-sided, authoritarian tones.

2022-06-01 18:54:49 Remote work involves complex tradeoffs. IMO companies that will make the biggest productivity gains are those that understand these tradeoffs and show flexibility. https://t.co/yNXAqvgGcv

2022-06-01 17:22:14 Now try formulating such a bet for "AGI". You'll start seeing how fundamentally murky and ill-defined "AGI" is.

2022-06-01 17:20:40 If you want to make a bet on no-code tools and program synthesis, you could bet on the fraction of top-1000 apps in the App Store in 2032 for which the developers didn't have to write any code.

2022-06-01 17:19:22 An obvious example is self-driving cars: many people would argue we had the "capability" in 2016 or so. If you look at deployment, you see a different picture. I made one bet about self-driving cars, in 2018, and it was about a deployment outcome.

2022-06-01 17:18:02 I don't like to bet on future technological capabilities, because it's virtually impossible to measure "capabilities" objectively. I prefer to bet on specific deployment outcomes.

2022-06-01 16:56:02 Productivity is basically about realizing that each opportunity you're given is unique and may not come back. Make the best of it. And every minute of your day is an opportunity.

2022-06-01 15:20:08 A probabilistic programming language that can be used for procedural content generation! https://t.co/8Mz0e0eJEH

2022-06-01 02:43:02 What's the most relatable song lyrics you can quote?

2022-05-31 01:40:35 @hhm The inverse of data compression

2022-05-31 01:04:19 Also the "most pictures will be edited to make you look good" trend started in 2016 for most of the world. Doesn't require a very capable AI to work well enough.

2022-05-31 01:00:43 If you're going to "generate an essay from bullet points" (via an LLM), then I beg you, just share the bullet points. The rest of the words are clearly unnecessary. https://t.co/NVHFy1DPvK

2022-05-30 18:31:10 You wouldn't trust a random number generator to make decisions for you. So only listen to high-quality data sources. They're few and far between.And remember that you cannot "delegate a decision to the data". You can only use the data to inform your own (much bigger) vision.

2022-05-30 18:28:28 Most of the time your sources of data are biased or not statistically significant, but this detail is swept under the rug. And data is a multidimensional thing that can tell a different story from different angles. As a result, "data" is often merely used as a narrative device. https://t.co/HX3wTCnaiA

2022-05-30 17:33:27 Hugging Face @huggingface is organizing a new sprint to create interactive demos for https://t.co/m6mT8SrKDD examples! Join the effort -- check out the links below. https://t.co/InKTUVIA0s

2022-05-30 15:27:24 @santoroAI What we're doing with LLMs right now is similar to this checkers example.

2022-05-30 15:26:49 @santoroAI Obviously we should not ask a random human to play checkers after a single demonstration, and say, "they learned it in one shot!". They likely already knew. But we can ask them if they've played checkers before, and restrict the experiment to those who've never seen a board game.

2022-05-30 15:25:30 @santoroAI It's much easier to control for priors and experiences in the case of humans. We roughly know what prior knowledge humans are born with, and we can observe the experience accumulated by a baby/toddler. For adults, we can simply ask them if they've seen similar tasks/games before.

2022-05-30 02:57:40 @lexfridman No one does.

2022-05-30 02:26:31 @A_K_Nain Make one for https://t.co/m6mT8SrKDD!

2022-05-29 23:50:32 @elonmusk Every turn of the clock can be a new beginning.

2022-05-29 22:31:32 @csabaveres @MuzafferKal_ Not appreciated in what sense?

2022-05-29 22:27:33 @dfarmer Yes!

2022-05-29 22:06:08 is a 2007 anime that remains, to this day, a better AR concept video than anything produced since.

2022-05-29 22:05:02 I had the opportunity to try the Magic Leap device a few days ago. It's basically what I expected cyberglass technology to be like when I first saw it depicted over a decade ago. But markedly less cool, at least software-wise.

2022-05-29 20:09:25 A neat thing is that this evolution often goes together with accessibility and democratization -- more people can do digital art than could do oil/acrylic painting a few decades ago. AI will accelerate this trend.

2022-05-29 20:08:27 Art evolves. Concept art went from analog to digital in the 2000s, then it started incorporating 3D. Next it will incorporate latent-space-aided generation. But technology remains a tool in the artist's hand.

2022-05-29 20:06:18 However it's likely many of them will have incorporated AI as part of their workflow -- not unlike how many of them currently leverage 3D graphics tooling when creating 2D illustrations.

2022-05-29 20:06:17 Let me go on the record with the following prediction: there will still be demand for human concept artists (the kind working in the film &

2022-05-29 19:55:45 @MuzafferKal_ No.

2022-05-29 19:53:20 @maartengm I think such limitations are quantitative rather than qualitative -- you can learn &

2022-05-29 19:47:27 @maartengm I take a theoretical "yes" ("yes given enough data / the right dataset") as a plain yes. This is why the cost issue is essential: in the real world, cost can turn a yes into a no.

2022-05-29 19:44:54 You can write a web app in Prolog. That doesn't mean you should.

2022-05-29 19:41:56 The question is never, "can you do X with deep learning?" (the answer is always yes)The question is, "at what cost can you do X with deep learning?"This is what actually matters. https://t.co/mTXAx6RC6T

2022-05-29 19:39:56 DL is not missing "symbols" nor "compositionality". It has those. It's missing discrete program synthesis.See also chapter 14, Deep Learning with Python 2E https://t.co/Pret3WlDnq

2022-05-29 19:39:15 The gist of what's missing from deep learning is:
- Suitability of vector abstractions to different kinds of problems (discrete problems require discrete abstractions).
- (In)efficiency of gradient-descent-learned vector functions at acquiring abstractions.

2022-05-29 19:37:09 "Compositionality" is also not a limitation of deep learning. "Compositionality" means that you should be able to combine multiple abstractions into a single program. Deep learning models can do this well.

2022-05-29 19:35:49 IMO the distinction between "symbols" and the sort of abstractions learned by deep learning models (abstract vector functions) is not important. Deep learning models are capable of producing abstractions, to a degree. That's what matters.

2022-05-28 20:30:26 RT @Jeande_d: KerasCV is under development. With many good things we know about Keras API, no doubt KerasCV will also be one of the best co…

2022-05-28 19:36:04 Very relatable too. The chapter about assholes gave me PTSD from experiences on an earlier team...

2022-05-28 19:35:04 I'm halfway through it now! Easily the best book I've seen about how to navigate your tech career. Filled with insights that just click &

2022-05-28 18:35:43 Easy to produce theories and models of something. Even easier to produce words and essays. What's hard is producing crisp understanding.

2022-05-27 22:15:20 RT @elie: #TensorFlow Similarity 0.16 is out - lot of performance optimization and bug squashed. If you use it, you should update. If you d…

2022-05-26 09:45:20 @ethanCaballero Even this more limited claim isn't anywhere near true...

2022-05-26 09:33:32 If you're curious about what's missing exactly, and how deep learning will play a key role on the road to general AI, you're going to have to read chapter 14 of my book... https://t.co/Pret3W42vS

2022-05-26 09:32:16 @Singularityisc1 To an extent yes!

2022-05-26 09:29:34 Two perfectly compatible messages I've been repeating for years:
1. Scaling up deep learning will keep paying off.
2. Scaling up deep learning won't lead to AGI, because deep learning on its own is missing key properties required for general intelligence.

2022-05-26 08:26:27 (the above tweet should say "but" not "because". Where's my edit button)

2022-05-26 08:25:48 I think the thing is, I have many questions floating in my head about how to build minds, and the "you can do more of what we've been doing for years now" fails to answer these questions for me.

2022-05-26 08:24:29 This is how latent spaces work. It's nice to see it can scale to more refined semantic functions, more of them, and that you can combine them in fairly long chains. But I have not been impressed by the "yep, you can do more of it" line for years. Scientific fatigue.

2022-05-26 08:22:57 We knew this in 2014 -- the first examples I can recall (which impressed me at the time) were things like the gender vector in word2vec. Now we've replaced those simple semantic vectors with complex vector functions learned implicitly, and just like vectors, you can combine them.
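
A toy sketch of that word2vec-era "semantic vector" idea: relations between concepts show up as consistent vector offsets, so you can compose meaning with plain arithmetic. The embeddings below are made up for illustration (real word2vec vectors are learned from text):

```python
import numpy as np

# Hypothetical 3-d embeddings; the last coordinate plays the "gender" role.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "man":   np.array([0.5, 0.2, 0.1]),
    "woman": np.array([0.5, 0.2, 0.9]),
}

def nearest(v):
    # Return the vocabulary word with the highest cosine similarity to v.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(emb, key=lambda w: cos(emb[w], v))

# The classic analogy: king - man + woman lands nearest to queen.
v = emb["king"] - emb["man"] + emb["woman"]
assert nearest(v) == "queen"
```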

2022-05-26 08:21:03 As usual, scaling up deep learning leads to more impressive demos (and in time, powerful applications), because scaling up on its own teaches us little of value. We knew we could do modular semantic compositionality via vector space transformations.

2022-05-26 08:19:10 On the other hand, from the scientific angle, I have to say I have found myself (perhaps surprisingly) really underwhelmed by the latest developments.

2022-05-26 08:18:16 On one hand, I really believe image generation has tons of cool applications, and will keep getting better (i.e. full photorealistic video generation from a script). This is something I've been anticipating for years now -- since 2015.

2022-05-26 08:17:49 Some thoughts from January 2021 that seem still relevant... https://t.co/PHod7Q0YsZ

2022-05-25 17:48:36 RT @ManningBooks: [NEW RELEASE] Deep Learning with R, Second Edition with @fchollet , J J Allaire and Tomasz Kalinowski of @rstudio - a han…

2022-05-25 13:19:45 @mhviraf Just email me.

2022-05-24 15:20:20 "The technological revolution that’s currently unfolding didn’t start with any single breakthrough invention. Rather, like any other revolution, it’s the product of a vast accumulation of enabling factors -- gradual at first, and then sudden." From https://t.co/Pret3W42vS

2022-05-24 14:55:25 Billionaires supporting an openly autocratic movement in the US are short-sighted. As we saw in China and Russia, being extremely rich does not protect you against the whims of an autocratic strongman. Democracy and the rule of law offer protections even to billionaires. https://t.co/Zi8VhrPclw

2022-05-24 08:59:10 Looking at past generations, I can't help but ask: what can *we* do to make the next generations proud of us, rather than resentful? What can we do to give them more than we take away from them?

2022-05-23 18:48:34 Specifically, we need to preserve a certain set of environmental conditions (that we evolved for) in order to survive. Nature will adapt. For us it will be more painful.

2022-05-23 18:42:06 RT @jeffheaton: You can now get my complete course, "Applications of Deep Neural Networks" as a 576-page paperback or Kindle. YouTube video…

2022-05-23 18:41:00 RT @Weather_West: Worth noting that one of the largest uncertainties in analyses of recent record-breaking heatwaves globally is that the e…

2022-05-23 18:33:22 We need nature, but nature doesn't need us. Environmentalism is fundamentally self-preservation.

2022-05-23 07:33:16 Most people in tech see their job as a way to make money and then retire. But you spend most of your time working! It shapes who you are and defines the mark you leave on the world. Your job is first and foremost your window of opportunity to grow, learn, and make an impact.

2022-05-22 11:25:35 @sutsilvanianer That's why we have a private test set. You can still make new submissions to the Kaggle competition to check your results on the private test set.

2022-05-22 11:13:28 This isn't to say that the "better models" will be "clever models", models that hardcode significant amounts of priors. I'd actually expect them to be simple models that scale. The key though, is the part that should scale is generalization ability as a function of experience.

2022-10-28 18:45:09 The silver lining of dire situations is that they nudge you to reprioritize and refocus on what's actually important. Family and health -- the rest is noise.

2022-10-28 15:31:55 We bought a children's image book about vehicles (marketed as ages 3+), and it turns out it has an explainer about the 5 levels of autonomy. Perfect. https://t.co/o40i508pYK

2022-10-27 19:33:52 A tomato juice is probably one of the top 3 most chaotic things you can order as your in-flight drink

2022-10-27 14:31:27 This is how I explain overfitting now. https://t.co/lHE3p21RgR

2022-10-27 02:04:31 You get more of what you incentivize. So be deliberate about it.

2022-10-27 01:57:05 One thing Twitter is really good at: getting people addicted to being outraged at things

2022-10-26 19:14:27 Very neat project featuring high-level, mid-level, and low-level APIs for computer vision systems across a wide range of use cases: https://t.co/3KdK9FHDsN https://t.co/8Lc3ojHv1i

2022-10-26 18:36:22 @antgoldbloom Happy to answer any Keras questions you might have :)

2022-10-26 12:14:32 Great teams repeatedly ship great features. Your culture, your processes, and most of all your people provide you with a far more durable advantage than any feature or technology.

2022-10-26 12:11:28 On the topic of cheap clones. Any feature of your product can be cloned (and will be, if it's any good). But a great team culture and a finely-tuned design process are things that are very hard to replicate and that provide a long-term differentiating advantage.

2022-10-26 01:22:06 RT @gusthema: . @kaggle made a very cool announcement and enabled: VMs with 2 NVIDIA T4 are now available. Your question now is: How d…

2022-10-25 19:39:15 It's dramatic how much more fancy you feel when you drink coffee out of a nice porcelain cup as opposed to a paper cup (once in a while...)

2022-10-25 17:30:49 Multi-GPU (2x T4) now available with Kaggle Notebooks! For free :) https://t.co/5LRNYzDgQz

2022-10-25 17:07:01 You can tell a lot about someone from who they admire

2022-10-25 15:38:49 PyTorch-Lightning is what you get when you order Keras from AliExpress

2022-10-25 04:02:21 @amasad Arguably also true for image generation: while there are countless images on the web, there aren't that many high-quality ones (what you actually want to generate)... Once you train on the top 1B you're done.

2022-10-25 01:00:24 Very fun NumPy fact: if you have a dict-like NumPy object such as `obj = np.array({"1": 1})`, you can't convert it back to a dict via `dict(obj)`. You need to do, counterintuitively, `obj = obj.tolist()`, which returns a dict.
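
A quick sketch of the quirk above: wrapping a dict in `np.array` produces a 0-d object array, which isn't iterable (so `dict(obj)` fails), while `.tolist()` unwraps the 0-d scalar and hands back the original dict:

```python
import numpy as np

# A dict wrapped in np.array becomes a 0-d array of dtype object.
obj = np.array({"1": 1})
assert obj.shape == () and obj.dtype == object

# dict(obj) raises TypeError: iteration over a 0-d array.
try:
    dict(obj)
    raised = False
except TypeError:
    raised = True
assert raised

# tolist() on a 0-d array returns the wrapped scalar -- here, the dict.
restored = obj.tolist()
assert restored == {"1": 1}
```

(`obj.item()` works the same way for 0-d arrays.)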

2022-10-24 21:27:50 @mihaimaruseac I mean, it's also capitalist architecture. It's utilitarian, "I don't care" architecture, more broadly.

2022-10-24 21:19:14 I'm not a fan of clusters of buildings that are carbon copies of one another. It feels like a statement denying the possibility of individuality, a statement that everything is fungible and replaceable.

2022-10-24 19:54:22 @StillTr05207382 I do agree that intelligent reasoning is only a small slice of human cognition overall (though it is distributed and constant, not confined to specific times in the day), but it's the really important slice :)

2022-10-24 19:52:05 @enceladus2000 Yes, there have been multiple efforts, from early attempts leveraging the GPT-3 API, to much more sophisticated efforts with cascades of LLMs to describe the tasks, fed into code-generation models to produce candidate solutions, etc. Unpublished unfortunately (it didn't work!)

2022-10-24 02:21:17 @Plinz The most interesting solutions I've seen so far were not genetic, and some didn't even feature a DSL (direct-to-output)...

2022-10-24 01:56:00 In AI, it's often the case that beating a benchmark says more about the benchmark than about the AI.

2022-10-23 14:02:18 RT @svpino: I spent 5 days reading everything I found about image generation. Stable Diffusion is one of the most impressive systems I’ve…

2022-10-22 23:56:43 The only way to make an informed choice is to do your own hands-on research.

2022-10-22 23:55:54 Do NOT make a choice based on peer pressure or social media chatter. Actually compare equivalent code examples side by side in different frameworks. Actually try to write part of your codebase with each option. See how fast they run. See how elegant the code is (or not).

2022-10-22 23:54:23 Most important factors IMO: - How maintainable the framework makes your codebase (concise, simple, extensible) - How quickly it enables you to get to a solution (documentation, debuggability) - How fast / efficient it makes your models

2022-10-22 23:50:05 An ML framework that shrinks your codebase from 3,500 lines to 1,000 lines saves you hours of work every week. An ML framework that increases your device utilization by 20% saves you $100k on a $500k training job. It adds up. Pick your tools wisely.

2022-10-22 21:32:37 I'm not that attracted to the idea of building autonomous AI agents. I want to build an on-demand "better brain", through which humans and machines could co-think together.

2022-10-21 21:18:52 The views from the Google office in SF aren't too shabby... https://t.co/BHFSyO9nTC

2022-10-21 21:15:37 I think the general public would gain from understanding both the limitations *and* the benefits of this big wave of change. But arguably, the benefits are already covered by corporate PR-driven journalism, and there is a much greater need for raising awareness of limitations...

2022-10-21 21:13:07 Yoshua and I were interviewed for an @ARTEfr documentary on AI. I thought the film was pretty good -- explaining things clearly for the general public and raising awareness of some of the limitations and potential risks of modern AI. https://t.co/cBbHpgKUJn

2022-10-21 18:33:56 RT @ach3d: Excellent @ARTEfr documentary on AI, with very clear and intelligent explanations from @fchollet and Yoshua Bengio in par…

2022-10-21 15:58:16 Tune in! https://t.co/EqtbrQkOlU

2022-10-21 15:19:24 There are many efficiency-related reasons for building dense and human-centric cities, but IMO life quality is the single best argument.

2022-10-21 14:24:16 RT @fadibadine: Next version @TensorFlow is coming in 2023 with a focus on 4 pillars: - Fast &

2022-10-21 14:24:11 RT @lak_luster: Great roadmap. Loved seeing the explicit statement about backwards compatibility, and norming to numpy api. Excited about X…

2022-10-21 00:23:09 RT @humphd: This is great to see, esp the commitment to stability and improving edge deployments. I’m especially interested in tensorflow…

2022-10-30 03:41:41 @CastleQueen007 The code is available on GitHub here: https://t.co/QXQOV3gSko Enjoy the book!

2022-10-30 03:36:42 Code is downstream of processes &

2022-10-30 03:26:20 The most important form of capital in any organization is human capital.

2022-10-30 03:25:44 Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component of these processes are people.The code is just a by-product. More of a liability than an asset.

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly everyone else has lost money -- and the fraction of losers still has plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG
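
Not the tutorial's actual code (that's at the link), but a minimal sketch of what a 1D CNN for EEG window classification can look like in Keras. All shapes, layer sizes, and class counts below are made-up placeholders:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_channels = 8      # hypothetical electrode count
n_timesteps = 256   # hypothetical samples per EEG window
n_classes = 4       # hypothetical number of target actions

# 1D convolutions slide over the time axis, with electrodes as input
# channels; global pooling then a softmax over action classes.
model = keras.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# One batch of random windows -> per-window class probabilities.
probs = model.predict(np.random.randn(2, n_timesteps, n_channels), verbose=0)
```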

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70-hour weeks. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them that tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (include a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly every one else has lost money -- and the fraction of losers has still plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70-hour weeks. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them who tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen this before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have to stay to keep their immigration status. They don't have much of a choice, unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-22 06:28:52 If you have trouble understanding something, maybe you just need a better metaphor.

2022-11-22 01:45:28 RT @drufball: No matter how differentiated your tech, you're dead in the long run if you can't work through ambiguity, learning and iterati…

2022-11-21 23:59:37 Episode 2: https://t.co/TXOxKd2UKp

2022-11-21 20:08:57 @migueldeicaza @AlexBBrown I am on Mastodon at https://t.co/VOSzJ4foka. I'll stay on Twitter too though. Unclear how active I'll be on Mastodon

2022-11-21 17:24:12 RT @greglinden: Good long form article from @fchollet, don't miss it: "If you want to drive change, invest ... People first. Then culture.…

2022-11-21 05:53:36 1,800 subscribers so far. We're still early!

2022-11-21 05:48:57 I just sent out this week's edition of my newsletter. https://t.co/TXOxKd2UKp If you end up liking this post, consider subscribing. It's free, and you can always unsubscribe later.

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen this before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have to stay to keep their immigration status. They don't have much of a choice unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly everyone else has lost money -- and the fraction of losers has still plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-23 00:05:03 Perhaps there's something in the water right now, but it seems public displays of sociopathy (up to explicit calls for violence) are getting increasingly common and normalized. The worst people are feeling empowered. Reminds me of late 2016.

2022-11-22 20:41:14 @FProescholdt Yes, this tweet thread

2022-11-22 18:51:05 Announcing the Keras community prize, running from today to December 31st: https://t.co/ebtZVdhPet Any OSS project using (or forking) KerasCV StableDiffusion is eligible. Notebooks, GitHub repos, tutorials, etc.

2022-11-22 17:26:57 Note that economic output is different from economic input. Don't look at funding, which is merely a measure of blind hype. Look at revenue.

2022-11-22 17:26:06 The only reliable way to evaluate the importance of an AI product / advance is to wait 1-2 years after public release and look at its economic impact. Game-changers have immediate, large impact, and drive entire new genres of *profitable* startups. Economic output can't be gamed.

2022-11-22 17:21:44 Product gets hyped based on demos. Gets released. Turns out to have weak generalization power beyond the demos and to fail to live up to expectations. Hype dies down. Rinse and repeat.

2022-11-22 17:18:34 With AI systems, it's a bad idea to use a product demo (= absolute best case scenario) to extrapolate about the median case. The value of AI lies in its ability to generalize, which is entirely impossible to evaluate from a cherrypicked sample.

2022-11-22 14:47:18 RT @oneunderscore__: I talked this morning about an inflection point in this country right now, specifically for reporters: What are you m…

2022-11-22 14:46:30 Some people like to brag about being apolitical -- even unaware of all recent political events. From a selfish perspective, I can see the appeal of dispensing with the collective. But if you zoom out -- it's not something to brag about. Stand for something other than yourself.

2022-11-25 03:41:58 RT @fchollet: We exist in two worlds at the same time. One is our everyday life. The other is the actual universe around us -- a world of i…

2022-11-24 20:36:37 "sure, people whose stuff i admire like Stephen King, Trent Reznor, or Neil Gaiman are savagely dunking on me, but at least the far-right meme makers will always have my back"

2022-11-24 15:05:50 Happy Thanksgiving to all who celebrate!

2022-11-24 05:13:41 RT @cIubmoss: Was walking outside with my phone unlocked in my hand and accidentally took this picture of an owl https://t.co/1I01asfOir

2022-11-24 03:26:45 RT @ryanwellsr: In this tutorial, you will learn about learning rate schedules and decay using Keras. You’ll learn how to use Keras’ standa…

2022-11-23 22:25:44 Don't know what to do over the long weekend? Enter the Keras community prize -- create OSS notebooks and win $9k in prizes. Open until late December. https://t.co/wV1eOkaC6D

2022-11-23 20:36:21 RT @clhubes: The funniest thing that’s ever happened to me as a parent is once my 2yo was having a full on meltdown and accidentally kicked…

2022-11-23 17:26:31 Many people in tech have so little exposure to philosophy (or the humanities in general) that when they get exposed to old ideas like Plato's cave or the simulation hypothesis, they think it's extremely profound and novel

2022-11-23 03:38:19 I've started a newsletter. Subscribe to stay in touch! https://t.co/b678OACRKh

2022-11-23 03:28:22 RT @dbs_dsml: #AIopinions "If you want to drive change, invest your efforts in each layer of the stack proportionally to its importance. Pe…

2022-11-23 03:08:52 A nice thing about software development is that you're never done learning. There's always something new.


2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen this before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have to stay to keep their immigration status. They don't have much of a choice unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly everyone else has lost money -- and the fraction of losers still has plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG
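The CNN approach mentioned in that tutorial boils down to sliding learned filters along the time axis of the signal. A toy sketch of that core 1D convolution (not taken from the tutorial -- the signal, kernel, and shapes here are invented for illustration; a real model would learn many such kernels via Keras `Conv1D` layers):

```python
# Toy illustration of the 1D convolution at the heart of a CNN-based
# EEG classifier. All values and shapes are made up for the example.

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation of a single-channel signal."""
    k = len(kernel)
    return [sum(s * w for s, w in zip(signal[i:i + k], kernel))
            for i in range(len(signal) - k + 1)]

# A fake one-channel "EEG" trace: 8 time steps.
eeg = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
# A 3-tap kernel that responds to local rises and falls in the signal.
kernel = [1.0, 0.0, -1.0]

features = conv1d(eeg, kernel)
print(features)  # [0.0, 2.0, 0.0, -2.0, 0.0, 2.0]
```

A full classifier stacks layers of such filters (with learned weights) and feeds the resulting feature maps into a dense layer that predicts the action class.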

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70-hour weeks. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them that tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-11-26 00:39:30 @gusthema Awesome! Thank you.

2022-11-26 00:34:34 @gusthema I didn't see that! What's the PR?

2022-11-26 00:22:13 @lawrencecchen Once you go back, causality branches, so the young you is a different person, with their own life ahead of them. And you're killing that person to replace them...

2022-11-26 00:17:01 What if you could go back in time to when you were X years old but with your current knowledge / memories? Well, that would mean you'd be killing the younger you and taking their place in that timeline. You monster.

2022-11-25 19:41:53 Or in some cases, not paying the bills, I suppose. https://t.co/LL5PP9j9cm

2022-11-25 19:41:33 I'm never leaving. I started tweeting before EM (he started in 2011, I started in 2009), and I'll still be here after he's gone (unless I get kicked out or the site shuts down). This is my page. He's just paying the bills. https://t.co/50uDC8EFuH

2022-11-25 14:57:01 One possibility: add support for StableDiffusion 2.0 in KerasCV https://t.co/LYjoadxrtg

2022-11-25 03:41:58 RT @fchollet: We exist in two worlds at the same time. One is our everyday life. The other is the actual universe around us -- a world of i…

2022-11-24 20:36:37 "sure, people whose stuff i admire like Stephen King, Trent Reznor, or Neil Gaiman are savagely dunking on me, but at least the far-right meme makers will always have my back"

2022-11-24 15:05:50 Happy Thanksgiving to all who celebrate!

2022-11-24 05:13:41 RT @cIubmoss: Was walking outside with my phone unlocked in my hand and accidentally took this picture of an owl https://t.co/1I01asfOir

2022-11-24 03:26:45 RT @ryanwellsr: In this tutorial, you will learn about learning rate schedules and decay using Keras. You’ll learn how to use Keras’ standa…

2022-11-23 22:25:44 Don't know what to do over the long weekend? Enter the Keras community prize -- create OSS notebooks and win $9k in prizes. Open until late December. https://t.co/wV1eOkaC6D

2022-11-23 20:36:21 RT @clhubes: The funniest thing that’s ever happened to me as a parent is once my 2yo was having a full on meltdown and accidentally kicked…

2022-11-23 17:26:31 Many people in tech have so little exposure to philosophy (or the humanities in general) that when they get exposed to old ideas like Plato's cavern or the simulation hypothesis, they think it's extremely profound and novel

2022-11-23 03:38:19 I've started a newsletter. Subscribe to stay in touch! https://t.co/b678OACRKh

2022-11-23 03:28:22 RT @dbs_dsml: #AIopinions "If you want to drive change, invest your efforts in each layer of the stack proportionally to its importance. Pe…

2022-11-23 03:08:52 A nice thing about software development is that you're never done learning. There's always something new.

2022-11-23 00:05:03 Perhaps there's something in the water right now, but it seems public displays of sociopathy (up to explicit calls for violence) are getting increasingly common and normalized. The worst people are feeling empowered. Reminds me of late 2016.

2022-11-22 20:41:14 @FProescholdt Yes, this tweet thread

2022-11-22 18:51:05 Announcing the Keras community prize, running from today to December 31st: https://t.co/ebtZVdhPet Any OSS project using (or forking) KerasCV StableDiffusion is eligible. Notebooks, GitHub repos, tutorials, etc.

2022-11-22 17:26:57 Note that economic output is different from economic input. Don't look at funding, which is merely a measure of blind hype. Look at revenue.

2022-11-22 17:26:06 The only reliable way to evaluate the importance of an AI product / advance is to wait 1-2 years after public release and look at its economic impact. Game-changers have immediate, large impact, and drive entire new genres of *profitable* startups. Economic output can't be gamed.

2022-11-22 17:21:44 Product gets hyped based on demos. Gets released. Turns out to have weak generalization power beyond the demos and to fail to live up to expectations. Hype dies down. Rinse and repeat.

2022-11-22 17:18:34 With AI systems, it's a bad idea to use a product demo (= absolute best case scenario) to extrapolate about the median case. The value of AI lies in its ability to generalize, which is entirely impossible to evaluate from a cherrypicked sample.

2022-11-22 14:47:18 RT @oneunderscore__: I talked this morning about an inflection point in this country right now, specifically for reporters: What are you m…

2022-11-22 14:46:30 Some people like to brag about being apolitical -- even unaware of all recent political events. From a selfish perspective, I can see the appeal of dispensing with the collective. But if you zoom out -- it's not something to brag about. Stand for something other than yourself.

2022-11-22 06:28:52 If you have trouble understanding something, maybe you just need a better metaphor.

2022-11-22 01:45:28 RT @drufball: No matter how differentiated your tech, you're dead in the long run if you can't work through ambiguity, learning and iterati…

2022-11-21 23:59:37 Episode 2: https://t.co/TXOxKd2UKp

2022-11-21 20:08:57 @migueldeicaza @AlexBBrown I am on Mastodon at https://t.co/VOSzJ4foka. I'll stay on Twitter too though. Unclear how active I'll be on Mastodon

2022-11-21 17:24:12 RT @greglinden: Good long form article from @fchollet, don't miss it: "If you want to drive change, invest ... People first. Then culture.…

2022-11-21 05:53:36 1,800 subscribers so far. We're still early!

2022-11-21 05:48:57 I just sent out this week's edition of my newsletter. https://t.co/TXOxKd2UKp If you end up liking this post, consider subscribing. It's free, and you can always unsubscribe later.

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen the before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have stay to keep their immigration status. They don't have much of a choice unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (include a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly every one else has lost money -- and the fraction of losers has still plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70-hour weeks. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them that tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-11-27 01:09:58 We're pretty good at solving hard problems while sleeping or while doing something else. But for that effect to kick in, you need to have spent a while banging your head at the problem first. It's only payback for prior effort.

2022-11-26 00:39:30 @gusthema Awesome! Thank you.

2022-11-26 00:34:34 @gusthema I didn't see that! What's the PR?

2022-11-26 00:22:13 @lawrencecchen Once you go back, causality branches, so the young you is a different person, with their own life ahead of them. And you're killing that person to replace them...

2022-11-26 00:17:01 What if you could go back in time to when you were X years old but with your current knowledge / memories? Well, that would mean you'd be killing the younger you and taking their place in that timeline. You monster.

2022-11-25 19:41:53 Or in some cases, not paying the bills, I suppose. https://t.co/LL5PP9j9cm

2022-11-25 19:41:33 I'm never leaving. I started tweeting before EM (he started in 2011, I started in 2009), and I'll still be here after he's gone (unless I get kicked out or the site shuts down). This is my page. He's just paying the bills. https://t.co/50uDC8EFuH

2022-11-25 14:57:01 One possibility: add support for StableDiffusion 2.0 in KerasCV https://t.co/LYjoadxrtg

2022-11-25 03:41:58 RT @fchollet: We exist in two worlds at the same time. One is our everyday life. The other is the actual universe around us -- a world of i…

2022-11-24 20:36:37 "sure, people whose stuff i admire like Stephen King, Trent Reznor, or Neil Gaiman are savagely dunking on me, but at least the far-right meme makers will always have my back"

2022-11-24 15:05:50 Happy Thanksgiving to all who celebrate!

2022-11-24 05:13:41 RT @cIubmoss: Was walking outside with my phone unlocked in my hand and accidentally took this picture of an owl https://t.co/1I01asfOir

2022-11-24 03:26:45 RT @ryanwellsr: In this tutorial, you will learn about learning rate schedules and decay using Keras. You’ll learn how to use Keras’ standa…
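The retweeted tutorial above covers learning-rate schedules and decay in Keras. As a rough sketch of the underlying arithmetic (plain Python, not the Keras API; function names here are illustrative, corresponding loosely to Keras' `ExponentialDecay` and a step-based drop):

```python
def exponential_decay(step, initial_lr=0.1, decay_rate=0.5, decay_steps=100):
    # Smooth decay: the rate is multiplied by `decay_rate` every `decay_steps` steps.
    return initial_lr * decay_rate ** (step / decay_steps)

def step_decay(step, initial_lr=0.1, drop=0.5, steps_per_drop=100):
    # Piecewise-constant decay: the rate drops abruptly at steps 100, 200, ...
    return initial_lr * drop ** (step // steps_per_drop)

print(exponential_decay(0))    # 0.1
print(exponential_decay(100))  # 0.05 -- halved after 100 steps
print(step_decay(99), step_decay(100))  # still 0.1, then 0.05
```

In Keras proper, a schedule object like this is passed as the optimizer's `learning_rate` argument, or applied per-epoch via a `LearningRateScheduler` callback.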

2022-11-23 22:25:44 Don't know what to do over the long weekend? Enter the Keras community prize -- create OSS notebooks and win $9k in prizes. Open until late December. https://t.co/wV1eOkaC6D

2022-11-23 20:36:21 RT @clhubes: The funniest thing that’s ever happened to me as a parent is once my 2yo was having a full on meltdown and accidentally kicked…

2022-11-23 17:26:31 Many people in tech have so little exposure to philosophy (or the humanities in general) that when they get exposed to old ideas like Plato's cave or the simulation hypothesis, they think they're extremely profound and novel

2022-11-23 03:38:19 I've started a newsletter. Subscribe to stay in touch! https://t.co/b678OACRKh

2022-11-23 03:28:22 RT @dbs_dsml: #AIopinions "If you want to drive change, invest your efforts in each layer of the stack proportionally to its importance. Pe…

2022-11-23 03:08:52 A nice thing about software development is that you're never done learning. There's always something new.

2022-11-23 00:05:03 Perhaps there's something in the water right now, but it seems public displays of sociopathy (up to explicit calls for violence) are getting increasingly common and normalized. The worst people are feeling empowered. Reminds me of late 2016.

2022-11-22 20:41:14 @FProescholdt Yes, this tweet thread

2022-11-22 18:51:05 Announcing the Keras community prize, running from today to December 31st: https://t.co/ebtZVdhPet Any OSS project using (or forking) KerasCV StableDiffusion is eligible. Notebooks, GitHub repos, tutorials, etc.

2022-11-22 17:26:57 Note that economic output is different from economic input. Don't look at funding, which is merely a measure of blind hype. Look at revenue.

2022-11-22 17:26:06 The only reliable way to evaluate the importance of an AI product / advance is to wait 1-2 years after public release and look at its economic impact. Game-changers have immediate, large impact, and drive entire new genres of *profitable* startups. Economic output can't be gamed.

2022-11-22 17:21:44 Product gets hyped based on demos. Gets released. Turns out to have weak generalization power beyond the demos and to fail to live up to expectations. Hype dies down. Rinse and repeat.

2022-11-22 17:18:34 With AI systems, it's a bad idea to use a product demo (= absolute best case scenario) to extrapolate about the median case. The value of AI lies in its ability to generalize, which is entirely impossible to evaluate from a cherrypicked sample.

2022-11-22 14:47:18 RT @oneunderscore__: I talked this morning about an inflection point in this country right now, specifically for reporters: What are you m…

2022-11-22 14:46:30 Some people like to brag about being apolitical -- even unaware of all recent political events. From a selfish perspective, I can see the appeal of dispensing with the collective. But if you zoom out -- it's not something to brag about. Stand for something other than yourself.

2022-11-22 06:28:52 If you have trouble understanding something, maybe you just need a better metaphor.

2022-11-22 01:45:28 RT @drufball: No matter how differentiated your tech, you're dead in the long run if you can't work through ambiguity, learning and iterati…

2022-11-21 23:59:37 Episode 2: https://t.co/TXOxKd2UKp

2022-11-21 20:08:57 @migueldeicaza @AlexBBrown I am on Mastodon at https://t.co/VOSzJ4foka. I'll stay on Twitter too though. Unclear how active I'll be on Mastodon

2022-11-21 17:24:12 RT @greglinden: Good long form article from @fchollet, don't miss it: "If you want to drive change, invest ... People first. Then culture.…

2022-11-21 05:53:36 1,800 subscribers so far. We're still early!

2022-11-21 05:48:57 I just sent out this week's edition of my newsletter. https://t.co/TXOxKd2UKp If you end up liking this post, consider subscribing. It's free, and you can always unsubscribe later.

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen this before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have to stay to keep their immigration status. They don't have much of a choice, unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-28 23:43:37 @Plinz Can you give me an example of "freedom of thought" being threatened in recent years? Do you mean use of executive or legislative state power to prevent free expression/protest, like in Florida? https://t.co/wDZw5tZglq

2022-11-28 22:50:53 Never for one minute believe that *any* element of past social progress is safe, permanently acquired. That's not how it works. It's a perpetual fight.

2022-11-28 22:50:03 Reactionaries these days are mostly talking about frontier issues like trans rights, and they have largely stopped talking about things like women's suffrage, desegregation, or the legalization of abortion. But they're still just as opposed to those as they were decades ago.

2022-11-28 22:49:26 The replies here reminded me: lots of people still think women shouldn't be allowed to vote. https://t.co/CmSC0O7CTJ

2022-11-28 21:31:20 RT @TensorFlow: The @huggingface team brings you the power of XLA in the form of fast and powerful text generation models re-written in Te…

2022-11-28 21:04:26 RT @levie: Apparently the best way to get rich in crypto is to be a bankruptcy attorney.

2022-11-28 18:45:17 I actually like dark comedy packed with social satire, so I'm rather enjoying the new Twitter storyline. The main character constantly posting his L's online is a fun exposition device

2022-11-28 17:57:55 RT @MaximZiatdinov: Excellent summary of what AI is and is not, by @fchollet https://t.co/stNlHMYYiZ

2022-11-28 17:52:38 RT @W7VOA: Homo homini lupus https://t.co/Pe7Ib5aXQm

2022-11-28 17:36:10 Just sent out this week's newsletter edition: https://t.co/ldCQNVL0up

2022-11-28 06:28:57 100 years ago, giving women the right to vote was going to cause societal collapse. Fellas, the woke mind virus is giving your wives crazy ideas... https://t.co/i5pLmDBzvn

2022-11-28 03:10:12 Looking at kids teaches you a lot about adult psychology

2022-11-27 23:05:09 Someone should make a thread about the history of the narrative "recent social progress is causing societal decline/collapse" decade by decade, going back to the 5th century BC. Same old, same old. https://t.co/C2sJlNryH9

2022-11-27 22:51:18 The only decline caused by the "woke mind virus" fantasy is the cognitive decline that those obsessed with it seem to be afflicted with.

2022-11-27 22:48:58 The 2010s saw the rise of tech infrastructure as fundamental as mobile or the web: modern cloud, deep learning, VR, recommenders, and many more. Tech progress isn't really accelerating nor slowing down. Each decade is about as transformational as the prior one.

2022-11-27 22:46:18 More generally, the narrative that we're in decline (in tech or otherwise) because of "wokeness" (a term used exclusively by a certain kind of people to designate everything they don't like about social progress, the way they might have used "hippies" in the 1960s) is nonsense.

2022-11-27 22:44:09 I recently saw a popular tweet arguing that no popular tech product was launched in the 2010s because that decade was too "woke". That's nonsense. Several of the most popular apps of all time were launched in the 2010s. But you'd need to wait a few more yrs to really take stock.

2022-11-27 22:41:44 If you asked a teenager, I bet it would be closer to 5 years. You'd find: Instagram 2010 Snapchat 2011 TikTok 2016 BeReal 2020 etc.

2022-11-27 22:41:43 The median age of the tech products I use is 10 years: GSearch 1998 Gmail 2004 YT 2005 Spotify 2006 Twitter 2007 Chrome 2008 Android 2008 GitHub 2008 LINE 2011 Zoom 2012 Slack 2013 Telegram 2013 Lyft 2013 Signal 2014 GPhotos 2015 VSCode 2015 Discord 2015 Mastodon 2016 Meet 2017

2022-11-27 01:09:58 We're pretty good at solving hard problems while sleeping or while doing something else. But for that effect to kick in, you need to have spent a while banging your head at the problem first. It's only payback for prior effort.

2022-11-26 00:39:30 @gusthema Awesome! Thank you.

2022-11-26 00:34:34 @gusthema I didn't see that! What's the PR?

2022-11-26 00:22:13 @lawrencecchen Once you go back, causality branches, so the young you is a different person, with their own life ahead of them. And you're killing that person to replace them...

2022-11-26 00:17:01 What if you could go back in time to when you were X year old but with your current knowledge / memories? Well, that would mean you'd be killing the younger you and taking their place in that timeline. You monster.

2022-11-25 19:41:53 Or in some cases, not paying the bills, I suppose. https://t.co/LL5PP9j9cm

2022-11-25 19:41:33 I'm never leaving. I started tweeting before EM (he started in 2011, I started in 2009), and I'll still be here after he's gone (unless I get kicked out or the site shuts down). This is my page. He's just paying the bills. https://t.co/50uDC8EFuH

2022-11-25 14:57:01 One possibility: add support for StableDiffusion 2.0 in KerasCV https://t.co/LYjoadxrtg

2022-11-25 03:41:58 RT @fchollet: We exist in two worlds at the same time. One is our everyday life. The other is the actual universe around us -- a world of i…

2022-11-24 20:36:37 "sure, people whose stuff i admire like Stephen King, Trent Reznor, or Neil Gaiman are savagely dunking on me, but at least the far-right meme makers will always have my back"

2022-11-24 15:05:50 Happy Thanksgiving to all who celebrate!

2022-11-24 05:13:41 RT @cIubmoss: Was walking outside with my phone unlocked in my hand and accidentally took this picture of an owl https://t.co/1I01asfOir

2022-11-24 03:26:45 RT @ryanwellsr: In this tutorial, you will learn about learning rate schedules and decay using Keras. You’ll learn how to use Keras’ standa…

2022-11-23 22:25:44 Don't know what to do over the long weekend? Enter the Keras community prize -- create OSS notebooks and win $9k in prizes. Open until late December. https://t.co/wV1eOkaC6D

2022-11-23 20:36:21 RT @clhubes: The funniest thing that’s ever happened to me as a parent is once my 2yo was having a full on meltdown and accidentally kicked…

2022-11-23 17:26:31 Many people in tech have so little exposure to philosophy (or the humanities in general) that when they get exposed to old ideas like Plato's cavern or the simulation hypothesis, they think it's extremely profound and novel

2022-11-23 03:38:19 I've started a newsletter. Subscribe to stay in touch! https://t.co/b678OACRKh

2022-11-23 03:28:22 RT @dbs_dsml: #AIopinions "If you want to drive change, invest your efforts in each layer of the stack proportionally to its importance. Pe…

2022-11-23 03:08:52 A nice thing about software development is that you're never done learning. There's always something new.

2022-11-23 00:05:03 Perhaps there's something in the water right now, but it seems public displays of sociopathy (up to explicit calls for violence) are getting increasingly common and normalized. The worst people are feeling empowered. Reminds me of late 2016.

2022-11-22 20:41:14 @FProescholdt Yes, this tweet thread

2022-11-22 18:51:05 Announcing the Keras community prize, running from today to December 31st: https://t.co/ebtZVdhPet Any OSS project using (or forking) KerasCV StableDiffusion is eligible. Notebooks, GitHub repos, tutorials, etc.

2022-11-22 17:26:57 Note that economic output is different from economic input. Don't look at funding, which is merely a measure of blind hype. Look at revenue.

2022-11-22 17:26:06 The only reliable way to evaluate the importance of an AI product / advance is to wait 1-2 years after public release and look at its economic impact. Game-changers have immediate, large impact, and drive entire new genres of *profitable* startups. Economic output can't be gamed.

2022-11-22 17:21:44 Product gets hyped based on demos. Gets released. Turns out to have weak generalization power beyond the demos and to fail to live up to expectations. Hype dies down. Rinse and repeat.

2022-11-22 17:18:34 With AI systems, it's a bad idea to use a product demo (= absolute best case scenario) to extrapolate about the median case. The value of AI lies in its ability to generalize, which is entirely impossible to evaluate from a cherrypicked sample.

2022-11-22 14:47:18 RT @oneunderscore__: I talked this morning about an inflection point in this country right now, specifically for reporters: What are you m…

2022-11-22 14:46:30 Some people like to brag about being apolitical -- even unaware of all recent political events. From a selfish perspective, I can see the appeal of dispensing with the collective. But if you zoom out -- it's not something to brag about. Stand for something other than yourself.

2022-11-22 06:28:52 If you have trouble understanding something, maybe you just need a better metaphor.

2022-11-22 01:45:28 RT @drufball: No matter how differentiated your tech, you're dead in the long run if you can't work through ambiguity, learning and iterati…

2022-11-21 23:59:37 Episode 2: https://t.co/TXOxKd2UKp

2022-11-21 20:08:57 @migueldeicaza @AlexBBrown I am on Mastodon at https://t.co/VOSzJ4foka. I'll stay on Twitter too though. Unclear how active I'll be on Mastodon

2022-11-21 17:24:12 RT @greglinden: Good long form article from @fchollet, don't miss it: "If you want to drive change, invest ... People first. Then culture.…

2022-11-21 05:53:36 1,800 subscribers so far. We're still early!

2022-11-21 05:48:57 I just sent out this week's edition of my newsletter. https://t.co/TXOxKd2UKp If you end up liking this post, consider subscribing. It's free, and you can always unsubscribe later.

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen this before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have to stay to keep their immigration status. They don't have much of a choice, unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (including a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were at the very top of the pyramid. Nearly everyone else has lost money -- and the fraction of losers still has plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG
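The tweet above describes classifying actions from EEG signals with a CNN. As a rough illustration (not the linked tutorial's actual code), here is a minimal Keras 1D-CNN sketch; the input shape (500 time steps, 14 electrode channels) and the 4 action classes are hypothetical placeholders:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical EEG input: 500 time steps x 14 electrode channels,
# classifying among 4 candidate actions. Shapes are illustrative only.
def build_eeg_cnn(timesteps=500, channels=14, num_classes=4):
    inputs = keras.Input(shape=(timesteps, channels))
    x = layers.Conv1D(32, kernel_size=7, activation="relu")(inputs)  # local temporal patterns
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)  # collapse the time axis
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)

model = build_eeg_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Forward pass on dummy data just to check shapes.
preds = model.predict(np.zeros((2, 500, 14)), verbose=0)
```

Conv1D layers are a natural fit here because EEG is multichannel time-series data, so convolutions slide over the time axis rather than over image pixels.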

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70 hrs/week. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them who tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-11-29 03:19:06 RT @AngelLamuno: “There are three kinds of AI that we could build. Three A’s: Cognition Automation, Cognitive Assistance, and Cognitive Aut…

2022-11-29 03:07:42 Consider subscribing, so you don't have to rely on Twitter to get notified of new posts. It's free, and you can always change your mind later.

2022-11-29 03:06:02 On Substack: "AI is cognitive automation, not cognitive autonomy". An easy read on how to make sense of AI. https://t.co/GZ9dFW35QF

2022-11-29 01:34:51 @hardmaru Yeah but how much did he raise? And did he mention his machine was going to "capture the light cone of all future value in the universe"?

2022-11-28 23:43:37 @Plinz Can you give me an example of "freedom of thought" being threatened in recent years? Do you mean use of executive or legislative state power to prevent free expression/protest, like in Florida? https://t.co/wDZw5tZglq

2022-11-28 22:50:53 Never for one minute believe that *any* element of past social progress is safe, permanently acquired. That's not how it works. It's a perpetual fight.

2022-11-28 22:50:03 Reactionaries these days are mostly talking about frontier issues like trans rights, and they have largely stopped talking about things like women's suffrage, desegregation, or the legalization of abortion. But they're still just as opposed to those as they were decades ago.

2022-11-28 22:49:26 The replies here reminded me: lots of people still think women shouldn't be allowed to vote. https://t.co/CmSC0O7CTJ

2022-11-28 21:31:20 RT @TensorFlow: The @huggingface team brings you the power of XLA in the form of fast and powerful text generation models re-written in Te…

2022-11-28 21:04:26 RT @levie: Apparently the best way to get rich in crypto is to be a bankruptcy attorney.

2022-11-28 18:45:17 I actually like dark comedy packed with social satire, so I'm rather enjoying the new Twitter storyline. The main character constantly posting his L's online is a fun exposition device

2022-11-28 17:57:55 RT @MaximZiatdinov: Excellent summary of what AI is and is not, by @fchollet https://t.co/stNlHMYYiZ

2022-11-28 17:52:38 RT @W7VOA: Homo homini lupus https://t.co/Pe7Ib5aXQm

2022-11-28 17:36:10 Just sent out this week's newsletter edition: https://t.co/ldCQNVL0up

2022-11-28 06:28:57 100 years ago, giving women the right to vote was going to cause societal collapse. Fellas, the woke mind virus is giving your wives crazy ideas... https://t.co/i5pLmDBzvn

2022-11-28 03:10:12 Looking at kids teaches you a lot about adult psychology

2022-11-27 23:05:09 Someone should make a thread about the history of the narrative "recent social progress is causing societal decline/collapse" decade by decade, going back to the 5th century BC. Same old, same old. https://t.co/C2sJlNryH9

2022-11-27 22:51:18 The only decline caused by the "woke mind virus" fantasy is the cognitive decline that those obsessed with it seem to be afflicted with.

2022-11-27 22:48:58 The 2010s saw the rise of tech infrastructure as fundamental as mobile or the web: modern cloud, deep learning, VR, recommenders, and many more. Tech progress isn't really accelerating nor slowing down. Each decade is about as transformational as the prior one.

2022-11-27 22:46:18 More generally, the narrative that we're in decline (in tech or otherwise) because of "wokeness" (a term used exclusively by a certain kind of people to designate everything they don't like about social progress, the way they might have used "hippies" in the 1960s) is nonsense.

2022-11-27 22:44:09 I recently saw a popular tweet arguing that no popular tech product was launched in the 2010s because that decade was too "woke". That's nonsense. Several of the most popular apps of all time were launched in the 2010s. But you'd need to wait a few more yrs to really take stock.

2022-11-27 22:41:44 If you asked a teenager, I bet it would be closer to 5 years. You'd find: Instagram 2010 Snapchat 2011 TikTok 2016 BeReal 2020 etc.

2022-11-27 22:41:43 The median age of the tech products I use is 10 years: GSearch 1998 Gmail 2004 YT 2005 Spotify 2006 Twitter 2007 Chrome 2008 Android 2008 GitHub 2008 LINE 2011 Zoom 2012 Slack 2013 Telegram 2013 Lyft 2013 Signal 2014 GPhotos 2015 VSCode 2015 Discord 2015 Mastodon 2016 Meet 2017
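The "median age of 10 years" claim above can be checked directly from the launch years listed in the tweet (ages measured from 2022):

```python
from statistics import median

# Launch years copied from the tweet above.
launch_years = [1998, 2004, 2005, 2006, 2007, 2008, 2008, 2008,
                2011, 2012, 2013, 2013, 2013, 2014, 2015, 2015,
                2015, 2016, 2017]
ages = [2022 - y for y in launch_years]
print(median(ages))  # → 10
```

With 19 products, the median is the 10th value when sorted, which is Zoom's 2012 launch, i.e. an age of 10 years in 2022.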

2022-11-27 01:09:58 We're pretty good at solving hard problems while sleeping or while doing something else. But for that effect to kick in, you need to have spent a while banging your head at the problem first. It's only payback for prior effort.

2022-11-26 00:39:30 @gusthema Awesome! Thank you.

2022-11-26 00:34:34 @gusthema I didn't see that! What's the PR?

2022-11-26 00:22:13 @lawrencecchen Once you go back, causality branches, so the young you is a different person, with their own life ahead of them. And you're killing that person to replace them...

2022-11-26 00:17:01 What if you could go back in time to when you were X years old but with your current knowledge / memories? Well, that would mean you'd be killing the younger you and taking their place in that timeline. You monster.

2022-11-25 19:41:53 Or in some cases, not paying the bills, I suppose. https://t.co/LL5PP9j9cm

2022-11-25 19:41:33 I'm never leaving. I started tweeting before EM (he started in 2011, I started in 2009), and I'll still be here after he's gone (unless I get kicked out or the site shuts down). This is my page. He's just paying the bills. https://t.co/50uDC8EFuH

2022-11-25 14:57:01 One possibility: add support for StableDiffusion 2.0 in KerasCV https://t.co/LYjoadxrtg

2022-11-25 03:41:58 RT @fchollet: We exist in two worlds at the same time. One is our everyday life. The other is the actual universe around us -- a world of i…

2022-11-24 20:36:37 "sure, people whose stuff i admire like Stephen King, Trent Reznor, or Neil Gaiman are savagely dunking on me, but at least the far-right meme makers will always have my back"

2022-11-24 15:05:50 Happy Thanksgiving to all who celebrate!

2022-11-24 05:13:41 RT @cIubmoss: Was walking outside with my phone unlocked in my hand and accidentally took this picture of an owl https://t.co/1I01asfOir

2022-11-24 03:26:45 RT @ryanwellsr: In this tutorial, you will learn about learning rate schedules and decay using Keras. You’ll learn how to use Keras’ standa…
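The learning-rate decay that the tutorial above covers boils down to a simple formula. Here is a plain-Python sketch of exponential decay of the kind Keras ships as `keras.optimizers.schedules.ExponentialDecay`; the parameter values are illustrative, not from the tutorial:

```python
# Exponential learning-rate decay: lr shrinks by `decay_rate`
# every `decay_steps` training steps. With staircase=True the
# decay happens in discrete jumps instead of continuously.
def exponential_decay(step, initial_lr=0.1, decay_steps=1000,
                      decay_rate=0.9, staircase=False):
    exponent = step / decay_steps
    if staircase:
        exponent = step // decay_steps  # integer division: stepwise decay
    return initial_lr * decay_rate ** exponent

print(exponential_decay(0))     # → 0.1
print(exponential_decay(2000))  # 0.1 * 0.9**2, about 0.081
```

Scheduling the learning rate this way lets training take large steps early and progressively smaller ones as the model converges.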

2022-11-23 22:25:44 Don't know what to do over the long weekend? Enter the Keras community prize -- create OSS notebooks and win $9k in prizes. Open until late December. https://t.co/wV1eOkaC6D

2022-11-23 20:36:21 RT @clhubes: The funniest thing that’s ever happened to me as a parent is once my 2yo was having a full on meltdown and accidentally kicked…

2022-11-23 17:26:31 Many people in tech have so little exposure to philosophy (or the humanities in general) that when they get exposed to old ideas like Plato's cave or the simulation hypothesis, they think it's extremely profound and novel

2022-11-23 03:38:19 I've started a newsletter. Subscribe to stay in touch! https://t.co/b678OACRKh

2022-11-23 03:28:22 RT @dbs_dsml: #AIopinions "If you want to drive change, invest your efforts in each layer of the stack proportionally to its importance. Pe…

2022-11-23 03:08:52 A nice thing about software development is that you're never done learning. There's always something new.

2022-11-23 00:05:03 Perhaps there's something in the water right now, but it seems public displays of sociopathy (up to explicit calls for violence) are getting increasingly common and normalized. The worst people are feeling empowered. Reminds me of late 2016.

2022-11-22 20:41:14 @FProescholdt Yes, this tweet thread

2022-11-22 18:51:05 Announcing the Keras community prize, running from today to December 31st: https://t.co/ebtZVdhPet Any OSS project using (or forking) KerasCV StableDiffusion is eligible. Notebooks, GitHub repos, tutorials, etc.

2022-11-22 17:26:57 Note that economic output is different from economic input. Don't look at funding, which is merely a measure of blind hype. Look at revenue.

2022-11-22 17:26:06 The only reliable way to evaluate the importance of an AI product / advance is to wait 1-2 years after public release and look at its economic impact. Game-changers have immediate, large impact, and drive entire new genres of *profitable* startups. Economic output can't be gamed.

2022-11-22 17:21:44 Product gets hyped based on demos. Gets released. Turns out to have weak generalization power beyond the demos and to fail to live up to expectations. Hype dies down. Rinse and repeat.

2022-11-22 17:18:34 With AI systems, it's a bad idea to use a product demo (= absolute best case scenario) to extrapolate about the median case. The value of AI lies in its ability to generalize, which is entirely impossible to evaluate from a cherrypicked sample.

2022-11-22 14:47:18 RT @oneunderscore__: I talked this morning about an inflection point in this country right now, specifically for reporters: What are you m…

2022-11-22 14:46:30 Some people like to brag about being apolitical -- even unaware of all recent political events. From a selfish perspective, I can see the appeal of dispensing with the collective. But if you zoom out -- it's not something to brag about. Stand for something other than yourself.

2022-11-22 06:28:52 If you have trouble understanding something, maybe you just need a better metaphor.

2022-11-22 01:45:28 RT @drufball: No matter how differentiated your tech, you're dead in the long run if you can't work through ambiguity, learning and iterati…

2022-11-21 23:59:37 Episode 2: https://t.co/TXOxKd2UKp

2022-11-21 20:08:57 @migueldeicaza @AlexBBrown I am on Mastodon at https://t.co/VOSzJ4foka. I'll stay on Twitter too though. Unclear how active I'll be on Mastodon

2022-11-21 17:24:12 RT @greglinden: Good long form article from @fchollet, don't miss it: "If you want to drive change, invest ... People first. Then culture.…

2022-11-21 05:53:36 1,800 subscribers so far. We're still early!

2022-11-21 05:48:57 I just sent out this week's edition of my newsletter. https://t.co/TXOxKd2UKp If you end up liking this post, consider subscribing. It's free, and you can always unsubscribe later.

2022-11-21 04:22:53 @levie But not until he gets rid of 85+% of the staff

2022-11-21 03:01:37 model dot fit() https://t.co/81zaWMfgrN

2022-11-21 02:23:18 Especially right now. You still have so much. Still ahead of what's to come. Hope you can appreciate it. You have tigers, for instance. Orangutans. It won't last.

2022-11-21 02:15:35 If you ever find yourself bored, that's just a signal that you need to go somewhere you haven't been before, do something you haven't done before, meet someone you haven't met before. The world is full of wonder and you won't ever see more than 0.0...01% of it. Can't get bored.

2022-11-21 00:40:11 Reference map for non-US folks. https://t.co/MwbxWwXTkR

2022-11-21 00:33:47 @npthree Yes, the 101 starts (ends?) near Olympia. You can get to LA via the 101 by taking it in the opposite direction to what you're describing. It's not the fastest route by any means, though -- the fastest route is via the I-5.

2022-11-21 00:24:44 The 101 is in so many songs... but for me it will always rhyme with Silicon Valley traffic.

2022-11-21 00:23:38 Today I drove on the 101... near Olympia, WA. Weird feeling to think that I could just keep driving and get to LA without ever having to turn. I mean, theoretically -- if I could drive 20+ hours straight.

2022-11-20 17:06:24 @kylebrussell Geometry diagrams are more akin to symbolic language than to an exact simulation of the system they represent. Ideograms come to mind.

2022-11-20 16:23:39 To be clear, this is not a diss directed at the NYT. I occasionally read the NYT and Le Figaro. They're fine!

2022-11-20 16:20:34 But I can certainly believe that the NYT is "far to the left" of those who call it "far left".

2022-11-20 16:19:31 "Far left" would be something like Jacobin. And even that is very much the bourgeois brand of far left -- it's cosplay. In 2022, the far left is pretty much extinct.

2022-11-20 16:16:49 In pretty much any country, the New York Times would be classified as a center-right paper -- it has about the same ideological color as Le Figaro in France. Pro-establishment, center-right, with a solid chunk of the readership that's upper middle class or downright elite.

2022-11-20 05:53:54 With creative writing, you need to weave threads of thought over a much larger-scale context, and you get zero external feedback. You can't "run" your writing against reality -- nor even your mental models.

2022-11-20 05:52:00 I think programming tends to involve local, small-scale cognitive workspaces, which are easy to spin up even when tired. A sequence of microtasks, with a lot of environmental guidance (your code's actual output).

2022-11-20 05:49:54 I find programming a lot easier than creative writing -- when I'm exhausted I can still code and debug just fine, but I absolutely could not write three coherent paragraphs of interesting content.

2022-11-20 04:23:23 RT @MLStreetTalk: New show with Prof. Julian Togelius @togelius (NYU) and Prof. Ken Stanley @kenneth0stanley we discuss open-endedness, AGI…

2022-11-20 04:20:20 One of my biggest regrets is not putting enough effort into learning to sing. I wish I could sing better songs to my kid

2022-11-20 02:32:38 @pwang My timeline is chronological. On the Android app, the oldest tweets I can see (at the very bottom of the feed) are from 5 hours ago. Never seen the before -- usually it goes back 48 hours without any problem.

2022-11-20 01:45:16 Just so you know... being an online bootlicker for powerful people who don't care whether you live or die is not going to make you rich.

2022-11-20 01:12:35 Wow, that poll was about as suspenseful as a Russian presidential election! Since we got the desired result, the poll definitely embodies the "voice of the people" -- please ignore all previous comments mentioning massive bot participation. Vox Populi, Vox Dei.

2022-11-20 00:38:34 Children are a miracle.

2022-11-19 23:10:07 Some folks mentioned the poll could be a honeypot. Maybe! Given that he mentioned wanting to reduce the reach of "bad" accounts, the goal might be to reduce the reach of everyone who voted "no". Hence why he'd want as many people to vote as possible.

2022-11-19 22:20:38 "Top priority is fixing the bot problem" "Let's start by firing nearly all moderators and disbanding the machine learning team" What was that word again? "Fad baith"?

2022-11-19 22:04:23 As long as the outcome is my desired outcome, my ridiculous form of opinion polling must be fully trustworthy. But if not... Either I'll have to fix the numbers, or I'll declare it's the fault of bots and that we should discard the results.

2022-11-19 22:01:59 This is interesting: 1. I thought we had defeated the bots already? 2. Funny how bots always push things I don't like. It's a good way to know if someone is a bot or not: do I agree with them? No? Then bot. 3. If EM admits that the poll is rigged by bot armies, why trust it? https://t.co/BYyvs1cF7W

2022-11-19 21:47:54 We could replace US elections with Twitter polls. It'd be faster, cheaper, it'd let the whole world participate, &

2022-11-19 21:39:21 @AlainRogister @ZoeSchiffer For the most part, they have stay to keep their immigration status. They don't have much of a choice unfortunately. Then there's probably a handful who are EM fans.

2022-11-19 21:23:52 @ludwig_stumpp Thank you Ludwig

2022-11-19 21:20:53 @kemar74 At least they're no longer able to buy checkmarks right now...

2022-11-19 02:00:05 @ZoeSchiffer He meant that comedy is now legal *at* Twitter, I suppose

2022-11-19 01:54:10 New posts on Sunday evenings for now!

2022-11-19 01:52:53 I've started a newsletter. Subscribe to keep in touch! https://t.co/b678OACRKh

2022-11-18 21:52:45 RT @TensorFlow: Want to learn how to generate your own custom images with stable diffusion in a few lines of Python code? Join the…

2022-11-18 18:40:53 RT @fchollet: There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some…

2022-11-18 17:49:31 Software isn't fire-and-forget, even if you're using a managed cloud (which Twitter isn't -- it's all on-prem). Software rots. It needs upgrades, restarts, patches. It needs constant maintenance. And the more complex the system, the heavier the traffic, the faster the rot.

2022-11-18 17:46:45 (lol -- my last tweet didn't go through -- "something went wrong", said Twitter. Let me retype.)

2022-11-18 17:43:09 Could be issues with moderation (hardly anyone left there). With bots/scammers (it's an adversarial problem: they evolve fast). Security vulnerabilities. Or it could be site reliability issues. That's stochastic. You don't know when it will crop up. But when it does, good luck.

2022-11-18 17:40:39 For the record, the risk created by Twitter being down to 12-13% of its original employee base is not "the site instantly goes dark". It's a robust site. The risk is that Twitter won't be able to address issues that come up going forward.

2022-11-18 11:03:43 I wonder what the breakdown is. How many of the 20 are ML eng? 2? One for the ML infra &

2022-11-18 10:52:19 Twitter previously had 7,500 employees. I'm guessing about 3,000 were engineers. So 20 engineers should be able to handle the same scope of work just fine, as long as they're 150x engineers.

2022-11-18 05:59:07 "I could build Twitter in a weekend" https://t.co/sp6iSc0RIT

2022-11-18 03:21:45 @aryalpranays I was hugely pro Elon until at least 2017. I didn't change my mind: he changed my mind.

2022-11-18 02:23:19 RT @fchollet: Software companies aren't made of code. They're made of processes that produce and maintain code. And the foremost component…

2022-11-18 01:45:51 Thread collecting Twitter employee resignations. Good time to say this: I'm grateful for Twitter. It's a great app, and I've found it immensely valuable over the years. Thank you for your part in building it! https://t.co/q6xUQOr5AY

2022-11-18 00:09:40 Twitter is now a "developing situation", as they say https://t.co/bYCx6beEif

2022-11-17 23:53:49 @AdrianThonig @nicopellerin_io I recall @chipzel was suspended, for instance. Thankfully she's back now.

2022-11-17 23:43:48 @nicopellerin_io For mild criticism of the new management. Multiple accounts (include a few I followed) have already been suspended for making fun of the new proprietor.

2022-11-17 23:36:27 My first post was very short and lightweight -- much more like a Twitter thread than a proper article. If I'm going to write weekly, this might be what that looks like. I have a full-time job and a family, after all. I might also infrequently write longer, more polished essays.

2022-11-17 23:33:27 However, diversification makes a lot of sense in the current situation. I'm actually pretty excited about writing longer-form content on a regular basis! I'm still in the process of figuring out the format I want to adopt, though. I'll keep iterating. https://t.co/VRl314OhI3

2022-11-17 23:31:34 To be clear, I'm not leaving Twitter. I've been on Twitter for over 13 years and I like it here. Besides, I think it's very likely that the site will be under new management within one year. https://t.co/oc8Jz499dL

2022-11-17 23:29:45 There's some uncertainty right now about Twitter. There's some chance the site might break, or that I get kicked out at some point, or that non-subscribers get soft-muted, or that Twitter gets paywalled. If you want to keep in touch: I started a Substack. https://t.co/b678OACRKh https://t.co/USOcfgYXUZ

2022-11-17 21:09:09 A closely related point: health insurance being tied to your employer. Extremely backward.

2022-11-17 20:50:19 The sad part is that the 10 staying are probably on H-1B or have a pending green card application :( Work visas should not be tied to specific employers. https://t.co/hj46D7MAS7

2022-11-17 19:32:58 We've reorganized Keras tutorials by category and subcategory to make them easier to find! https://t.co/eE1hRBF8Gt Next, we'll add tags and a search bar specifically for tutorials.

2022-11-17 18:50:18 @dwarkesh_sp LLMs (and large self-supervised deep learning models in general) are a continuous generalization of database technology. (This makes them potentially far more useful than databases, but also presents entirely new challenges, especially with respect to reliability.)

2022-11-17 16:39:45 The future is going to be weird. And with a little luck, it's going to be good, too.

2022-11-17 02:09:22 @behzadnouri Genuinely, no. One year ago, this was an extremely controversial opinion -- everyone felt the need to pay respect to "Blockchain tech" out of fear of appearing unserious. Today it's far less controversial. In 3 years it will be universally accepted as self-evident.

2022-11-17 01:34:23 Insiders have generally made money (VCs, project founders...) because they were the very top of the pyramid. Nearly every one else has lost money -- and the fraction of losers has still plenty of room to grow, because we're still very far from terminal valuations.

2022-11-17 01:30:09 In a regular zero-sum Ponzi, >

2022-11-17 01:20:11 In crypto, you can be one of two characters: the scammer or the mark. https://t.co/UdXUVxIgg8

2022-11-16 18:37:16 Salvatore, the creator of Redis, is one of the software engineers I really look up to -- a real artist in his craft. Super happy to read this from him. And grateful for Redis :) https://t.co/CvHZPUTiea

2022-11-16 18:33:55 @antirez You're too kind! I'm glad you enjoyed the book :) And thank you for Redis, I was a big user back when I did web development. Beautiful software!

2022-11-16 18:33:40 RT @antirez: I said that @fchollet's book (https://t.co/koT2AgOXkF) is good. However I've to refine my opinion: it is outstanding, one of t…

2022-11-16 17:22:12 New tutorial on https://t.co/m6mT8SaHBD: action identification from electroencephalogram signals -- using a CNN for EEG signal classification. https://t.co/G25BMdkLMG
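A minimal sketch of what such an EEG action classifier can look like in Keras (illustrative only, not the tutorial's code; the shapes -- 64 channels over 160 time steps, 2 classes -- and layer sizes are assumptions of mine):

```python
# Illustrative sketch only -- not the linked tutorial's code. Assumed
# shapes: 64-channel EEG sampled over 160 time steps, 2 action classes.
from tensorflow import keras
from tensorflow.keras import layers

def build_eeg_convnet(n_timesteps=160, n_channels=64, n_classes=2):
    inputs = keras.Input(shape=(n_timesteps, n_channels))
    # 1D convolutions slide over the time axis, mixing all channels at
    # each step -- a common pattern for raw-signal classification.
    x = layers.Conv1D(32, kernel_size=7, activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_eeg_convnet()
```

The model maps a `(batch, time, channels)` signal tensor to per-class probabilities; see the linked tutorial for the full data pipeline.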

2022-11-16 17:07:23 RT @carrigmat: Keras notebooks for protein tasks with @huggingface are up! The same approach that made large language models so successful…

2022-11-16 16:03:50 Any mediocre consulting shop knows how to recruit young folks who don't know any better and make them work 70-hour weeks. It's not a business advantage. You know what's an advantage? Having the best talent on your team. The folks who are now running away because they have options.

2022-11-16 10:44:25 Life tip: if your employer gives you a choice between a hyper toxic and exploitative work environment or severance, you take the severance. I feel sorry for those on visas. This is one of the reasons why visas should not be tied to a specific employer. https://t.co/qlScyAr9iY

2022-11-16 04:11:07 Good managers hire folks smarter than them that tell them what to do. Bad managers fire those.

2022-11-16 00:03:47 RT @RMac18: Elon Musk has been directing subordinates to comb through Twitter's Slack and make lists of people making fun of him or his pla…

2022-12-07 20:11:45 There is little doubt that AI will be developed in the online world first. Physical robots will be a late side effect.

2022-12-07 20:10:58 Culture is people interacting with each other. And today people interact online, so that's where culture has moved. Like every other epochal trend, it all stems from logistics -- physicality just gets in the way.

2022-12-07 19:43:33 RT @TensorFlow: 3⃣ Stable Diffusion with Keras Workshop with Shilpa Kancharla Learn how to generate your own custom images with stabl…

2022-12-07 18:33:53 RT @sambit9238: Just tried it out, very handy for some basic data processing like guessing missing values, outliers etc in google sheet its…

2022-12-07 18:33:50 RT @rseroter: What a wonderful case of making ML useful and approachable to everyone. Even *I* can do this to find missing values in my spr…

2022-12-07 18:33:03 RT @TensorFlow: To make ML accessible beyond ML experts, we’ve released Simple ML for Sheets. #SimpleMLForSheets is an add-on for Google…

2022-12-07 17:33:37 In kids' cartoons, the characters that kids identify with tend to advance the plot either through luck (things happen and they react to them) or determination/courage (I can do it! I can be myself!) but rarely ever through ingenuity and knowledge. Big Hero 6 is an exception.

2022-12-07 16:41:21 RT @greglinden: Good article on experimentation and trying lots of things to learn more. I'd add that not only does the experiment need to…

2022-12-08 20:43:38 @levie This feels like arbitraging the fact that people are still calibrated to perceive such emails as coming from a place of human attention and thoughtfulness. The moment people start to pattern-match them as AI generated, they will prefer the 2-liner (more respectful of their time).

2022-12-08 20:11:04 Related: I can't be the only one who perceives the carefree and mindless vibe of 90s pop as an expression of the "end of history" geopolitical atmosphere of those times

2022-12-08 20:05:24 It's underappreciated to what extent recent cultural evolution has been influenced by globalized supply chains and low interest rates. Culture is downstream of logistics

2022-12-08 19:06:07 The doom loop: you have an automation system that's almost good enough, so you start relying on it -- it's cheaper and more convenient. Then you lose the ability to do things properly. Finally, your automation, which depended on high-quality examples, starts degrading.

2022-12-08 18:30:41 Abilities you don't use atrophy, so if you're going to outsource a category of tasks to someone else or to a computer, make sure they're not backed by abilities that you want to develop in yourself.

2022-12-08 16:10:49 @VitalikButerin @VovaVili @kacodes Yeah, that response from the devs seemed surprisingly user-hostile. "You're just holding it wrong" type stuff. If that pattern is widespread in user code, then it's worth fixing.

2022-12-08 05:10:49 RT @molly0xFFF: all my apes 404ed

2022-12-08 04:48:58 Python is usually an elegant language. But `if __name__ == "__main__"` is one exception.
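For context, the idiom the tweet refers to guards script-only code so it doesn't run on import; a minimal example (the file name and function are hypothetical):

```python
# hello.py -- hypothetical file illustrating the idiom from the tweet.

def greet(name):
    return f"Hello, {name}!"

if __name__ == "__main__":
    # This block runs only via `python hello.py`,
    # not when another module does `import hello`.
    print(greet("world"))
```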

2022-12-09 00:51:28 What's nice about generative deep learning is that in addition to being extremely valuable, it's really fun

2022-12-08 23:18:46 .@antoniogulli and colleagues have a new edition of their Keras &

2022-12-09 02:02:51 I see that Twitter is recommending right below my ML tweets some bangers from the likes of TomFitton, ClownWorld_, EndWokeness, etc. Fun app!

2022-12-09 01:59:19 @MelMitchell1 That's what's fun about LLMs: they're right 70% of the time, just enough so you might start trusting them if you don't know any better, then they're horrifically wrong the remaining 30% :)

2022-03-19 12:55:09 RT @carljackmiller: When we say Kyiv is winning the information war, far too often we only mean information spaces we inhabit. Pulling ap…

2022-03-19 01:22:02 RT @fchollet: Example: consider this very simple convnet, trained on a K80. https://t.co/ljZtdAPXSW - Baseline: 3.10s/epoch (jit_compile=F…

2022-03-18 18:00:29 @dward4 If they want to learn Keras from scratch, I'd simply recommend my book: https://t.co/1luyISfgBY

2022-03-18 16:27:42 RT @Jeande_d: Keras community is really vibrant. There are now over 100 concise and clear code examples that demonstrate the latest deep l…

2022-03-18 00:52:50 RT @yarotrof: Russia’s rapid thrust into Voznesensk was meant to showcase its military prowess. Instead, the Ukrainian town dealt Russian f…

2022-03-18 00:18:26 In fact, literally days before the invasion, plenty of folks (e.g. Ted Cruz, Balaji...) were talking about how the Russian military could outmatch the *US military* -- absolutely batshit insane. Was already obvious at that time that RU mil was shambolic. https://t.co/fax0iYrHxx

2022-03-18 00:13:58 Boom. Russia has no path to victory at this point. On February 24, all media depicted a total Russian victory within 3 days as a foregone conclusion, and were talking about Poland and Baltic states being imminently at risk. How things have changed. https://t.co/ZA7PPeaoKx

2022-03-17 23:39:19 RT @joncoopertweets: Friendly reminder that Mike Pompeo and Donald Trump ordered that Marie Yovanovitch, the US ambassador to Ukraine, to b…

2022-03-17 03:56:36 Guess who's back https://t.co/Sa9GFmfaMr

2022-03-17 03:11:01 RT @JuliaDavisNews: Putin Says Russia Must Undergo a 'Self-Cleansing of Society' to Purge 'Bastards and Traitors.' In his unhinged speech,…

2022-03-17 02:28:11 You do not, under any circumstances, gotta hand it to Putin

2022-03-17 02:25:25 You know the guys delivering the "actually Putin has a point" takes would have been doing the same if Twitter had been around in 1939...

2022-03-17 01:38:38 RT @olgatokariuk: Russia dropped a bomb on a building of a drama theatre in Mariupol, where about a thousand people, including children, we…

2022-03-16 17:21:18 @MarcusKlarqvist Fixed!

2022-03-16 17:14:32 Tip #7: use `keras.utils.plot_model(model)` and `model.summary()` to get succinct visualizations of the contents and structure of your model. https://t.co/ieAUvJkLlS

2022-03-15 19:56:53 Reproduction code: https://t.co/ffQZfd8ciV

2022-03-15 19:47:32 Curious to see what the performance looks like in MXNet/PyTorch if someone wants to try it. (With Jax/Flax it should be the same as the jit_compile=True version since it wraps XLA)

2022-03-15 19:45:20 Example: consider this very simple convnet, trained on a K80. https://t.co/ljZtdAPXSW - Baseline: 3.10s/epoch (jit_compile=False) - Just step fusing: 2.96s (jit_compile=False, steps_per_execution=32) - Just XLA: 2.54s (jit_compile=True) - Both XLA & -21%!

2022-03-15 19:25:21 Tip #6: use `set_random_seed()` to make your workflow deterministic. This will simultaneously seed Python's `random` module, NumPy, and TF/Keras. If you need CUDA op determinism, then also use `tf.config.experimental.enable_op_determinism()`, which comes at a performance cost. https://t.co/DHEDcntchz

2022-03-15 19:17:56 @MattPotma `steps_per_execution` does not change the batch size or otherwise affect batching.

2022-03-15 18:10:32 RT @anastasia_maga: URGENT. Russian troops have taken the staff and patients of #Mariupol Hospital as hostages. Thread 1/5

2022-03-15 17:28:27 RT @LearnOpenCV: Deep Learning with TF & https://t.co/P0ph5QmAgo Here is the second question in our mini-c…

2022-03-15 15:19:25 RT @MarcusKlarqvist: Tip #3.1: Lower precision floats use less memory meaning that you can fit more data on your GPU (bigger batches). Lar…

2022-03-15 03:29:25 @Ani_Offl Yes the two are orthogonal.

2022-03-15 03:25:15 The case of the man who had 90% of his brain missing also comes to mind. https://t.co/RUQ44murPl

2022-03-15 03:22:57 Loss of temporal lobe in early stages of development due to perinatal stroke -- no cognitive deficit. Striking demonstration that cognitive ability is of course not "proportional" to the number of neurons (or modules) in the brain, as often claimed by confused DL folks. https://t.co/nSMR14CqF9

2022-03-15 01:16:29 RT @alexadobrien: Russian forces are killing civilians and looting stores and homes across occupied parts of southern Ukraine, residents sa…

2022-03-15 00:34:25 RT @HeerJeet: Among other things, this is a huge lesson on the importance of vaccination. Get vaxxed and boosted folks.

2022-03-14 21:45:16 @awsaf49 It's a bit of a particular case: TPUs *only* run XLA, so if you run on TPU you're already XLA-compiling independently of the jit_compile option.

2022-03-14 20:51:24 Curious: if you've used the XLA jit_compile option in Keras, what speedup did you observe compared to the original model? And compared to the same model in a different framework? https://t.co/W2U5PTl0wb

2022-03-14 17:09:53 @andrewljohnson This is in v2.8.

2022-03-14 16:48:57 Tip #5: XLA is a linear algebra compiler capable of automatically fusing & You can compile your Keras training loop to XLA via `jit_compile=True`. https://t.co/XlH2d7Nvew

2022-03-14 00:40:07 RT @RichardHaass: Reports that Putin asking Xi for military help. To do so means China would open itself to substantial sanctions and make…

2022-03-13 21:13:07 @V_Ravi_Chandra @awsaf49 Put data preprocessing layers in the https://t.co/oiMJsLvdrt pipeline if you want to utilize your CPU. They will be run asynchronously on CPU.

2022-03-13 20:20:52 @awsaf49 Pick the value that maximizes device utilization for your model and device. You'll find out in practice.
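The Keras tips in the tweets above (Tip #7 on model introspection, Tip #6 on seeding, Tip #5 on XLA) can be combined in a few lines. This is an illustrative sketch with a made-up toy model, not the code from the linked screenshots:

```python
from tensorflow import keras

# Tip #6: seed Python's `random`, NumPy, and TF/Keras in one call.
keras.utils.set_random_seed(42)
# Optional CUDA op determinism (Tip #6, at a performance cost):
# tf.config.experimental.enable_op_determinism()

# Toy model -- a stand-in for whatever you are actually training.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])

# Tip #7: succinct views of the model's contents and structure.
model.summary()
# keras.utils.plot_model(model)  # needs pydot + graphviz installed

# Tip #5: compile the Keras training loop to XLA.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              jit_compile=True)
```

As noted above, `jit_compile` landed in v2.8, and on TPU you are already XLA-compiling regardless of the flag.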
2022-03-13 19:23:05 @kelvindotchan The training schedule stays the same (unlike what happens if you increase the batch size, in which case you have to increase the learning rate)

2022-03-13 18:17:49 Tip #4: "step fusing" consists of running multiple steps of model training back-to-back on an accelerator device (a GPU or TPU) without syncing with the host CPU RAM. It's a great way to get to near-100% device utilization without having to increase your batch size. https://t.co/UrSZuLLz8l

2022-03-13 16:25:26 RT @javisamo: Great to see a warning on misalignment of loss functions with actual goals in @fchollet's book. This (surprisingly) one of t…

2022-03-12 22:35:23 The ability to attract and retain top AI talent will define the long-term trajectory of current tech companies. Underappreciated fact. https://t.co/wsgeS9Ph9u

2022-03-12 19:07:38 I love taking leftovers and cooking something brand new out of them.

2022-03-12 17:51:13 Tip #3: mixed precision is a way to train models significantly faster (~2x) on modern GPUs at virtually no loss of accuracy. It consists of using a lower precision (float16) for forward pass computation while keeping model state in float32. You can enable it in Keras in 1 line. https://t.co/mx62CeEirf

2022-03-12 16:22:57 @tornike_o Honestly, the fact that folks like you invariably turn out to be PT users is a great reason to use Keras instead. Keras has a better community: nicer people, no hate, no flame wars, stronger community support, devs that don't send hate mail to other devs https://t.co/guat1wvtQm

2022-01-25 06:06:36 Wow you guys sure like corny jokes

2022-01-25 06:04:21 @ChrSzegedy It's only been 9 months. This isn't even my final farce

2022-01-24 20:50:20 PSA: Unicode is short for "universal code". Just like Unicorn is short for "universal corn".

2022-01-24 04:24:51 Because task-specific skill and general intelligence are orthogonal to each other, disappointment always follows.
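Tip #4 (step fusing) and Tip #3 (mixed precision) from the tweets above can be sketched together; the toy model is an assumption of mine, not the code behind the linked screenshots:

```python
from tensorflow import keras

# Tip #3: one line enables mixed precision globally
# (float16 compute, float32 variables; pays off on modern GPUs).
keras.mixed_precision.set_global_policy("mixed_float16")

# Toy model; the output layer is kept in float32, the usual
# recommendation for softmax outputs under mixed precision.
model = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax", dtype="float32"),
])

# Tip #4: "step fusing" -- run 32 training steps back-to-back on the
# device per host sync. Batch size and schedule are unchanged.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              steps_per_execution=32)
```

As the replies above note, `steps_per_execution` does not change batching, only how often the device syncs with the host.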
2022-01-24 04:23:29 The oldest marketing play in AI is to build an impressive special-purpose system (e.g. play this one game) then leverage the illusion that this task-specific capability has general relevance in order to raise money, sell something else, negotiate an acquisition, etc.

2022-01-23 21:25:16 It's likely democracy won't survive Facebook, but it's possible that millions of people won't survive Facebook either.

2022-01-23 21:21:50 One hypothetical scifi future: the Covid antivax movement morphs into a general antivax movement, gains unconditional backing by a major political party, becomes the law of the land, and mortality keeps climbing as we see the return of deadly diseases such as Polio, Tetanus, etc.

2022-01-23 06:13:06 To be romantic isn't about flowers and chocolates and pink ribbons. It is the uncompromising pursuit of an absolute ideal. And an appreciation for the beauty to be found in the hope, despair, delight, and suffering felt along the way.

2022-01-22 21:16:26 @AdamSinger The 2013 3D-printing bubble was something... back then 3D-printing was about to become the next big wave of consumer tech. Or so we were told.

2022-01-22 04:25:29 Sleep is "install update and restart" for humans

2022-01-22 03:27:23 RT @juantomas: @TensorFlow When people ask me about why #keras is my all time favorite #AI framework, the answer is: is so simple and beati…

2022-01-22 03:26:53 RT @TensorFlow: Build Keras-native input processing pipelines with the #Keras preprocessing layers API. Learn how to leverage this API → h…

2022-01-22 02:21:37 RT @woodruffbets: EXCLU: We obtained the draft Trump executive order that would have seized the voting machines and named a special counsel…

2022-01-21 20:16:36 RT @oneunderscore__: NEW from me: Big Ivermectin is headed to DC to march on Sunday. They've sold out hotels, shared directions on forging…

2022-01-21 20:16:18 Feedback loops are the force that shapes systems and organizations over time. To understand a complex system, look for its feedback loops. To steer a complex system, initialize a feedback loop.

2022-01-21 17:25:47 RT @aureliengeron: Look what I just received! Awesome book by @fchollet, he's put a huge amount of work into this 2nd edition. Excellent…

2022-01-21 17:25:42 @aureliengeron Coming from you Aurélien, I'm honored by the compliment! Thank you for the kind words, and I hope you enjoy reading through it!

2022-01-21 02:10:24 @mark_dow So when does Michael get liquidated

2022-01-21 00:53:27 This is true for cognitive abstraction as well. Intelligence is about generalizing to future unknowns, and for this reason it cannot be achieved purely by compressing past experience. To generalize well, you need to store far more information than what you have needed so far.

2022-01-21 00:51:22 And part, crucially, is that maintainability and extensibility require being ready for future needs that you simply cannot predict. To generalize to future unknowns, your code needs to be more general than its past and present scope of operation.

2022-01-21 00:48:28 Part of the reason is that human intelligibility requires verbosity (you see this especially in unit tests, where DAMP is better than DRY). Part is that compression focuses on optimizing syntax (the medium) rather than mental models (the message, what you actually care about).

2022-01-21 00:44:21 There is some overlap between compression and good abstraction (abstraction will significantly shorten and regularize your code compared to a naive approach), but the two often run opposite of each other. Good abstraction involves some level of verbosity and duplication.

2022-01-21 00:07:03 Compression, i.e. minimizing redundancy, is a poor heuristic to follow to generate good programming abstractions. Good abstractions don't make your code shorter, they make it easy to grasp (matching intuitive mental models), easy to read, easy to maintain and extend over time.
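A toy illustration of the compression-vs-abstraction thread above (an example of my own devising, not from the thread): both functions compute the same thing, and the "compressed" one is shorter, but the verbose one matches intuitive mental models and is easier to extend.

```python
# Compressed: minimal redundancy, opaque intent.
def f(u):
    return [x for x in u if x.get("active") and not x.get("banned")]

# Abstracted: longer, slightly redundant, but each piece names a
# concept the reader already has, and can evolve independently.
def is_active(user):
    return bool(user.get("active"))

def is_banned(user):
    return bool(user.get("banned"))

def visible_users(users):
    return [user for user in users
            if is_active(user) and not is_banned(user)]
```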
2022-01-20 20:44:53 The best tech products feel *simple* and *real*. Keep it simple, keep it real.

2022-01-20 18:56:24 When you say, "I'm going to build X", remember that X is actually your starting point, not your destination. You will start with X in mind, then you will keep iterating, and you will end up with a successful Y. So don't be too anxious about perfectly nailing the initial concept.

2022-01-20 17:12:03 New tutorial on https://t.co/m6mT8SrKDD: implementing the TabTransformer architecture for structured data classification. https://t.co/FbvvCFL4fi TabTransformer significantly outperforms dense NNs for tabular data, and matches the performance of tree-based ensembles.

2022-01-20 15:57:36 RT @beingebru: It's a book that deserves all the praise. I couldn't put it down for three days. keep working. @fchollet #deeplearning #AI h…

2022-01-20 15:57:19 RT @svpino: Do yourself a favor. https://t.co/Roj10XzNLc

2022-01-20 06:12:45 GitHub is a great product. The UX (code reviews, search, issues, etc) has kept getting steadily better over the years.

2022-01-20 06:07:47 The extent to which some folks in deep learning research waste entire datacenter-months of computational resources to produce nothing but hot air is just so sad. It's like watching a macaque smear expensive oil paint on thousands of pristine linen canvases.

2022-01-20 02:00:34 RT @TheLancet: NEW—An estimated 1.2 million people died in 2019 from antibiotic-resistant bacterial infections, more deaths than HIV/AIDS o…

2022-01-20 01:24:49 Technology is only valuable to the extent that it solves a problem that people have. Obfuscating what it actually does via a blitz of buzzwords only sends a negative signal. Just tell me what problem it solves and how it changes the game compared to previous solutions.

2022-01-19 14:03:14 The reason we're making so little progress on generality in artificial intelligence may be because we have such a poor understanding of the metaproblem.
2022-01-19 14:01:13 If you can't seem to be able to solve a hard problem, then you're facing a metaproblem. You first need to solve the problem of how to search for a solution. Different problems call for different approaches.

2022-01-19 13:41:36 @NicolasBeuchat What a compliment! Thanks for the kind words, and I'm glad you enjoyed the book! Good luck with your projects!

2022-01-19 05:58:37 "A city named Orleans already exists. Save as copy?" ->

2022-01-19 05:57:50 Me starting a new city City Editor -> All done

2022-01-19 02:46:39 "New City" is perhaps the laziest name you could possibly give to your new settlement, yet it is so common -- Villeneuve, Neapolis, etc.

2022-01-18 03:47:20 The first half of the new Matrix movie was quite interesting, actually. Very different product compared to the original movies, but enjoyable in its own right. The second half was somewhat disappointing.

2022-01-18 03:45:16 "Matrix multiplication" is when Matrix sequels start sprouting up one after the other

2022-01-17 23:37:41 Nice work! An implementation of Upside-Down Reinforcement Learning. By @edersantana https://t.co/wYYwFr2Sib

2022-01-17 22:12:23 New code example on https://t.co/m6mT8SrKDD: an implementation of "ViViT: A Video Vision Transformer" (https://t.co/c8FPPDAo1q) Created by @ayushthakur0 + @ariG23498 Super clean and readable walkthrough! https://t.co/xWMSRJ3MBs

2022-01-17 14:01:34 It couldn't be a game where you control a character's movements in real time, as that would be entirely impractical. Probably a narrative-centric game where you take action once in a while.

2022-01-17 13:59:25 1st-person game: you see the character you control through their own perspective. I am the character. 3rd-person game: you see your character through an external perspective. They are the character. So what's a 2nd-person game?
2022-01-12 18:24:34 New code walkthrough on https://t.co/m6mT8SrKDD: training Vision Transformers (ViT) on small datasets https://t.co/yHh5h0uFAl Created by @ariG23498

2022-01-12 16:33:59 RT @bhutanisanyam1: To encourage more people to spend time playing with Keras: Announcing #27DaysOfKeras! Spend 15 min - 24 h/day pla…

2022-01-12 01:32:58 @trylks Chat and email solve a different problem: it's peer to peer communication not content distribution. I can't tweet by email

2022-01-12 01:18:17 E.g. syndication protocols did not keep up with the rise of microblogging. Having a single entity able to modify the protocol at will enables much faster product iteration speed and will always win out in the end. And that's even before we get into data collection network effects

2022-01-12 01:14:38 You could imagine that we might move to open syndication protocols, like RSS (adding blockchain to that would serve no purpose and would only add a new layer of impracticality), but in practice such protocols are inertia-heavy and unable to move at the speed of product needs.

2022-01-12 01:11:53 The bar for running your own compute infra is kind of low -- it "just" needs to work. The bar for running your own distribution is much higher: you're going to need not only reach, but also a top notch recommendation algorithm, which requires both a large dataset and good AI.

2022-01-12 01:09:07 Much like you probably don't want to be in charge of your own servers (hence why cloud services are a huge success), you probably don't want to be in charge of your own distribution network. Distribution is an even harder problem, and specialized providers solve it far better. https://t.co/L3UXlNXMpl

2022-01-12 00:25:37 I'm feeling pretty good about having no idea what a "wordle" is

2022-01-11 18:31:55 Heraclitus, CS PhD https://t.co/tEUfGFshbV

2022-01-05 19:09:45 @ecsquendor Hope you enjoy the read! :)

2022-01-05 17:11:40 RT @AdamSinger: New post sharing some history on music curation from my perspective doing this 20 years in my free time now, and as promise…

2022-01-05 04:56:37 @matvelloso I really want this to not be a plain rug pull and to get enough traction to actually move to the execution phase, because I want to watch the Netflix documentary about the subsequent disaster

2022-01-05 01:19:59 @giant_hornet It happened last September. The purpose was to make it easier to contribute to the Keras codebase by isolating it and no longer requiring building all of TF when making a change in Keras.

2022-01-04 18:12:55 RT @Jeande_d: Keras Processing Layers were my favorite. The fact that you can do normal processing things like data augmentation or usual…

2022-01-04 17:17:44 @soumikRakshit96 Thank you for being part of the community and for your great contributions!

2022-01-04 17:12:40 @gusthema Please post anyway!

2022-01-04 17:00:32 RT @ankur310794: One &

2022-01-04 16:42:06 Fourth: https://t.co/m6mT8SrKDD tutorials! In 2021, 76 new high-quality code walkthroughs were added to https://t.co/m6mT8SrKDD, bringing our total to 131. All of them were created by Keras community members. Thank you for the invaluable contributions! https://t.co/QFl5mdzgfN

2022-01-04 16:39:49 Third: user satisfaction! Keras scored an average satisfaction rating of 4.3 out of 5 in our yearly user survey (N=424), up slightly from 2020. It won't be easy to go up from there, but we'll do our best to keep improving and keep exceeding your expectations :)

2022-01-04 16:37:42 It has also been a big year for TensorFlow/Keras ecosystem packages. 2021 has seen the launch of: TF Similarity https://t.co/MMgOXBfEK6 TF Decision Forests https://t.co/ecfatKo9Y8 TF-Agents Bandits https://t.co/cGH8PNPmJF TF Graph Neural Nets https://t.co/zTvdQdZUYV

2022-01-04 16:35:55 Second: launches! 2021 saw TF 2.5, 2.6, and 2.7. We launched Keras Preprocessing Layers, on-device training in TF Lite, ExtensionTypes, new tools for performance profiling and responsible AI, and a completely revamped debugging experience – among many other new features.

2022-01-04 16:34:40 This growth is also reflected in developer surveys, like the yearly ML developer survey run by Kaggle (N=25,973) and the yearly global developer survey run by StackOverflow (N=59,921). Today over 16% of *all developers in the world* use TensorFlow – a big increase from 2020! https://t.co/9O5imQmVUi

2022-01-04 16:32:55 First of all: growth! The TensorFlow user base has recorded its 6th consecutive year of growth, as measured in downloads, usage in Kaggle & TensorFlow remains the #1 deep learning framework by a large margin – for the 6th year in a row. https://t.co/iastbnBFc7

2022-01-04 16:31:11 2021 has been a big year for Keras & As we look forward to what we will achieve in 2022, I'd like to share some highlights from the past year.

2022-01-03 20:37:01 RT @gusthema: https://t.co/11zj7n6PvB is such a great resource to learn! And the community posting new tutorials is incredible!

2022-01-03 19:22:10 RT @soumikRakshit96: @fchollet Attaching some sample results @RisingSayak and I have been able to generate. Also, if you found this exampl…

2022-01-03 18:57:06 @cjameyson Cute doggo :) enjoy the book!

2022-01-03 18:37:34 Created by @soumikRakshit96 & A great read to start the year :)

2022-01-03 18:35:53 New https://t.co/m6mT8SrKDD code walkthrough: generate images from "cue" segmentation masks using GauGAN. https://t.co/B1suWvE0sO https://t.co/9KlDfzsx1g

2022-01-02 21:06:02 All my apes gone...

2022-01-02 18:25:48 @mark_dow Wanting to be part of a "community" of "rebels" is also an important secondary motivation for those who buy crypto

2022-01-02 02:31:17 New track just dropped https://t.co/Cbv1CIh3S6 Happy new year, everyone!

2022-01-01 12:56:10 @BharatNishant @ManningBooks can you answer this?
2022-01-01 03:53:13 RT @ovodibie1: the limitations of deep learning and the bridge to Artificial General Intelligence. This book--Chp 14 in particular-- is one…

2022-01-01 03:53:10 RT @ovodibie1: I completed @fchollet's Deep Learning in Python 2nd Ed a couple days ago and it is even more impressive that the 1st Edition…

2022-01-01 03:08:04 I think I'm pretty close to understanding how music works. *Why* music works.

2022-01-01 00:36:15 @mat_kelcey Thanks Mat! Just Logic Pro and a MIDI keyboard

2022-01-01 00:05:50 I have no idea what I'm doing

2022-01-01 00:05:23 I spent the afternoon making a new track. Inspired by the snowy weather where I am now. Ambient. https://t.co/kgErxIroO7

2021-12-31 19:26:50 I wish everyone a happy 2022, full of love, kindness, learning, inspiration, discovery, and creation! And remember these are difficult times for many, so be good to each other.

2021-12-31 12:07:39 @Imengby Hope the book will be useful :) Enjoy!

2021-12-31 12:07:21 RT @Imengby: My copy of deep learning with python (2nd edition) by @fchollet, is here. Deep learning is a great fit to #Network operations…

2021-12-30 22:53:21 The neat thing about software engineering is that it's a field where generalists can thrive. Logic &

2021-12-30 21:54:09 @ahmadchalhoub99 You can start with the second edition directly. Thanks for the kind words :)

2021-12-30 21:12:52 @cjIsALock Enjoy!

2021-12-30 20:55:29 One thing FB and its employees appear to do a lot: covert negative marketing, i.e. plant disinformation about stories they don't like or about competing products. https://t.co/29TAWRmmBf

2021-12-30 18:20:00 Culture flows from the top. Developers who are absolute trolls end up with toxic user communities. The only ones who seem not to be able to perceive that toxicity are the trolls themselves.
2021-12-30 18:18:35 If you develop a framework and you spend your time "anonymously" attacking your primary competitor via fleets of sockpuppet Reddit accounts, hate emails, and even GitHub issues, then you're a bad person.

2021-12-30 18:07:34 My brain seems to have a really low frame rate today -- presumably sleep deprivation

2021-12-30 16:53:35 Thread that brilliantly captures why I don't like OKRs. I want to do great work and create something I can be proud of. Numbers are never what really matters. https://t.co/YIKOkVK4e2

2021-12-29 08:58:28 RT @whydoyouaskwhat: @fchollet this is in rural india... extreme internet penetration coupled with almost no understanding of underlying te…

2021-12-29 08:48:07 @trylks It was a technology circa 2010-2013 (I remember reading the Bitcoin white paper in 2011, before eventually buying BTC in 2013). 2016 and later it has been a speculative bubble with zero substance.

2021-12-29 08:30:32 RT @canolcer: @fchollet Berlin Neukölln is also full with them https://t.co/UdWinasjnR

2021-12-29 08:30:30 RT @d2smond: @fchollet same here in australia

2021-12-29 08:29:49 @trylks Deep learning is a technology that enables those who use it to build better products (like Google Photos). It creates value. It's not an "asset" whose price is determined by how many people speculate on it.

2021-12-29 07:58:55 In 2017, when Bitcoin started showing up regularly on CNBC and the like, I thought, well, we must be close to market saturation. Then when I saw ads for crypto exchanges on taxis in Rome in 2018, ok, NOW we must be close to market saturation. In 2021 it's every last country...

2021-12-29 07:46:44 Every human with an internet connection and $50 in savings has already been bombarded over the past 3 years with ads for crypto exchanges and numbers-go-up coins. But don't worry, you're still early, it's going 1000x from here and you're all going to be millionaires, as planned.
2021-12-29 07:41:59 European retail investors can afford to put their savings in dog coin pyramid schemes, I suppose. Perhaps more concerning is that a lot of crypto trading is now coming from places with GDP per capita <

2021-12-29 07:38:12 Someone paid a lot of money to plaster Europe with Floki ads. https://t.co/AAG0YhXeJZ

2021-12-28 16:13:00 @Jeande_d Absolutely, the goal is to make it really easy to implement any sort of industry-standard CV or NLP pipeline. We've got big plans for the project in 2022!

2021-12-28 11:00:40 RT @stevejarrett: Completely agree. I think the primary competitive advantages of any company are their speed of iteration and the hiring a…

2021-12-28 05:44:28 RT @eliotpeper: More refinement cycles is what distinguishes Pixar's story development process. Here's a breakdown of how they developed t…

2021-12-27 18:30:38 If I had to boil down the Keras API philosophy to four words, that would be it: "try new things faster".

2021-12-27 18:27:56 If you think you've had some big insight, your first reflex shouldn't be "where can I find/generate data to confirm this", it should be "where can I find/generate data to prove myself wrong". The faster you figure out why you're wrong, the faster you get to the next step.

2021-12-27 18:26:31 The reason why it's really bad to fall in love with your own ideas is that it makes you waste a huge amount of time in the end. If you care too much about the state of your ideas at time t, that actively reduces your iteration speed.

2021-12-27 18:23:21 Anything that directly reduces your iteration time improves your output. That can mean: - Getting better at programming - Using a framework that makes it easier to experiment, that reduces cognitive load - Adopting a self-adversarial mindset to invalidate your bad ideas faster

2021-12-27 18:20:30 The more refinement cycles your ideas go through, the better they become.
And the faster you try new things, the more refinement cycles your ideas go through in a given timespan. So to get better ideas, you simply need to *try new things faster*.

2021-12-27 12:33:53 I got a list. It goes: dense, batch norm, relu, and rezy.

2021-12-27 12:31:35 The canonical answer is, before the residual. But make sure the residual comes from a layer that's similarly normalized, to avoid mixing scales.

2021-12-21 05:26:31 Distribute power, not hash functions. https://t.co/UenKhsRfUn

2021-12-20 21:02:22 The great myth of our time is that making things more complicated represents progress. In reality, simpler is better. Perhaps we should try to remove chips and routers from appliances, not add them in.

2021-12-20 11:11:35 The purpose of technology is to solve problems that people have. To help people. Technology is never an end in itself, no matter how "cool" the tech may look.

2021-12-19 13:27:51 @mohdjawadi @paul_rietschka You should really check out https://t.co/m6mT8Sa9M5 tutorials for some concrete examples

2021-12-19 13:27:19 @mohdjawadi @paul_rietschka This "conventional wisdom" is pure disinfo. It is *when* you go into advanced usage that you realize the power of using Keras. The fact that Keras makes standard things easy isn't very impressive, it's the fact that it makes hard things easy that makes it worth using.

2021-12-19 08:14:49 It's impossible to get someone to realize that the race horse they're looking at is actually a donkey if they believe they're about to make some money.

2021-12-19 08:12:29 It's unsettling how, no matter how obvious the scam, it seems to find an endless supply of marks. Now I'm starting to understand why email advance-fee scams have been around for so long.
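The layer-ordering replies above ("dense, batch norm, relu" and normalization "before the residual") can be read as the following Keras sketch. This is my illustrative interpretation, not code from the thread; the layer sizes are arbitrary.

```python
# A sketch of the ordering discussed above: Dense -> BatchNorm -> ReLU,
# with normalization applied *before* the residual addition, and the
# residual taken from a branch normalized the same way (to avoid mixing scales).
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32,))
x = layers.Dense(64)(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
residual = x  # already normalized, so scales match at the merge point

y = layers.Dense(64)(x)
y = layers.BatchNormalization()(y)  # normalize before adding the residual
y = layers.Add()([y, residual])
y = layers.Activation("relu")(y)

model = keras.Model(inputs, y)
```
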
2021-12-18 18:30:02 Deep learning is really, really similar to cooking

2021-12-18 18:29:24 In particular, if you have residual connections, you need to apply instance normalization again after adding the residual to avoid mixing scales

2021-12-18 18:28:04 Side note: I've been finding that instance normalization can often work as well or better than batch normalization or layer normalization, but you can't use it as a drop-in replacement. You have to adjust other things in your network

2021-12-18 17:55:41 Lots of moments like "oh yeah, I can do it this way, so much simpler". Things just click

2021-12-18 17:55:08 I've been working on a new research project these past few days. I have to say: it's far more enjoyable to use Keras to implement complex, highly unusual ideas than to do standard workflows. Implementing basic stuff is boring, but implementing unusual stuff feels fun &

2021-12-18 17:13:05 @jjvincent This is inspiring -- what if we created some kind of database technology that could, in fact, lie to you. You'd have to trick it into returning correct answers, like some sort of digital sphinx

2021-12-18 10:48:44 And if your main motivation for moving into a new domain is "it's an opportunity to be early and claim new land for myself", then what you're actually doing is scalping with extra steps. Go solve problems you care about instead. Contribute value, not comments that say "first!".

2021-12-18 10:44:40 Another way to state it: if your only advantage is "I was here first and I already have users", then you don't have an advantage at all and a later-mover will eat your lunch.

2021-12-18 10:40:44 Anyway, go build a search engine or a social network or an online payment app, or solve whatever problem you actually care about. If you fail, it won't be because you were late.

2021-12-18 10:38:17 We forget all the early movers that didn't make it or made it too small to become household names. DeviantArt had all the features of modern social networks in 2002.
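The instance-normalization side note above can be illustrated with a minimal NumPy sketch (my own toy example, not code from the thread): adding a residual reintroduces scale and shift, so instance normalization has to be applied again after the add.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each (sample, channel) slice over the spatial axes (H, W).
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8, 16))             # NHWC feature map
residual = rng.normal(loc=3.0, size=x.shape)   # un-normalized residual branch

merged = instance_norm(x) + residual  # the add reintroduces scale/shift (mean ~3 here)
fixed = instance_norm(merged)         # ...so apply instance norm again after the add
```
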
And we forgot that "early" is a spectrum: none of the successful early movers were 1st. Google was late to search.

2021-12-18 10:33:59 The social web was big in the mid-2000s. But TikTok proceeded to launch in... 2017. It wasn't even a new idea: it mimicked Vine, from 2013. It just did it better.

2021-12-18 10:30:20 The success of early movers in previous waves of tech booms has convinced people that in order to make it, they need to find a new frontier and be early. Nothing could be further from the truth. There's never a bad time to start producing value for others and get rewarded for it.

2021-12-17 18:28:25 RT @abhi1thakur: Just arrived! Deep Learning with Python 2nd Edition by @fchollet. My holiday gift to myself :D https://t.co/qzzFnUfhN8

2021-12-17 18:28:22 @abhi1thakur Nice! Hope you enjoy reading through it!

2021-12-17 13:49:07 @profdvp Usually I'm completely unsurprised by who turns out to go into crypto or NFTs (the whole scene is a kind of personality test), but I didn't expect this one...

2021-12-17 12:23:16 To learn more, you need to study less.

2021-12-17 12:13:23 @PottersLesley If you're just looking to gain practical coding experience, try Kaggle competitions and check out code examples on topics that interest you at https://t.co/QFl5mdQREn

2021-12-17 11:58:47 The more content is available in a field, the more important it is to have some kind of mentor or guide to help focus your attention on what actually matters. I imagine it would be pretty challenging to get started in AI today

2021-12-16 17:44:32 @FrankWilliamsb4 @cabel My understanding of web3 comes from the tweets, blog posts and YT vids of its proponents. And my understanding is that 1) on the technical level, it doesn't exist, 2) as for the vision, it's about sprinkling the web with toll booths owned by web3 founders/VCs to extract new rents

2021-12-16 17:26:23 @cabel The crypto scene functions as a personality test.

2021-12-16 17:25:04 @cabel So perfect.
For me, with BTC, it's this comic that really stuck with me. It expresses the bitcoiner dream so concisely: get rewarded with unbelievable wealth and power for owning a title, feudalism-style. Don't bother creating any kind of value for others. https://t.co/gWvpYg8Xmz

2021-12-16 16:57:45 New paper walkthrough on https://t.co/m6mT8Sa9M5: an implementation of the Vision Transformer TokenLearner from "Adaptive Space-Time Tokenization for Videos" by Ryoo et al. (https://t.co/VdImYBdzK2). Created by @ariG23498 and @RisingSayak. https://t.co/EehRhj3TWe

2021-12-16 00:10:40 Often, organizations have thoughts and beliefs of their own. Collective beliefs aren't just beliefs held individually by the members of a group. Beliefs can be encouraged, reinforced by social structures. They may even be born purely from the specifics of a social organization.

2021-12-15 23:28:17 Young adult fiction in general is often not very ambitious. Imagine a world of fantasy where *anything* is possible... oh well it's just a small variant of our everyday world

2021-12-15 23:25:12 Real, contemporary technology is so much more impressive (not to mention useful) than any magic from the Harry Potter books

2021-12-15 22:04:28 @svpino There are many cases where your sample weights don't represent class weights -- for instance, for problems that aren't classification problems. Here's an example: the weights are just part of the dataset https://t.co/oCBubfnD26 -- we know in advance how important an event is.

2021-12-15 13:58:42 These are some of the problems that "society" (the set of solutions/systems that the "self-sovereign" folks explicitly reject) tries to solve. Justice system, state monopoly on violence, banking system with theft/fraud controls, etc.

2021-12-15 13:56:40 Most people don't seem to get this tweet.
The point is simply that "self-sovereignty" via crypto implies that you're going to have to ensure your own OpSec and your own physical security, because you're always one irreversible transaction away from ruin. It's a naive idea.

2021-12-15 10:50:19 Likewise if you're writing custom losses. That's it -- the most important part is, remember to use the `weighted_metrics` argument in `compile()` if you want to apply sample weighting to your inference metrics.

2021-12-15 10:48:34 If you're writing custom metrics, you can leverage sample weighting by using the `sample_weight` argument in `update_state()`. Here is the AMS metric from the 2014 Kaggle Higgs Boson competition. https://t.co/E3Jna01e1Q

2021-12-15 10:45:19 You can tell Keras to sample-weight inference metrics via the `weighted_metrics` argument in `compile()`. These metrics will receive the `sample_weight` values at inference time. https://t.co/kOp354Vlgs

2021-12-15 10:42:19 However, in some situations it's still useful to apply sample weighting to your inference metrics during evaluation (e.g. if you want your evaluation to take into account the importance of getting different events right).

2021-12-15 10:40:17 At inference time (e.g. in `evaluate()`), usually such weights aren't available (because, in a realistic setting, weights are a type of label) and so they get ignored by your metrics. You will see unweighted metrics even if you pass `sample_weight`.

2021-12-15 10:38:49 If you use TF Datasets, you can also simply pass a dataset that yields (inputs, targets, sample_weight). https://t.co/jlySs2Ck1N

2021-12-15 10:38:48 Keras tweetorial: weighted losses & `fit()` and `evaluate()` provide a `sample_weight` argument, where you can pass a "weight score" (typically between 0 and 1) that estimates how important the sample is.
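The sample-weighting thread above can be sketched as follows. The toy model and data are my own illustrative assumptions; the APIs used (`sample_weight` in `fit()`/`evaluate()`/`update_state()`, `weighted_metrics` in `compile()`) are the ones named in the thread.

```python
import numpy as np
from tensorflow import keras

# Metric-level sample weighting: `update_state()` accepts `sample_weight`.
y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.2, 0.3])   # thresholded: correct, wrong, correct, correct
weights = np.array([1.0, 1.0, 0.2, 0.2])  # how much each sample matters

metric = keras.metrics.BinaryAccuracy()
metric.update_state(y_true, y_pred, sample_weight=weights)
weighted_acc = float(metric.result())
# Weighted accuracy = (1*1.0 + 0*1.0 + 1*0.2 + 1*0.2) / (1.0+1.0+0.2+0.2) = 1.4/2.4

# Model-level: `weighted_metrics` in compile() makes fit()/evaluate() apply
# `sample_weight` to the metric, not just to the loss.
model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd",
              loss="binary_crossentropy",
              weighted_metrics=[keras.metrics.BinaryAccuracy()])
x = np.random.rand(4, 3)
model.fit(x, y_true, sample_weight=weights, epochs=1, verbose=0)
model.evaluate(x, y_true, sample_weight=weights, verbose=0)
```
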
Let's take a look https://t.co/Bie24A65UP

2021-12-15 09:51:40 In crypto, everyone is self-sovereign until they lose their password or someone shows up with a $5 wrench

2021-12-15 09:12:52 Now that I do have an edge-to-edge phone screen, I can confirm this is a major issue. I practically cannot use it sideways. Usability should take priority over pretty design ideas. https://t.co/sFdFohG3C9

2021-12-14 22:28:44 Looking for vector classification datasets (2-5 classes). What are a few interesting (50+ features), large-scale (50k+ instances) datasets? Ideally not image or audio data, just vectors of various features.

2021-12-14 14:14:46 @PalaeoPython I'm talking about *number of neurons*, like the original tweet.

2021-12-14 13:23:53 RT @RichmanRonald: Finally arrived on SA shores: @fchollet’s updated edition of DL with Python. Paging straight to section on Transformers.…

2021-12-14 11:46:58 Reminder: African Elephants have more neurons than humans, and great apes typically only have half as many (which, for deep learning models, would be a very marginal difference). Intelligence is not how many neurons you have (obviously). https://t.co/mS6Wis1Wzp

2021-12-14 07:15:44 RT @thomasnield76: I rarely buy print anymore, but this is one of those rare finds that is worth buying both PDF and hard copy, even with a…

2021-12-13 17:27:52 In a world so focused on beating task-specific benchmarks, on training ever larger deep learning models on ever larger datasets, trying instead to ask the right questions is an act of rebellion.

2021-12-13 11:20:16 You can, at best, cover a large subset of possible speaker groups and speaking situations. Your mastery of a language is always context-specific.

2021-12-13 11:19:11 A language isn't a single well-defined object. Writing isn't like speaking, which isn't like listening. Homespeak isn't like officespeak. There are slangs and accents. It varies from context to context, group to group -- perhaps even person to person.
You never "know a language".

2021-12-12 18:29:45 RT @TheAndyCamps: This is not at all a controversial opinion when you plan on putting actual money on your model. If I have skin in the gam…

2021-12-12 16:22:01 Another way to state this: show what didn't work. If you automatically try 1,000 things and report the one that works, vs. coming up w/ an idea, trying it once, and reporting on it, you're introducing considerable selection bias that can potentially completely change your results

2021-12-12 16:18:43 I was training on a big dataset (JFT) and had limited resources, so it would have been impractical to train more than a couple of models.

2021-12-12 16:18:00 An important reason why Xception turned out to generalize to many datasets and many tasks (like segmentation) is because I never tuned the hyperparameters. Meanwhile, many architectures where the hyperparameters were learned on the ImageNet test set don't generalize as well.

2021-12-12 16:16:26 Controversial opinion: in a research paper, I don't think you should tune your hyperparameters, which implies selectively hiding or showing certain results (massive source of bias). Rather, you should show which configurations you tried and what results you got (in appendix).

2021-12-12 14:23:17 @jackclarkSF Even if some folks end up richer as fiat moves around, the collective as a whole ends up poorer since a lot of power &

2021-12-12 14:17:32 @jackclarkSF If the same capital went into productive investments, the US would end up richer and more competitive as a whole.

2021-12-12 14:16:55 @jackclarkSF It's a good argument, but it's true in a more general way: a lot of capital goes into crypto, yet crypto is not an investment in any real sense because it doesn't produce returns like investing in companies or infrastructure would. Negative-sum games are bad capital allocation.

2021-12-11 20:06:02 Sure, my vacuuming shoes prototype has its share of critics.
But remember what Paul Krugman said about the Internet that one time? Same thing, same thing.

2021-12-11 20:01:45 Fun fact: you can use the fact that technophobic or tech-myopic people exist (and were around during the rise of the web in the 90s) to argue that absolutely any random thing is going to be as big as the internet. It's not a tired argument at all and it's very clever so go for it https://t.co/o1OzAtyMBf

2021-12-11 16:52:58 "A good model architecture is one that reduces the size of the search space or otherwise makes it easier to converge to a good point of the search space. Just like feature engineering and data curation, model architecture is all about..."

2021-12-10 12:33:37 This is a pretty counter-intuitive fact. It has to do with the fact that while the *mean* validation loss is degrading, the *distribution* of validation losses keeps moving towards a more generalizable fit for a while.

2021-12-10 12:31:53 Related: it's usually good to keep training models a bit past the point of onset of overfitting (when the validation loss starts degrading). You get worse validation loss but improved accuracy, i.e. a better model

2021-12-10 12:29:59 This is especially tricky when doing hyperparameter optimization -- selecting your model configuration based on the validation loss is badly suboptimal. You should make sure to select based on the validation metrics you care about the most.

2021-12-10 12:26:38 In general there is no simple relationship between what you optimize for (the training loss) and what you care about (test metrics). Even validation loss is a terrible proxy: the model with the best accuracy or precision/recall is usually not the one with the best validation loss

2021-12-09 13:01:53 RT @gusthema: My Christmas gift arrived and right on time for my eoy break! Thanks @fchollet for the great work and sharing your knowled…

2021-12-09 13:01:51 @gusthema Enjoy :)

2021-12-08 18:09:35 Good outcomes lead to overconfidence.
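One way to act on the model-selection advice above in Keras (a sketch; the specific callback settings are my own illustrative assumptions): monitor the validation metric you actually care about, not `val_loss`, when stopping and checkpointing.

```python
from tensorflow import keras

# Select and stop on the validation metric you care about, not on val_loss.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_accuracy",      # not "val_loss"
    mode="max",                  # higher accuracy is better
    patience=10,                 # tolerate some training past the onset of overfitting
    restore_best_weights=True,
)
checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.keras",
    monitor="val_accuracy",
    mode="max",
    save_best_only=True,
)
# Pass callbacks=[early_stop, checkpoint] to model.fit(...).
```
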
Overconfidence leads to bad decisions. Bad decisions lead to bad outcomes. Bad outcomes make you gain experience. Experience leads to good decisions. Good decisions lead to good outcomes. Good outcomes...

2021-12-08 17:55:29 "the second generation system solves this" -- I'm sure it does, in fact it's going to solve so many problems that it's going to fail. https://t.co/kRegfYK1Uw

2021-12-08 17:52:40 It's very easy to say, "system X solves this problem" for a wide range of problems when "system X" is an as-of-yet unimplemented, vaguely-defined aspirational project.

2021-12-08 12:32:31 A fun indicator of a country's food culture: when Mickey D has to create localized, higher-quality menu items. You see this a lot in France and Japan. https://t.co/M3LL62qn9h

2021-12-08 12:00:51 Telling people "actually, there's a better solution" is typically enough to trigger new breakthroughs, because it causes folks who were fully in phase 2 to revert to phase 1. https://t.co/MvF6MmmHWx

2021-12-08 11:32:48 In particular, progress generally follows two phases: first, exploring new ideas, which leads to sudden jumps forwards (breakthroughs), then refining the ideas that work best, which leads to slow, constant-rate progress.

2021-12-08 11:31:36 I like Kaggle competitions because they're like a microworld for observing how scientific &

2021-12-07 21:13:37 RT @eileenomara: Exciting report &

2021-12-07 10:19:58 I think one thing we're observing now is that the decline of traditional religions is unlocking a boom of "religion" as a broader trend, as it is now free from the constraints of the past, free to experiment and reinvent itself. Lots of cult leaders in the tech space...
2021-12-06 20:01:13 The term "software developer" suggests the existence of the inverse occupation, the software enveloper

2021-12-06 18:48:36 You buy this book: https://t.co/LvbEy5ipsA https://t.co/MJ6qvHFCwy

2021-12-06 18:17:22 I'm happy I live in a world where there are fireplaces and Christmas trees and Swiss chocolate

2021-12-06 14:44:36 Complex, poorly understood problems tend to have an inconvenient property: with every step we take towards a solution, we realize the problem is significantly more complex than previously thought -- keeping the solution eternally a few decades away.

2021-12-06 10:55:03 @morimeister In TF SavedModel format any such layers part of the model get saved just like regular layers

2021-12-06 10:17:55 StringLookup can be used for both string input features and string labels. When used for input features, I recommend setting `num_oov_indices=1` so as to handle never-seen-before string values.

2021-12-06 10:16:15 Keras tweetorial: using preprocessing layers to normalize a NumPy dataset and one-hot encode a set of string labels. https://t.co/4tzzXqCK7V

2021-12-05 11:03:36 VCs: nice computing infrastructure you have. What if... we made it thousands of times less efficient... so we can add toll booths at every node. Which we would own. Ah, perfect! This is the new frontier! No? ...Do you hate freedom?? Do you hate technology and progress???

2021-12-04 20:38:45 @smdiehl web3 is a "tech stack" where, when you try to get to the technical bottom of it, you realize it's all make-believe weaved by BS artists. I've seen many tech hype waves, but it's the first time I've seen a situation like this

2021-12-04 20:34:21 @smdiehl For any engineer who hasn't realized this fact yet: I encourage you to check out ostensibly *technical* guides about web3 applications. Get it directly from the source! It's all empty technobabble that glosses over intractable technical issues.
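The preprocessing-layers tweetorial above might look roughly like this. The data is my own toy example, not the code from the linked tutorial; the layers (`Normalization`, `StringLookup` with `num_oov_indices` and `output_mode`) are the ones the tweets name.

```python
import numpy as np
from tensorflow import keras

# Normalize numeric features with a Normalization layer.
data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]], dtype="float32")
norm = keras.layers.Normalization()
norm.adapt(data)                  # learn per-feature mean and variance
scaled = np.asarray(norm(data))   # now ~zero mean per feature

# One-hot encode string labels with StringLookup.
labels = keras.layers.StringLookup(
    vocabulary=["cat", "dog", "bird"],
    num_oov_indices=0,            # labels: every value is in the vocabulary
    output_mode="one_hot",
)
one_hot = np.asarray(labels(["dog", "cat"]))

# For string *input features*, keep num_oov_indices=1 so never-seen-before
# values at inference time map to a dedicated out-of-vocabulary index (0).
features = keras.layers.StringLookup(vocabulary=["red", "green"], num_oov_indices=1)
```
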
https://t.co/c4ZvdloS9Q

2021-12-04 17:42:22 You don't give enough credit to your brain for creating the entire universe you live in.

2021-12-04 09:23:42 Blending program synthesis with common-sense knowledge is hard

2021-12-04 09:23:12 I would have expected more like OH ->

2021-12-04 07:37:10 @kcimc Also, a key reason why it works is that deep learning is an engineering science. It has very little theory, it is widely applied, and findings are driven by empirical results almost exclusively. It doesn't work for fundamental research, theory, etc.

2021-12-04 07:28:04 @kcimc It works in a fast-paced research environment with high existing interest. I said: - for *most types* of DL papers -- not for all types of papers - *has become* -- not always was - for *deep learning* papers -- not for all fields

2021-12-04 06:51:41 It also strictly incentivizes reproducibility and general pragmatic usefulness. It aligns the entire process with the end goal.

2021-12-04 06:50:20 Adoption as a research validation mechanism is in fact far preferable, because it de-incentivizes bullshitting. Objective reality is a better grounding function than what people think of your method upon reading your paper. https://t.co/nZoaGpnvrQ

2021-12-04 06:50:19 For most types of deep learning papers, it has become entirely viable to eschew conferences and journals and instead stick to arXiv only. That's because there's an objective grounding function that provides a much better feedback signal than publication ever could: adoption.

2021-12-03 16:10:30 I finally got to meet some folks whose work I've followed for many years, like Jürgen Schmidhuber and Rolf Pfeifer

2021-12-03 16:08:29 Unfortunately the Ai-Con conference in Zürich and the award ceremony ended up getting cancelled at the last minute due to new Covid measures. But I'm grateful I had the opportunity to connect with lots of fascinating folks. Inspiring conversations all around. Thank you!
2021-12-03 16:08:28 I'm honored and humbled to receive the Global Swiss AI Award! Many thanks to the jury and organizers. It's encouraging to see work on general AI evaluation get recognition :) Some thoughts I shared with Swiss magazine Netzwoche: https://t.co/vK5q5aF9s7 https://t.co/N4kAfkH3Nz

2021-12-02 15:42:38 @RisingSayak Likewise, I want to say thank you to you too! I really appreciate your ever-positive spirit, and the series of contributions you've made has been amazing. They're now helping the entire community. People like you are what open-source is all about!

2021-12-02 09:26:18 Want to learn about Keras Preprocessing Layers? Check out this detailed tutorial by Matt from the Keras team. https://t.co/sMgFzxkzj9

2021-11-30 23:17:33 RT @TensorFlow: A decade after the loss of his arm, Jason Barnes' passion for music helped him co-create the world's most advanced prosthet…

2021-11-30 10:38:11 "To use a tool appropriately, you should not only understand what it can do, but also be aware of what it can’t do" From https://t.co/LvbEy5ipsA

2021-11-29 09:53:15 "Down with centralized corporate control! Power to the people! Topple the system!" -- Definitely not 3 corporations and a VC fund in a trench coat

2021-11-28 11:10:07 I think better human-computer interfaces in the future are more likely to be more embodied (audio-visual, tactile, gestures...) than less embodied (brain signal only)

2021-11-28 08:46:45 @AmnonTal Human brain + body + environment is definitely sufficient for intelligence, but it's a very different kind of intelligence, much closer to animal intelligence than human intelligence

2021-11-28 08:42:44 The dominant philosophical current in AI is characterized by narrow-minded reductionism and ahistoricity -- ignorance of the thinking that came before (often, the "before" threshold is... 2015!)

2021-11-28 08:39:55 In reality, intelligence is open-ended and embodied, embedded in an environment, in an ecosystem.
In the case of humans, it's also embedded in a culture, and externalized as social and technological systems. Cognition cannot be understood in a fragmented manner

2021-11-28 08:36:37 Many deep learning researchers have this conception of intelligence as a kind of disembodied brain in a jar, acting on its environment in a one-sided fashion, trying to maximize some sort of reward score -- you could call it the "neocortex as RL agent" mindset

2021-11-28 08:02:52 Me too https://t.co/Q0e8iWvCgm

2021-11-25 19:25:19 On this Thanksgiving, I am grateful for all the people in my life who bring to it meaning and positive vibes -- my family, friends, awesome teammates, and the Keras community

2021-11-25 19:05:47 @smdiehl @AdamSinger Also it's been heartwarming seeing all these VC funds siding with the people and dedicating billions of $ to building a fully decentralized world where creators own and monetize their content (or raise money) themselves without any intermediary taking a cut. So altruistic of them

2021-11-25 18:22:12 New tutorial on https://t.co/m6mT8SrKDD: multiple instance learning https://t.co/PaIeVcekeg

2021-11-25 05:23:02 People project their illusion of choice on this statement. It does apply to many things throughout human history. Humans love their fantasies. But reality has a way of catching up. For reference, here's the original context where I said it, two years ago. https://t.co/t3yxh5oWDp

2021-11-25 04:38:19 The fact that many people have staked a lot on an illusion doesn't mean it's not still an illusion.

2021-11-25 03:05:51 RT @RisingSayak: New work with @ariG23498 on implementing Masked Autoencoders for self-supervised pretraining of images. Thanks to @enderne…

2021-11-25 03:05:47 RT @ariG23498: Our work (\w @RisingSayak) on masked image modelling has been published in https://t.co/xurTwXlKUR.
The tutorial is the pap…

2021-11-24 23:18:27 Productivity tip: if you're procrastinating on something difficult or important, don't set your goal to "get it done", which may seem overwhelming and thus paralysis-inducing. Set it to "sit down, get started, and build momentum". Because that's something you can always do.

2021-11-24 22:04:38 If you're interested in Keras codebase internals: Luke from the Keras team posted a detailed walkthrough of the Model class. Implement a simplified version of the class yourself to understand how it all works under the hood! Check it out: https://t.co/ZhiYntA2J7

2021-11-24 18:45:48 @clickmeclicku Please ask @ManningBooks

2021-11-24 17:47:31 RT @ExpressGradient: #deep_learning_with_python by @fchollet is the best technical book I've ever read till date. It is full of wisdom. Wo…

2021-11-24 17:27:11 "Much like in biological systems, if you take any complicated experimental deep-learning setup, chances are you can remove a few modules (or replace some trained features with random ones) with no loss of performance."

2021-11-24 17:27:10 "Deep learning architectures are often more evolved than designed -- they were developed by repeatedly trying things and selecting what seemed to work." From https://t.co/LvbEy5A0k8 (which incidentally is 40% off for Thanksgiving week)

2021-11-24 17:24:04 RT @TensorFlow: Take a fresh look at #Keras Preprocessing Layers as they transition into official TensorFlow APIs. Learn more in this art…

2021-11-24 17:23:03 RT @A_K_Nain: Very cool code example!

2021-11-24 16:52:45 New paper walkthrough on https://t.co/m6mT8SrKDD: masked image modeling. Applying the principles of masked language modeling to computer vision. https://t.co/M1ftCAzC9t Created by @arig23498 and @RisingSayak https://t.co/hPSWBJbIwj

2021-11-24 06:14:31 There's a bigger lesson here -- the important properties of a dynamic system can't usually be ascribed to any specific artifact within the system -- e.g.
"is this piece of code (or data) reliable". They derive from the interaction between the different parts. 2021-11-24 06:05:05 By then, your validation performance has informed 19 choices that went into your model, and so that model is already the result of a search process (a training process, in fact) -- one specific instance among hundreds of possibilities. 2021-11-24 06:03:01 A slightly counter-intuitive fact is that the reliability of your evaluation method changes under you. The first time you looked at performance on your validation set, it may have been reliable. But by the 20th time, it no longer is. Even though nothing about it has changed. 2021-11-24 05:50:21 It's important to understand that there's no binary "either your evaluation method is tainted, or it isn't". It pretty much always is. What matters is how much. Always take your validation results with a grain of salt. The production data will look quite different anyway. 2021-11-24 05:46:54 Most people think of a validation set as a "weak test set": basically an evaluation set, but a bit less reliable than the final test set. It's more accurately a "weak training set": data that you use to improve your model (thus, on which your model will perform artificially well) 2021-11-24 04:51:57 I'll always stand with the builders, the creators, the artists. And that's why I'll always keep an eye out for the scammers, the takers, the exploiters. 2021-11-23 07:17:22 "time of onset of overfitting" can even be used as a quantitative measure of *generalization difficulty* between your training set and test set. The earlier overfitting occurs for a given model, the harder the test set. 2021-11-23 07:15:57 Overfitting comes from the fact that your test data differs in subtle (and not so subtle) ways from your training data. The more the test data deviates, the earlier overfitting occurs during training, and the more severe it becomes as time passes. 
2021-11-23 03:55:43 My wife just called a phone an "activity brick" 2021-11-22 23:57:00 An exciting new machine learning competition with real-world impact! https://t.co/1FoI6TIrJP 2021-11-22 23:55:17 RT @random_forests: New tutorial for TF Decision Forests! If you’re curious about using trees and neural networks together, then check… 2021-11-22 21:42:08 What social media does to your ability to think deeply and gain perspective is similar to what a strobe light does to your ability to perceive space and time. A series of disconnected flashes of information, lasting half a second each, adding up to nothing. 2021-11-22 02:47:01 (yes, this is sarcasm. needs to be disclosed I suppose) 2021-11-22 02:42:32 Maybe you, too, are a token holder. If so, you can take active part in the DAO governance. It's *your* DAO. Isn't that great? 2021-11-22 02:39:37 You get airdropped "citizenship" (a single token representing fraction of ownership and governance rights) when you're born or naturalized. The DAO can collectively raise funds for projects that benefit citizenship holders. It can resolve conflicts, execute contracts, etc. 2021-11-22 02:38:13 If you have trouble understanding the necessity of paying taxes and if you think the public sector is "terrible at capital allocation", just pretend the government is a DAO of 330M people, featuring very advanced mechanisms for distributed governance 2021-11-21 20:06:45 Branding matters. https://t.co/2Ywbemza6j 2021-11-20 21:55:07 To be directionally right but early is still to be wrong. You have to be contextually right. If you have to wait for decades to prove the skeptics wrong, then the skeptics were right (and you're long out of business). 2021-11-20 21:53:15 If you had tried to start an airline in 1910 you'd have been mocked. Rightfully so: it would have been dumb as hell to start an airline in 1910. Try again two decades later. 
2021-11-20 18:59:43 "If you get an extra 50 hours to spend on a [ML] project, chances are that the most effective way to allocate them is to collect more data rather than search for incremental modeling improvements." From: https://t.co/LvbEy5A0k8 2021-11-20 05:34:55 @aureliengeron La plus belle langue du monde... 2021-11-20 05:22:40 This tweet is more controversial than I would have expected. I mean, objectively, English *is* concise. You guys need to consider that liking one thing does not mean dismissing every other thing. I speak five languages and I love every one of them. 2021-11-20 01:51:32 I like the English language because it makes it easy to express fairly complex thoughts very concisely 2021-11-19 20:44:22 @Deathn0t2 @sayannath2350 One thing that we might want to include is interop considerations like "I want to port this pytorch model to Keras or inversely, what's the workflow?". Porting models back and forth is typically easy, which means there is no strong lock in 2021-11-19 20:42:15 @Deathn0t2 @sayannath2350 If we're going to publish it on https://t.co/m6mT8Sa9M5, then I'd recommend focusing purely on Keras without referencing other frameworks. In any case I don't think this is an either/or situation: using one framework doesn't preclude the other (in fact many people use both) 2021-11-19 20:12:38 @sayannath2350 @Deathn0t2 Thank for the feedback, Sayan! Romain: perhaps we could create a public FAQ along the lines of "I'm considering using Keras for my research: why/why not?", covering any questions your advisor may have, and more. If you're interested, we can work together on it. 2021-11-19 18:12:15 @Deathn0t2 If you like it and you're productive with it, you should be free to use it! Why have your advisor prescribe the tools you should use? 
:) Let me know if there are any technical arguments / concerns I can address (fchollet@google.com) 2021-11-19 05:10:13 https://t.co/v7acWP5112 2021-11-19 03:10:16 "we choose to do these things not because they are easy, but because we thought they were going to be easy" -@Pinboard i think 2021-11-19 02:57:06 To be fair given how programmers estimate timelines i think a lot of them would actually be like "i have 30 minutes, let me just whip up a PDF editor" https://t.co/iKQLmv9pjU 2021-11-18 23:11:32 Always try the simplest approach first, so you have a reference point to justify any increase in complexity. In ML, this typically means using a non-ML baseline as your first model. 2021-11-18 21:58:15 @chrisweston @levie It doesn't matter what "crypto persons worth listening to" think, the fact is that Tether underlies the whole crypto economy. How does that happen in a space that is ostensibly all about zero-trust and decentralization? 2021-11-18 21:47:41 @levie The innovative idea of democracy is "you're born with power, with a fraction of collective ownership, you don't have to inherit it". Crypto kills this. 2021-11-18 21:46:07 @levie This is quite literally the crypto dream (which won't come to pass, thankfully). Is it really what "decentralization" looks like? Or would you rather have democratically-controlled (i.e. government-controlled) public infra passed down from generation to generation? 2021-11-18 21:44:58 @levie Imagine the crypto maximalist vision realized -- your grandchildren born into a world where all wealth, power, and public infra is cryptographically locked down, controlled by those who were around in the 2010s. Born not as citizens, but as servants of the crypto owner class. 2021-11-18 21:38:55 @levie That's what's at the heart of the crypto mindset really. Like most things, it's about power and how it's distributed. The technical considerations (which most people glaze over) are a backdrop. The decentralization narrative is propaganda. 
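The "simplest approach first" advice above (use a non-ML baseline as your first model) can be made concrete. A minimal sketch with hypothetical helper names: a majority-class predictor that any trained model must beat to justify its added complexity:

```python
from collections import Counter

def majority_baseline(train_labels):
    """Non-ML baseline: always predict the most frequent training label."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return lambda _x: most_common

def accuracy(predict, inputs, labels):
    """Fraction of examples for which predict(x) matches the label."""
    return sum(predict(x) == y for x, y in zip(inputs, labels)) / len(labels)

# A model that can't beat this baseline's accuracy isn't actually
# learning anything from the inputs -- a cheap reference point that
# justifies (or not) any increase in complexity.
```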
2021-11-18 21:36:42 @levie If you have the public interest in mind, you're going to advocate for public data hubs / APIs. But if you're actually trying to execute a power grab, you'd go full crypto -- reinvent "government", but with *you* in charge this time, in an unaccountable way. 2021-11-18 21:35:56 @levie But we've already invested a lot in a collective power structure that sort of works and has very mature interfaces and remediation systems: the government. Might as well leverage it -- public infrastructure is its core competency 2021-11-18 21:33:48 @levie Right. And besides innovation, it's also about how we organize collective power structures -- the crypto definition of decentralization doesn't mean no one (or everyone) is in charge, it means different power structures, in favor of early/large actors in the space (you know who) 2021-11-18 21:27:39 @levie Another important point is that privacy is critical here. This means that you need access to *legal* remediation. Can't do that on-chain. 2021-11-18 21:26:52 @levie Universal public API for medical records, government-backed: https://t.co/WtGeD5TzcP 2021-11-18 21:25:25 @levie It's basically a public digital infrastructure project, so it would make the most sense to have governments do it. In fact France has started on this path. I'd definitely trust the government with having public interest in mind more than I'd trust a large VC fund. 2021-11-18 21:20:42 @levie For instance, blockchain defines provenance in cryptographic terms, and that cannot achieve mainstream adoption without intermediation by a centralized service (tying people's online identity to a secret passphrase is a nonstarter). You need a human-friendly entity to talk to 2021-11-18 21:19:13 @levie Standardizing, opening up, and platforming these things (in a privacy conscious way) to enable new businesses is a good idea! But I still don't see how blockchain solves this... 
2021-11-18 21:15:40 @levie Overall I fail to see any *technical, first-principles reasons* to use on-chain computing for anything -- and I've been looking for almost 10 years. Everything in the space is shrouded in mystique and mystery, and sound technical analysis is badly lacking. 2021-11-18 21:12:55 @levie Point 5) is best illustrated by the fact that the entire crypto ecosystem critically relies on centralized, unaccountable organizations, and wouldn't exist without them (e.g. Tether) 2021-11-18 21:11:30 @levie 3) Remaining benefits are niche (ability to escape law enforcement) 4) Even that is mostly doable without a blockchain (see: SciHub, BitTorrent) 5) Red flag: 99+% of crypto enthusiasts aren't interested in the decentralization or even CS aspects, just "numbers go up" 2021-11-18 21:09:24 @levie Most salient points for me: 1) An on-chain DB/computer is 10M+ times less efficient than an actual computer -- limits applications 2) 80% of the benefits of decentralization are achievable with centralized computing + decentralized governance structure (see: Wikipedia, email...) 2021-11-18 20:57:41 @levie Like everything else it's a matter of cost/benefit tradeoffs. Decentralization brings certain benefits in certain use cases, but that comes at a certain cost. It's only by quantifying the tradeoffs that you can really understand where things are going. 2021-11-18 20:25:38 A Keras-based API for graph neural networks. Check it out! https://t.co/JHLopgtBmB 2021-11-18 11:58:35 @kelvindotchan @emilydoesastro Yes, you can tune data augmentation, batch size, etc., via the hypermodel class 2021-11-18 03:32:19 RT @emilydoesastro: KerasTuner is the coolest thing! You can use it to tune any hyperparameters of an ML model automatically. I'm using th… 2021-11-17 22:07:35 In tech, an obsession with chasing fashions essentially guarantees that you'll keep running in circles.
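The hyperparameter-tuning workflow mentioned above (KerasTuner's hypermodel class) boils down to evaluating points in a configuration space and keeping the best. KerasTuner's real API is richer (hypermodels, Bayesian and Hyperband search strategies); this framework-free sketch shows only the core loop:

```python
import itertools

def grid_search(evaluate, space):
    """Evaluate every combination of hyperparameter values in `space`
    (a dict of name -> list of candidate values) and return the
    best-scoring configuration along with its score."""
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)  # e.g. validation accuracy of a model built from cfg
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In practice `evaluate` would build and train a model; and, as the reply above notes, things like data augmentation settings and batch size can be part of the search space just as much as layer sizes.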
2021-11-17 20:52:02 RT @Weather_West: Being a climate scientist sometimes feels like being an astrophysicist in one of those 90s asteroid impact disaster movie… 2021-11-17 04:59:20 @NoobWonderland If you're going to select the best players -- those with the most ability -- you've got to ask, what is the *goal* of the game? The goal of the game is collective success. Putting the best players in charge means selecting those most capable of enabling collective success. 2021-11-17 04:42:37 "In the real world [...] you don’t start from a dataset, you start from a problem." From: https://t.co/LvbEy5A0k8 Keep ML real. 2021-11-17 04:32:07 Meritocracy is when leaders get selected not based on their ability to help themselves to positions of power (the default), but based on their ability to help those below them reach their full potential 2021-11-17 00:54:06 Big thanks to the Keras community and the open-source community in general 2021-11-16 22:11:47 RT @ykilcher: Could AI solve this puzzle? This game shows very bluntly how far AI still is from human abstract reasoning. So today we're tr… 2021-11-16 18:01:20 @DynamicWebPaige @github Congrats! 2021-11-16 17:26:10 RT @TensorFlow: Enable asynchronous distributed training in TensorFlow with ParameterServerStrategy, a new tf.distribute strategy! Learn h… 2021-11-16 01:41:21 You saw it very clearly in 2016 and on a few other occasions in the past few years. Even right now, there's a polarizing movement going through the tech community, and it works pretty much exactly like a personality test. 2021-11-16 01:38:46 It's pretty intuitive that the things people believe and the movements (or cults) they support would correlate strongly with their conception of their own identity. What's more perplexing is that it also correlates to a remarkable degree with personality traits. 2021-11-15 21:10:20 RT @TensorFlow: TensorFlow 2.7 has new tools and documentation for users migrating to TF2!
Learn all about it ↓ https://t.co/5G92hcu1ZD 2021-11-15 21:01:11 @nextdoorsv "there was no one before me and there should be no one after me, unless they're exact clones of who I was" 2021-11-15 21:00:29 @nextdoorsv Shorter version: "the way things were when I was young is the only conceivable way things could ever be. The choices made by my generation are all perfect & 2021-11-15 20:49:54 @nextdoorsv "for hundreds of years" I guess the bay area was unoccupied land until it started getting covered with 1970s bungalows, parking lots, and strip malls, a transformation which happened during the middle ages and represents the final, optimal state of its development 2021-11-15 19:31:15 @soumikRakshit96 @RisingSayak @TensorFlow Congrats! Well deserved 2021-11-15 18:00:28 New tutorial on https://t.co/m6mT8SrKDD: a simpler way to create novel convolution layers by leveraging the public `convolution_op` method. https://t.co/CrQr6XyRNi https://t.co/hvri8mSVia 2021-11-15 11:24:09 Civilization is old enough that people in antiquity were already walking among unfathomably old ruins. Caesar is closer to our time than the great pyramid was to Caesar. 2021-11-14 22:16:40 Ideally you want a logarithmic relationship between feature scope and codebase complexity. Generalize to new categories of use cases via small changes to the code. 2021-11-14 19:30:46 I wonder if we could quantify any bump in scientific innovation velocity as a result of the rise of the internet 2021-11-14 19:29:42 Given that 90% of scientific research is about understanding the work of others and building on it, and given the v. high long-term returns of science, it's surprising how little attention is dedicated to systemically optimizing scientific communication and knowledge discovery 2021-11-14 04:30:21 @mihaimaruseac @DynamicWebPaige A number of TF features should probably be moved to standalone pure-Python libraries that could be reused outside of TF... the i/o stuff, but also tf.nest, etc. 
Also true of some Keras features like keras.utils.get_file(), the progress bar, etc. 2021-11-14 04:10:32 https://t.co/D9mPK8K1M8 2021-11-14 00:08:22 @benczheng Is it the same technology that enables Twitter to attribute the provenance of the tweet above to "Ben Zheng"? I always wondered how they did it 2021-11-13 23:19:53 It would be a lot easier to just nod along and ignore the space. But the ambient toxicity makes me feel that I have a responsibility to speak my mind, in case folks on the sidelines are listening 2021-11-13 23:18:04 One of the classic blunders of Twitter is posting negatively about crypto-adjacent stuff. Some folks have too much $ on the line to accept the fact that there may be people who hold different opinions about crypto 2021-11-13 22:59:23 You learn more about the mind by watching a baby grow than from reading neuropsychology textbooks 2021-11-13 16:48:18 Biotechnology is terribly underrated in software tech circles. 2021-11-13 16:47:13 The defining technological development of the 1995-2005 period was the rise of the consumer internet. 2005-2015: ubiquitous smartphones. 2015-2025? I would say it's the rapid development and deployment of multiple Covid vaccines. Incredible achievement. 2021-11-13 16:07:17 Hard to dismiss this idea in the far future -- on a long enough timeline anything can become plausible. But it's easy to see that smartphones are a long-term optimal plateau for the next 15 years and that immersive VR won't replace them. 2021-11-13 16:04:14 The fundamental bet with the metaverse is that VR glasses will be the next smartphone: the new interface through which everyone accesses the internet and does things on the internet. A bet that the internet will become 3D, immersive, and embodied. 2021-11-13 15:48:30 Crucially it would include work meetings, etc. VR MMORPGs are already a thing, so what's special about the "metaverse" is the notion that 3D VR will become the way most people do the things they currently do on the web.
A replacement for the web. I'm not saying VR games will die. 2021-11-13 15:48:29 Since "metaverse" can mean anything to anyone, I'm going to go with the definition FB used in their concept video: a set of 3D VR spaces where you exist as a 3D character, that you access via a headset/glasses, that would become the primary way people experience the internet. 2021-11-13 05:53:05 RT @vitojph: Uncrumpling paper balls is what Machine Learning is about. @fchollet's *Deep Learning with Python* https://t.co/GowHJ9A8Ut 2021-11-13 04:25:44 I wonder how they will look back on "metaverse". Will they be honest and admit, ok, none of what we said would happen happened? Or will they try to rewrite history and say, well, the open web is big, and you can use avatar filters in Zoom, so clearly our vision was realized? 2021-11-13 04:23:29 Timeline for things to disappear out of the public consciousness: - Specific NFT series: 1 year - NFTs in general: 2 years - web3: 3 years - metaverse: 5 years 2021-11-13 04:04:38 @import_robs She's alright 2021-11-13 03:58:58 Ok, 8 years ago to be specific 2021-11-13 03:54:36 Listening to some music. Throwback to when I randomly happened to ride an elevator with Taylor Swift in a building in NYC nearly ten years ago. "I don't even like Taylor Swift", I thought at the time 2021-11-13 01:17:37 @Kirti69895678 The latter 2021-11-12 22:20:02 Debugging Keras code is just so much nicer in v2.7 (which is now the default in Colab) https://t.co/f57DFRnk5c 2021-11-12 19:34:41 New tutorial on https://t.co/m6mT8SrKDD: neural style transfer with adaptive instance normalization (AdaIn). https://t.co/40ac9NsWIw Created by @arig23498 & 2021-11-12 06:19:15 RT @MeghShukla: A key aspect of data-centric research is … data! Active learning assists in curating our datasets, at the same time reduci… 2021-11-12 02:51:37 @rasbt It refers to "distributed (deep) belief networks". The name "belief network" was taken from Hinton's earlier work. 
2021-11-11 19:32:10 The first ML framework I ever used was raw C. The first ML framework I used *at Google* was DistBelief. I'm old. 2021-11-11 19:18:40 RT @getdarshan: My new tutorial on https://t.co/SoxaLi26qS tries to display the merits of active learning techniques. This example saves th… 2021-11-11 19:18:36 RT @rasbt: Active learning is one of these things that I wish was easier to use in practice. Labeling data is always the annoying part, so… 2021-11-11 18:44:50 New tutorial on https://t.co/m6mT8SrKDD: active learning, an effective way to optimize your investment in data annotation. https://t.co/yAjL145D7E Created by @getdarshan https://t.co/T4JmEVEo4u 2021-11-11 17:38:50 @a7medev Yes! 2021-11-11 17:26:49 https://t.co/lCPNhyL4UW 2021-11-11 17:18:42 Did you know that Slime Mold could solve mazes, and even optimization problems? It's true. And it doesn't stop there. Slime Mold has picked up some Python. It bought my book. Now it's quickly mastering machine learning. You're getting left behind. Hurry. https://t.co/LvbEy5A0k8 2021-11-11 10:09:53 The Romans were the original masters of pixel art. 2021-11-11 02:41:11 What are the best datasets to benchmark out-of-distribution generalization? E.g. EEG/MEG recordings, etc. 2021-11-11 02:03:01 @samuelmaskell @samdman95 A single line with one train per hour, running on diesel, that sounds its 150 db horn every half mile as it crosses a road. Very urban indeed 2021-11-11 01:56:49 @nextdoorsv @landrews2702 @fringetracker I can only conclude that car-centric suburbia makes people deeply selfish and antisocial. Which is also reflected in driver behavior IMO 2021-11-10 19:48:10 @MrAstroThomas Keras was actually released 6 months before TensorFlow :) 2021-11-10 18:50:52 RT @DrTBehrens: "[...] artificial intelligence isn’t about replacing our own intelligence with something else, it’s about bringing into our… 2021-11-10 18:48:31 @nicolacatena93 99% of stocks are not meme stocks. 
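Active learning, as in the tutorial above, is about spending the annotation budget where it helps most. One common selection strategy (the linked tutorial may use a different criterion) is uncertainty sampling; a minimal sketch for a binary classifier:

```python
def uncertainty_sample(probs, k):
    """Given predicted positive-class probabilities for a pool of
    unlabeled examples, return the indices of the k most uncertain
    ones (probability closest to 0.5). These are the examples worth
    sending to human annotators next."""
    ranked = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return ranked[:k]
```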
They go up after a good earnings call and they go down after a bad earnings call. Don't try to trade meme stocks. 2021-11-10 18:31:47 Also, you're not super likely to be able to articulate a good pitch unless you work in the same industry or you just spend a lot of time researching companies. Which is why, for the large majority of people, indexes are the best option to beat the game. 2021-11-10 18:29:07 My personal (perhaps controversial) take on investing is that you shouldn't buy a specific stock if you're not able to articulate an elevator pitch about what the company does, why it enjoys an unfair advantage, and why it's going to be a clear winner. If you can't, buy an ETF. 2021-11-10 02:45:06 TensorFlow was open-sourced 6 years ago. What a journey :) https://t.co/fcDWPjXfwl 2021-11-10 02:12:03 @scottedwards200 Politics. You can't build. 2021-11-10 02:10:48 On a similar note, I really like what has been happening to the South Lake Union neighborhood in Seattle as a result of Microsoft and Amazon wealth. A formerly car-centric industrial warehouse area, it's now increasingly looking and feeling like central Tokyo. Highrises ftw 2021-11-10 02:05:07 Given its economic and cultural impact, Silicon Valley should look like Singapore or central Tokyo, with comfortable highrises, parks, and a top-notch train system -- not a vague cluster of parking lots, strip malls, office parks, and bungalows from the 1970s. What a waste. 2021-11-10 00:27:07 @pcwalton @pjakma @mitsuhiko @migueldeicaza The ecosystem explicitly built on zero-trust principles just happens to have the highest concentration of scams and deception of anything ever. It's almost as if there might be more to human institutions than code... 
2021-11-10 00:13:34 @pcwalton @jesseposner @mitsuhiko @migueldeicaza People who think the Fed is evil coincidentally have no problem with a clan of unaccountable fraudsters based in the British Virgin Islands (Tether) conjuring massive amounts of "currency" out of thin air... since it pumps the value of their coin holdings 2021-11-09 22:32:02 @mitsuhiko @migueldeicaza Since lots of 3rd party observers tend to think "lots of smart folks are into web3, so there must be something even if I don't get it": if you're reading this thread, take note. 2021-11-09 22:30:22 @mitsuhiko @migueldeicaza You and Miguel are exactly the type of open-source builders that I'd expect to be excited about transformative new tech platforms. I think it's pretty telling that we all share the same disbelief and exhaustion. 2021-11-09 22:12:21 @migueldeicaza @mitsuhiko Tech relies enormously on networking and reputation. You can't just say that the emperor has no clothes when the emperor includes a collection of multi-billion dollar VC funds 2021-11-09 22:10:45 @migueldeicaza @mitsuhiko In crypto you can be one of two characters: the scammer or the scammee. There are only downsides in taking the side of the scammees, especially given how much VC money and clout backs the opposite side (very vindictive characters, too) 2021-11-09 22:08:01 @migueldeicaza @mitsuhiko My anecdotal impression is that ~80-90% of tech folks are cognizant that crypto has no applications (other than gambling / pump & 2021-11-09 21:53:54 RT @migueldeicaza: Tech executives are sitting ducks against the barrage of NFT and crypto marketing. They know their business, but these… 2021-11-08 20:44:21 Imagining various potential far futures isn't pointless speculation. It's how you put the present into perspective and contextualize our current choices. Just like reading about history, but in reverse. 2021-11-08 19:46:11 Writing simple code is not about writing code that is compact and terse. 
It's about the simplicity of the underlying mental model and the ease with which you can recover this mental model from reading the code. 2021-11-08 16:54:57 Prediction markets are more accurate when you adjust their predictions to account for the typical biases held by the type of people who participate in them (it's a fairly specific profile) 2021-11-08 16:44:36 RT @loicaroyer: We are excited to release Aydin, a user-friendly, feature-rich, and fast image denoising tool. Fantastic work by @_AhmetCan… 2021-11-07 05:13:27 @Ffxivmarket You know the Words, you bear the Icon, you uphold the Faith. You shall receive the fortune that was promised to you, my son. wagmi 2021-11-07 04:54:01 If you're surrounded by people who talk about dog tokens, NFTs, web3, the metaverse, and you feel like you're losing your mind because none of it makes any sense... don't worry. It's safe to ignore the noise. 2021-11-07 04:51:11 Like the old religions, they make little sense to outsiders. But unlike the old religions, they're weirdly mundane. For the most part, they don't promise eternal life and transcendence. They just promise you that if you're a believer, you're going to get filthy rich eventually. 2021-11-07 04:49:36 Tech is full of mini-religions, complete with their mythology, dogma, rituals, iconography, and requirement to always maintain faith at all time lest you get excommunicated... 2021-11-07 04:31:26 Machine learning is the science of figuring out how to organize information. Information cartography, if you will 2021-11-07 00:52:37 To paraphrase Kierkegaard, art is not an algorithm to be engineered, but a reality to be experienced. Pressing a button to generate a new landscape painting is to artistic practice what watching a robot on a treadmill is to working out. 2021-11-07 00:50:17 And no, my paintings aren't done with machine learning, sorry. I always find it funny when I get this question. 
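The point above about simple code is worth illustrating with a made-up example (not from the thread). Both functions below behave identically; the second is longer, yet the mental model ("scan until you find an even number") is far easier to recover from it:

```python
# Terse, but the reader must mentally build and index an intermediate list.
def first_even_dense(xs):
    return ([x for x in xs if x % 2 == 0] or [None])[0]

# Simple: the code *is* the mental model.
def first_even_simple(xs):
    for x in xs:
        if x % 2 == 0:
            return x
    return None
```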
It's a sign of the times -- no one would have considered the possibility 6 years ago. 2021-11-07 00:48:01 Painting process. 1. Broad strokes to figure out the colors and composition 2. Add more brush strokes 3. Keep adding more brush strokes 4. Just finish the thing already https://t.co/uBd9Mh7KhN 2021-08-21 04:23:06 Yeah I'm calling you bros now. Ha how's it going bros. My name is *high-pitched noises* 2021-08-21 02:25:30 Thanks for the contributions bros 2021-08-21 02:25:13 Ok, let's do this again 0FA: your password, but it's just "password" 0.5FA: your password, but you can reset it via SMS 1FA: your password 1.5FA: your password + SMS code 2FA: your password + code from OTP app/device 3FA: password + OTP + biometric reading 4FA: add halo reading 2021-08-21 00:45:48 @dvtswe They call it a "halo" 2021-08-21 00:43:52 Don't settle for 1.5FA 2021-08-21 00:43:29 1FA: your password 1.5FA: your password + SMS (poor man's 2FA) 2FA: your password + hardware-based or app-based OTP 3FA: your password + OTP + biometric reading (e.g. fingerprint reader that doubles as OTP device) 2021-08-19 23:23:23 Those who spend more time congratulating themselves on their successes than reflecting on what remains to be done are already on a downward slope. 2021-08-19 23:20:48 One of the best psychological drivers of growth is awareness of your own limitations -- awareness that you only have a limited amount of time available, and that you don't know or understand most things out there. It gives you impetus to act, and to focus on the essential. 2021-08-19 20:59:53 The "get" prefix is valid and useful in plenty of cases. 2021-08-19 20:59:17 Counterpoint: verbs in attribute names indicate an action (i.e. a method) while a verbless attribute indicates a property. An action may modify some state and may take some arguments, while a property only reads static data. e.g.
foo = item.get_foo(bar=False) vs. item.foo https://t.co/DXsny4H1I4 2021-08-19 19:20:04 Last call before I close the survey 2021-08-19 10:49:54 Tired thought leadership: "crypto & Wired thought leadership: "NFTs & 2021-08-18 19:15:10 RT @fchollet: The 2021 developer survey results are here (thanks @StackOverflow)! 60k developers were asked about the technologies they use… 2021-08-18 15:40:18 @kcimc Every year there's a new flash-in-the-pan cash grab getting hyped as a world-changing new paradigm. And it's getting more boring every time 2021-08-18 15:38:19 @kcimc People have been selling files over the Internet since online payments became a thing. There was piracy on the margins (which may not have been harmful to creators, and benefitted many broke students), but NFTs do nothing to stop content piracy 2021-08-18 05:49:25 You can learn something interesting from pretty much anyone 2021-08-18 05:22:07 My favorite kind of YouTube video is random people explaining how their job works 2021-08-18 01:00:26 Keras users, I am once again asking you to answer this short survey https://t.co/A5OhLBkSMO https://t.co/DSFgQz2RmY 2021-08-17 23:34:05 RT @fadibadine: TensorFlow / Keras 2.7 introduce a new debugging experience that will tremendously help in debugging your code. Check out t… 2021-08-17 23:34:01 RT @eau_de_gespa: https://t.co/GRBto1138C 2021-08-17 23:29:04 @jeremyphoward @rbhar90 This would have been virtually impossible before Python 3.7. I implemented it using the TracebackType API, which is a new addition in 3.7. You can check out the code in Keras & 2021-08-17 20:30:53 The end result is a much improved debugging experience, already available in tf-nightly, and upcoming in TF/Keras 2.7. Lots of other cool things are coming up, but right now this is what I'm most excited about. 2021-08-17 20:30:31 Lastly, we've audited every error message in the Keras and TensorFlow codebases (thousands of error locations!)
and improved them to make sure they follow UX best practices. ​​https://t.co/Mj9cEiJJ9C https://t.co/9jq8csvBnc 2021-08-17 20:29:48 Second, we're automatically displaying context information related to Keras layer calls. What were the arguments passed? This is essential, since most errors have to do with the shape or dtype of the input tensors, or with the mask or training arguments, which are often implicit. https://t.co/rl6xBknM1q 2021-08-17 20:28:13 First of all, we're filtering tracebacks to eliminate TF-internal frames. (There's an option to display them, if you need to debug TF or Keras themselves.) Only frames related to your own code will show up, greatly reducing the amount of info you have to sift through. 2021-08-17 20:27:29 Let me tell you about my favorite upcoming feature of TensorFlow/Keras 2.7: a better debugging experience. We've worked on 3 things that significantly cut the time it takes to analyze and fix issues in your Keras code. https://t.co/9BmrWtJGi5 2021-08-17 20:00:37 RT @TensorFlow: Movement + AI + Friends. Our #MadeWithTFJS MoveNet model now supports multi-person pose detection and tracking, al… 2021-08-17 19:09:38 When people ask, "what will this technology look like in 10 years", they should really ask "what will this tech look like after X billion $ and N iteration cycles?" The pace of evolution of a technology is entirely dependent on interest and investment, it isn't predetermined. 2021-08-17 17:23:02 RT @daithaigilbert: Facebook is not only allowing right-wing militias to spread anti-vaxxer misinformation on its platform, it's even label… 2021-08-17 16:56:53 New code example on https://t.co/m6mT8SrKDD: a message-passing neural network for predicting properties of molecules given their graph. https://t.co/eYoZSmK9ib https://t.co/rdDs6pRYe6 2021-08-17 15:58:25 To do something extraordinary, it's usually enough to put into what you're doing an amount of time and energy that extraordinarily few people would be willing to invest. 
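The traceback filtering described in the TF/Keras 2.7 thread above can be illustrated in plain Python. This is not the Keras implementation (which, per the thread, hooks into the TracebackType API); it is just the underlying idea of hiding framework-internal frames, with marker strings chosen for illustration:

```python
import traceback

def user_frames(exc, internal_markers=("site-packages", "/tensorflow/")):
    """Return only the traceback frames of `exc` whose source files do not
    match any library-internal marker, so the user sees just their own code."""
    frames = traceback.extract_tb(exc.__traceback__)
    return [f for f in frames
            if not any(marker in f.filename for marker in internal_markers)]
```

As the thread notes, a real implementation should keep an option to show the unfiltered traceback, for the cases where you need to debug the framework itself.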
In fact, that's how most magic tricks work. 2021-08-17 15:47:33 RT @fchollet: Are you a Keras user? You can give us your feedback and help us build the most delightful product possible by answering this… 2021-08-17 06:05:57 RT @muktabh: These are really good. I did my first transformer implementation using the transformer code in this repo. Really nice. 2021-08-17 03:01:24 We're running a quick Keras user survey. Take it and make your voice heard! https://t.co/I5v4oVazXL https://t.co/DSFgQz2RmY 2021-08-17 01:28:57 Two kinds of notebooks: the cheap ones you fill cover-to-cover with scribbles and doodles, and the beautiful ones that end up staying forever blank because they're too pretty to write in, like pristine snow 2021-08-16 19:58:16 It only takes 3 minutes, take the survey! 2021-08-16 17:58:15 It has been amazing to see the effort and passion that the Keras community has invested into creating these. Huge respect for everyone who contributed. Your work is now being read and reused by many. 2021-08-16 17:55:59 There are now 100 (one hundred!) code examples at https://t.co/QFl5mdzgfN, covering essential workflows across computer vision, NLP, generative learning, RL, etc. Created by the Keras community. Focused on readability/clarity, reusability, and best practices. 2021-08-16 16:18:02 Are you a Keras user? You can give us your feedback and help us build the most delightful product possible by answering this quick survey: https://t.co/cD1TLfH5Va 2021-08-16 16:07:08 Uses the Google Research multimodal entailment dataset (https://t.co/dpq2tZ8VLW). Example created by @RisingSayak! 2021-08-16 16:06:04 New code example on https://t.co/m6mT8SrKDD: multimodal entailment. 
Train a model to determine if different pieces of content entail or contradict each other, across different input modalities (text & 2021-08-15 02:15:53 The basic selling point of starting a greenfield project instead of trying to fix the existing codebase is to replace the old & 2021-08-14 22:43:26 RT @techreview: Sophie Zhang, a former data scientist at Facebook, revealed that the company enables global political manipulation and has… 2021-08-14 11:11:21 RT @selectedwisdom: NOAA declared July 2021 the hottest month on Earth ever https://t.co/QEJmhq8itU 2021-08-14 05:33:05 If you missed it: check out this tutorial on learning to use a 2D image to generate *new views* of a scene (from different perspectives) https://t.co/oilZ2Mnjl3 2021-08-14 05:19:00 Two categories of skills: those where the more seriously you take the activity, the better you get (like music composition), and those where the less seriously you take the activity, the better you get (like Twitter) 2021-08-13 03:57:04 @cwarzel This is Darth Jar Jar level of mind control 2021-08-13 03:54:45 The key to get these people to finally start caring about the pandemic was to tell them it's all the fault of scary foreigners. Genius. Maybe now they'll wear masks and get vaccinated https://t.co/SCNPKzdvol 2021-08-12 23:49:29 Deadline: August 20 at 23:59 PT 2021-08-12 23:45:33 (Doing this in my own personal capacity, no connection with Google or Kaggle) 2021-08-12 23:45:01 If you win, I'll send you a signed copy of Deep Learning with Python, 2nd edition (I ship internationally). I'll send it as soon as I have my author copies (within a couple of months). 
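The traceback filtering described in the 2021-08-17 debugging thread above can be sketched in plain Python with the standard `traceback` module. This is only an illustrative reconstruction, not the actual Keras implementation; `filter_traceback`, `_internal_helper`, and `user_train_step` are hypothetical names.

```python
import sys
import traceback

def filter_traceback(tb, is_internal):
    """Keep only user-relevant frames of a traceback, loosely mimicking
    the frame filtering TF/Keras 2.7 applies to TF-internal frames."""
    return [f for f in traceback.extract_tb(tb) if not is_internal(f)]

# Stand-in "framework" call chain: _internal_helper plays the role of a
# framework-internal frame the user does not want to sift through.
def _internal_helper():
    raise ValueError("Exception encountered when calling layer")

def user_train_step():
    _internal_helper()

try:
    user_train_step()
except ValueError:
    tb = sys.exc_info()[2]
    full = traceback.extract_tb(tb)          # includes the "internal" frame
    user_only = filter_traceback(tb, lambda f: f.name.startswith("_internal"))
```

In the real feature, the surviving frames are re-linked into an actual traceback object via `types.TracebackType` (whose `tb_next` became writable in Python 3.7), which is presumably what the "new addition in 3.7" remark above alludes to.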
2021-08-12 23:43:09 Quick rules: - Notebook should have "Keras" in the title to be eligible - Notebook should be able to submit to the leaderboard My selection criteria: - Idiomatic Keras code both for data preprocessing and modeling - Code is concise and highly readable - Follows DL best practices 2021-08-12 23:40:29 Kaggle has just launched a new NLP competition, "Hindi and Tamil Question Answering". Let me add my own mini-contest to it: on Aug. 21 I will review the public notebooks that use Keras and I will select my favorite one. Winner gets a signed copy of my book. https://t.co/nnhNUFsIJI 2021-08-12 20:40:30 Created by @arig23498 and @ritwik_raha. Original paper: https://t.co/J9AAUv4Xu3 2021-08-12 20:39:32 New code walkthrough on https://t.co/m6mT8SrKDD: 3D volumetric rendering with Neural Radiance Fields (NeRF). https://t.co/oilZ2Mnjl3 Synthesize novel views of a 2D scene by learning the volumetric scene function. https://t.co/2ODzYjfsJ2 2021-08-12 18:35:15 The new release of TensorFlow (2.6) is now out! https://t.co/2radEdc8SZ Lots of great features in this release! Among other things, significant improvements to preprocessing layers, which are now part of the core Keras API. 2021-08-12 18:32:34 @divideconcept You can access an advance preview here: https://t.co/1luyISfgBY otherwise the actual product will be available in September or so. 2021-08-12 18:14:33 Thanks for the kind words! I spent a lot of time trying to figure out an intuitive way to explain Transformers, and I like how it turned out. It's unlike anything else you'll find online. https://t.co/6BQyHnisEb 2021-08-12 04:16:32 The best things I've ever created are often the things I was most scared to get started on and where I was least confident I could see them through. 2021-08-12 03:29:14 RT @ASlavitt: Almost every day I hear about some absurd thing people believe about vaccines they learned on Facebook.
Here’s what I said i… 2021-08-11 19:37:10 RT @TensorFlow: Finding the right sample size for model training is easy as 1⃣, 2⃣, 3⃣! With Keras, you can estimate the optimal numbe… 2021-08-11 19:32:16 So don't tell me, "this is implicit / this is magic." Tell me, "I care about configuring X." And do realize that software tools can be used by a variety of people, who don't all care about the same things. 2021-08-11 19:29:36 99% of decisions always have to stay hidden. Unless you're the one writing the software, implicitness is just the default, universal state of software tools. What matters is *which* bits are being surfaced at the tip of the iceberg. And that's an API design question. 2021-08-11 19:28:28 There's no such thing as explicitness or implicitness in software. There's a large number of decisions the software must make, and as the end user, you can only specify a tiny number of them. Good software surfaces the decisions you care about and takes care of the rest. 2021-08-11 18:47:07 Hence why it's crucial for them to do everything they can to keep vaccination rates low and cases high https://t.co/rZm7KUZyEv 2021-08-11 18:35:18 RT @Noahpinion: The stupid urban myth that YouTube is a force for right-wing radicalization has needed to die for a long long time. Hopeful… 2021-08-11 18:15:18 RT @jeffheaton: Here is the latest PDF version of the free 500+ page textbook that I developed for my Applications of Deep Learning course… 2021-08-11 06:05:55 A programming language has a dual purpose: to enable you to express programs, and to shape the way you think about programming. Languages program you. 
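The `item.get_foo(bar=False)` vs. `item.foo` snippet quoted earlier, and the 2021-08-11 point that good software surfaces the decisions you care about, can be illustrated with a toy class. The `Item` class and its members here are hypothetical, invented only for illustration.

```python
class Item:
    def __init__(self):
        self._foo = "value"

    def get_foo(self, bar=False):
        # Explicit: the caller sees and controls the `bar` decision.
        return self._foo.upper() if bar else self._foo

    @property
    def foo(self):
        # Implicit: the same decision is made for the caller, hidden
        # behind a default. Neither style is "more explicit" in absolute
        # terms; they just surface different decisions.
        return self.get_foo(bar=False)

item = Item()
```

Whether `bar` deserves to be at the "tip of the iceberg" is exactly the API-design question the thread above is making: it depends on which decisions the intended users actually care about.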
2021-08-11 02:54:53 Now of course, it's not that hard to understand: the ability to generate hype and the ability to generate meaningful progress / shape the future are entirely orthogonal 2021-08-11 02:53:56 It's funny how at any given time what's hyped in AI is largely meaningless while what's actually momentous is largely under the radar (AlphaFold was a rare exception to this rule) 2021-08-10 22:12:50 RT @fchollet: New code example on https://t.co/m6mT8SrKDD: recipes to improve knowledge distillation workflows (training smaller "student"… 2021-08-10 19:20:09 @jordanestern @paulg Concept drift in text classifiers occurs on a timescale of months. 2021-08-10 18:39:11 @jordanestern @paulg At the same time, the classifier trained on old articles would *simply not work* on today's articles. Too many words/concepts would be unknown or redefined, and the style cues it uses would be outdated. It would produce largely garbage output (or at least very biased) 2021-08-10 18:37:40 @jordanestern @paulg The difference in ratings between what people could predict today (wrt current articles) and what people would think 20 years from now is not something that a classifier could possibly predict. So nothing is lost. 2021-08-10 18:36:27 @jordanestern @paulg There's no tradeoff here. You have to understand that such classifiers work not because they have a crystal ball, but because they latch onto superficial markers that are statistically correlated with evergreeness or not (e.g. lots of dates = not evergreen). 2021-08-10 18:20:32 @jordanestern @paulg What makes his proposal dubious is the notion of training on articles from past decades, which means the classifier would suffer badly from concept drift. The proper setup is actually to create a training dataset by asking people to rate *current* articles as evergreen or not. 2021-08-10 18:19:11 @jordanestern @paulg His proposal is not overly egregious. 
Predicting the evergreen character of a news article is a common problem and works decently well even with basic techniques (fun fact: I won my first Kaggle competition on exactly this topic in 2013: https://t.co/LqrHZmi5dZ) 2021-08-10 17:42:24 Lots of casual proposals that amount to: "what if we trained a text classifier on past biology research papers to predict those that went on to win awards, then we used it to select the next award winners for our prize in physics?" 2021-08-10 17:38:21 Machine learning is too often perceived as a kind of oracle-like power capable of predicting the unknowable and working out of distribution. 2021-08-10 17:25:08 Also, a good curriculum and well-optimized learning practices (like spaced repetition and mind maps) can easily cut your total time by 3-4x. 2021-08-10 17:22:49 It doesn't take 10,000 hours of deliberate practice to master a skill. Obviously it depends on the difficulty & 2021-08-10 04:44:42 The fact that hand-written code has to be read and understood by humans is a powerful regularization term that pushes it towards simplicity. Code written by a search process (whether biological evolution or program synthesis) has no such constraint. 2021-08-09 16:31:50 Created by @RisingSayak Original paper by Beyer et al. https://t.co/tOcQQ2IPDA 2021-08-09 16:31:05 New code example on https://t.co/m6mT8SrKDD: recipes to improve knowledge distillation workflows (training smaller "student" models that retain the performance of larger "teacher" models). 
https://t.co/CsfdoYiAcC 2021-08-09 04:27:32 This is now a baby picture account https://t.co/CIgHbsQYFx 2021-08-09 03:15:56 Pro tip: if a bar asks you for proof of vaccination and you have a picture of your vaccination card saved in Google Photos, just search for "vaccine" in the app and your card will show up 2021-08-09 01:13:45 Free idea: a YouTube channel in the "so satisfying" genre where each video is about watching a series of red FAILED tags in the logs of a unit test suite flip one by one to green PASSED tags as a developer calmly and neatly works through each bug 2021-08-08 23:18:59 @togelius Sure, but if you *are* running a mafia, why would you willingly let yourself be audited? 2021-08-08 22:43:33 Propaganda works, and the modern world features some extraordinarily optimized propaganda delivery mechanisms. That's the main explanation. https://t.co/j84dPavXRr 2021-08-08 22:38:49 @togelius Sounds like "the mafia boss must allow his finances to be audited" 2021-08-08 20:29:19 We live next to each other, we talk to each other, but really we all live alone in our own inner universe. 2021-08-08 20:29:10 Young kids live in very different worlds compared to the adults right next to them. The very same room becomes many times larger, the very same day becomes an entire week. A year becomes an eternity. 2021-08-08 19:21:49 This must be a metaphor for Twitter https://t.co/j8ZbpHVQD5 2021-08-08 18:02:26 @LeafsOfTea My advice: do consider not being a raging racist, and not attributing supposedly immanent moral characteristics to ethnicities and cultures. 2021-08-08 00:10:56 Twitter has a few prominent examples 2021-08-08 00:10:37 Pet peeve: US weebs who spent half a year in Japan in college and are spending the rest of their lives acting as if they were experts about Japanese culture & 2021-08-07 23:13:29 Overuse of acronyms is like using obscure variable names in programming. Communication suffers. 
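As a framework-free sketch of the soft-target idea behind the knowledge-distillation example mentioned above (this is the classic temperature-softening trick, not the code from the linked tutorial; the logit values are made up):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature gives a softer,
    less peaked distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative teacher logits for one sample (three classes). The student
# is trained to match the softened distribution, not just the argmax label.
teacher_logits = [8.0, 2.0, 1.0]
hard_targets = softmax(teacher_logits)                   # T = 1: near one-hot
soft_targets = softmax(teacher_logits, temperature=4.0)  # T = 4: softened
```

Softening preserves the teacher's ranking of classes while exposing the relative probabilities of the non-top classes, which is the extra signal the smaller "student" model learns from.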
2021-08-07 23:12:07 In any large organization where acronyms multiply left and right, you will quickly end up in meetings where half of the people talk about the new YDG and its advantages over NRK, and the other half have no clue what that means... 2021-08-07 15:42:23 @RisingSayak @GoogleColab I'm still learning JAX myself, I don't think anyone should learn it from me for now! But @A_K_Nain's tutorials are great for sure. 2021-08-07 03:08:09 @EhsanHaghighat Note that `Input()` is for Functional model construction only. The Functional API is a graph-building API. See e.g. https://t.co/Makf9SCF5U 2021-08-07 03:07:23 @EhsanHaghighat Try this. https://t.co/1MKO5of7G0 2021-08-07 02:55:04 @SingularMattrix Thanks, Matthew! I think JAX is great. It's a different take from what I'm used to, which is super interesting. It's just taking me some effort to get used to the new mental models and APIs. 2021-08-07 01:56:34 @rodrigobaron_ I don't think the code was wrong, but it could be made a bit simpler and more idiomatic. I need to learn to write idiomatic JAX code. Idiomatax code? 2021-08-07 00:55:39 I've deleted my tweet comparing different approaches to writing DL training loops because some folks told me it made JAX look more difficult than it is. I'm still learning JAX (which has been a lot of fun), and if the JAX code I write is bad, that's entirely on me. 2021-08-07 00:17:40 @EhsanHaghighat Sure, you just need to make sure the inputs are tracked by the tape 2021-08-07 00:06:24 @ChrSzegedy Link? 2021-08-06 23:57:50 @ChrSzegedy Maybe there are better solutions, I don't know. I'm still learning JAX. 2021-08-06 23:56:53 @ChrSzegedy If I don't have any non-trainable weights, I get this, which is much simpler. However, adding non-trainable weights into the picture adds significant complication. The same applies to metric state updates as well.
https://t.co/oQ8zbfoq3W 2021-08-06 23:50:17 @ChrSzegedy Can you show me how you would train a single batch norm layer (to reduce the scope to something simple) with JAX? 2021-08-06 18:50:29 @PMinervini I mean, there's not even any clear connection between `opt` and `model/loss` in that code snippet 2021-08-06 18:48:41 @PMinervini Are you trolling? How is the GradientTape remotely similar to `loss.backward()`? And what you're showing in an end-to-end blackbox, not a low-level training loop. Where are the gradients? What does `backward()` do? Might as well show `model.train_step(input_data, target_data)`. 2021-08-06 18:09:40 Of course if you're just rendering web pages, cost-cutting isn't a factor, but if you're running deep learning models, it is. 2021-08-06 18:09:03 The only trends that will prevent us from reverting back to dumb terminals are: - privacy (some data should stay on the device or at least be decrypted/processed only on device) - cost-cutting (end users are willing to pay for local compute power, might as well leverage it) 2021-08-06 18:04:08 Computing went from mainframe + terminals, to standalone microcomputers, to a hybrid model where data / models / heavy logic are centralized in a datacenter while your terminal still runs a lot of stateless application logic https://t.co/66NnKaZR02 2021-08-06 17:50:36 I've been browser-coding all pandemic. The only real downside is that a laptop screen is small, and Chrome removes extra screen space you could be using for your code. But that's more of a "coding on a laptop" issue. 2021-08-06 17:49:01 Because the 2nd approach adds little value (latency isn't critical here) and feels unnecessary, you can safely bet on browsers. 2021-08-06 17:48:11 Good take. Files/artifacts are in the cloud, and productive workflows increasingly require advanced cloud integration features. Long term, either you code in the browser, or you code in a local client that behaves for all intents and purposes like a browser. 
https://t.co/aPmZgpO2Ro 2021-08-06 17:43:30 @svpino All approaches have their magic. Symbolic gradients use an implicit computation graph with unknown entry nodes. The GradientTape tracks stuff in the background. grads = grad(fn)(params, ...) is pretty baffling at first and takes a while to grok. 2021-08-06 17:40:09 @belsebubb Yes, as part of graph computation -- it won't work as-is with eager execution. 2021-08-06 17:33:49 I originally didn't like the GradientTape too much (I'm not fond of scopes in general), but the more I've worked with it the more I've liked it. It's nice to use, even beyond the value proposition of unifying eager and graph execution 2021-08-06 17:31:45 @by_niyi It's very concise, but it does introduce a bit of mystery as to what is the computation being differentiated: you specify the end point of the computation (the loss) but you don't specify the starting point, which is implicit 2021-08-06 17:22:09 Folks. Which approach to writing training loops do you prefer? Why? Feel free to propose alternative code snippets. https://t.co/QeahUGOzsO 2021-08-06 15:13:26 @qhardy That's just horrible. I'm grateful for modernity. 2021-08-06 14:58:38 @qhardy The good old times before vaccines and antibiotics, when the child mortality rate was ~30% 2021-08-06 14:37:08 Who would use Windows without the apps? Who would use Python without the libraries? It's not the features. It's the ecosystem. 2021-08-06 14:36:01 Second point: no particular feature or performance advantage of a system can last very long. But ecosystem gravity is extremely tough to replicate, and is the single most impactful element when it comes to *reducing time to solution* for end users (which is the end goal). 2021-08-06 14:34:40 First point: over time, you need to be able to change your internals without changing your public APIs (which users & 2021-08-06 14:29:13 From a maintenance perspective, the key to system longevity is to decouple interfaces and implementations. 
From a relevance perspective, the key to system longevity is to achieve a critical ecosystem mass. 2021-08-06 04:37:59 @mikeleinart Literally no one knows how to pronounce this word. It's a mystery 2021-08-06 04:24:39 Do you pronounce like fun-guy, or fun-jy? Or maybe even fun-guee or fun-jee? I think it should be fun-jy 2021-08-05 17:52:48 RT @DoctorVive: "Soon after Biden...released his $2T climate plan...collective ad spend by the companies like Exxon Mobil—as well as powerf… 2021-08-05 04:26:14 And yes, obviously, using i as index in a loop or using the abbreviation num_* is completely fine. The question is not "is the variable name a full word", it's "is it understandable by everyone reading the code". In m cs s var nm ct b intp. Please use common sense. 2021-08-05 01:31:17 In the tech industry, you often encounter the idea that following best practices slows you down, that good hackers ship fast by all means necessary. IMO at any time scale longer than a weekend, *not* following best practices slows you down. 2021-08-04 23:21:32 Accurate. If you want to learn about yourself, make art. https://t.co/NjFJEXdzUK 2021-08-04 15:33:14 RT @fchollet: The 2021 developer survey results are here (thanks @StackOverflow)! 60k developers were asked about the technologies they use… 2021-08-04 15:10:51 The main cost of the things in your life isn't what you paid for them, it's how much of your time and mental space they occupy. 2021-08-04 10:38:49 RT @GuglielmoIozzia: Always good to remind this It's often frustrating to spend time fixing code just to reproduce ML papers. The #keras… 2021-08-04 04:47:59 @suicuneblue Yep, we're fixing this problem right now. Major effort. The results will only be in the 2.7 release though. 2021-08-04 04:27:25 Unless you're coding with a Nokia 3310 keyboard, that is. In that case, do what you must. 2021-08-04 04:25:39 Code is meant to be read by others, and it's a very information-dense medium. 
So write your code like you'd write a book or an article, *not* like a telegram or a SMS circa 2005. Spell out those variable names. No one-letter variables or weird abbreviations, please. 2021-08-03 19:43:12 @EhsanHaghighat @StackOverflow It roughly matches the percentage of the user base still using TF1, so that would be my guess. Good news: the percentage is steadily decreasing. 2021-08-03 19:17:08 Full results here: https://t.co/486zng8OQI I find it amazing how fast machine learning is becoming part of the toolbox of every developer out there. 2021-08-03 19:16:39 The 2021 developer survey results are here (thanks @StackOverflow)! 60k developers were asked about the technologies they use. 10.14% of all developers reported using Keras, up from 6.2% last year. That's 64% growth! 16.5% also reported using TensorFlow, up from 11.5% last year. https://t.co/AOYtdVcMa5 2021-08-01 23:47:03 In many situations, data and evidence can be murky and subject to interpretation. The evidence for the safety and effectiveness of the Covid vaccines is *not* one of those situations. It doesn't ever get much clearer than that. Get your shots. I'm ready for Covid to be over. https://t.co/yauqqNWicr 2021-08-01 23:41:10 @qhardy The shining capitol on a hill. https://t.co/IcJcHJc1M8 2021-08-01 22:17:18 RT @DrTomFrieden: Delta emerged because of uncontrolled spread, and I worry that even more dangerous variants—including vaccine-resistant o… 2021-08-01 20:34:56 This is a fairly common form of data leakage. https://t.co/3Waim4V1wQ 2021-08-01 17:09:37 @vicariousdrama @dougfort Please, I urge you to get off Facebook. 2021-08-01 16:09:54 The vaccines work remarkably well. Yet, because a number of vaccinated individuals got Covid, many people who are either illiterate or speaking in bad faith claim that vaccines are useless. This meme needs to die. 
https://t.co/l9VolDGkKI 2021-08-01 02:32:14 @michaelbyrne Our entire society and economy are being held hostage by people who won't get vaccinated, despite having the option to do so at no cost, causing the pandemic to continue its run. 2021-08-01 02:18:45 The task of computing in general, and AI in particular, is to build better cognitive tools for humans. Tools to make sense of the world in ways not possible otherwise, tools we can use to achieve bigger goals. 2021-08-01 01:01:02 @funicular1 1/1,000 breakthrough infection rate, 1/100,000 breakthrough death. Literacy can be a useful asset 2021-08-01 00:56:01 @JarretCF It's a virus. Not a bacteria. Why are pro-Covid trolls always like this 2021-08-01 00:54:48 @nfosec19 In fact, there is. 2021-08-01 00:53:04 Please get your vaccine so this thing can finally be over. Given how infectious Delta is, there's likely only two possible long-term outcomes: either you get the vaccine or you get the virus. So stop wasting our time https://t.co/NBeZYdHkJe 2021-07-31 22:28:52 RT @TechOaktree: Three things we love about Keras: it provides clear patterns showing how to implement ML logic, documents those patterns a… 2021-07-31 18:30:37 When we set goals, we're often more attracted by the optionality of being able to do something, than by actually doing it. Knowing what you really want, and then taking the shortest path to it, means that you end up pursuing a lot fewer things. Leaves you more focused & 2021-07-31 02:20:59 @RickHunter7 Anti-vaxers, Covid deniers, etc. The end result is the same in every case: more people get sick, more people die. 
2021-07-31 02:18:34 Whenever our species goes through some hard times, there's a group of people -- consistently the same people, across generations and continents -- that goes "I know, let's side with the hard times and make things even worse" 2021-07-31 02:15:38 Pro-Covid propaganda (and its enablers) really is the worst 2021-07-31 01:51:11 @Grady_Booch Isn't that the Keras Functional API? 2021-07-30 22:54:07 The purpose of both art and science is to reveal what you can't know from pure experience 2021-07-30 21:47:50 RT @sundarpichai: Among our research projects at Google - a time crystal, eternal change for no energy. Glad people are trying to evade the… 2021-07-30 20:03:07 Influence is the ultimate currency, and credibility is its proof-of-work. 2021-07-30 19:27:51 The one major difference with NumPy is that arrays aren't assignable -- you'll still have to use tf.Variable for storing mutable state. 2021-07-30 19:27:07 You can learn more about the TensorFlow implementation of the NumPy API here: https://t.co/N5Yy00LGdi It enables you to use TensorFlow as a distributable, GPU-accelerated NumPy. Fully compatible with Keras workflows. 2021-07-30 15:45:52 Do you like the NumPy API? Do you wish you were able to write Keras models using the NumPy math syntax? You can! Find out how with this new tutorial on https://t.co/m6mT8SrKDD: https://t.co/TQdUpmW0OB 2021-07-29 22:40:11 RT @OReillyMedia: [NEW RELEASE] Practical Machine Learning for Computer Vision -- Learn how to design, train, evaluate, and predict with mo… 2021-07-29 22:40:05 @martin_gorner Congrats! 2021-07-29 22:20:16 Despite superficial similarities, a program space isn't the same as a space program 2021-07-29 15:44:12 RT @todd_gureckis: Here is link to preprint of the paper https://t.co/gfmUR3JdTq 2021-07-29 15:37:06 An insightful analysis of human performance on a subset of ARC tasks. Fantastic work! https://t.co/8NoFFO1DGw 2021-07-29 15:34:30 Even tweets can have impact. 
It's fun watching the ideas you've planted grow and bloom over several years https://t.co/MvwtGlft3c 2021-07-29 15:29:01 To design a canyon, shape the path of a stream, then wait for a few million years. Long-term impact is rarely visible to the naked eye. 2021-07-29 10:38:26 It's easy to forget how big the world is. It's really, really big. You could create a device that displays one person's face every second and that never shows the same person twice. In perpetuity. 2021-07-29 02:37:26 @you0_0ii Eeldom knows no borders, my friend 2021-07-29 00:54:54 Today is eel-eating day 2021-07-28 22:44:29 @migueldeicaza AWS Infiniverse 2021-07-28 22:43:45 What we do regularly shapes who we are. The best way to avoid becoming a boring person is to make sure there's a place for creativity and wonder in our daily routine. 2021-07-28 21:24:21 RT @TensorFlow: From English-to-Spanish ↔ Learn how to build a sequence-to-sequence Transformer model with Keras to perform a machine tr… 2021-07-27 17:11:02 Created by @ariG23498. Also check out the original paper: https://t.co/JfN6jbro6s 2021-07-27 17:10:36 New code walkthrough on https://t.co/m6mT8SrKDD: "involution networks". While convolution is location-agnostic and channel-specific, involution on the other hand is location-specific and channel-agnostic. Check it out: https://t.co/eULT2mKBH4 https://t.co/tiK2hPfFzq 2021-07-27 16:54:52 RT @qhardy: There's a standard curve in science & 2021-07-27 16:13:31 RT @_HannahRitchie: Share of electricity that comes from fossil fuels: South Africa: 89% Australia: 75% India: 74% Japan: 69%… 2021-07-27 00:00:06 2. To implement a fully-featured training loop (that supports distributed training, compilation, and callbacks out of the box), subclass keras.Model and define the training logic in train_step. Metrics that you want to reset across epochs should be listed in the metrics property. https://t.co/gSnfTgkQhc 2021-07-26 23:58:28 1.
To create a custom layer, simply subclass keras.layers.Layer, create the layer state in __init__() (or alternatively, in build(input_shape) if you need to know the input shape to create the state), and define the layer's computation in call(). https://t.co/VO3R8pS9wq 2021-07-26 23:57:12 Two useful Keras patterns demonstrated in this example: 1. Create custom layers by subclassing the Layer class. 2. Writing custom training loops by overriding train_step() in the Model class. Let's take a look. https://t.co/YS6kRnNrba 2021-07-26 17:03:17 RT @RisingSayak: New example showing how to implement a VQ-VAE in #Keras with the PixelCNN part included. The example goes through several… 2021-07-26 16:05:13 Created by @RisingSayak! Also take a look at the original paper: https://t.co/wmpAqCdrJW 2021-07-26 16:04:38 Now on https://t.co/m6mT8SrKDD: tutorial on vector-quantized variational autoencoders (VQ-VAE), a type of VAE that uses a discrete latent space. Check it out! https://t.co/yhQgJYcLDp 2021-07-25 01:33:39 @barrkel You call that "self-taught". As I do in this thread. 2021-07-25 01:01:24 Though to be frank "self-taught" and "self-made" are generally pretty empty words. It's rare for anyone to do anything in actual isolation. Even if you learn from books and YouTube videos, you still have teachers, whether you acknowledge it or not. 2021-07-25 00:57:19 And FYI, if you learn via contact with others, through mentorship, teamwork, and collective projects, then by definition you are not an autodidact (self-taught). Being "self-taught" doesn't mean "didn't go to college". It means you learned on your own. 2021-07-24 21:30:45 Including my own code from before I joined the industry 10 years ago. It did the job but it wasn't readable or maintainable. Because I simply never faced these constraints. You can't be good at something difficult you've never done. 2021-07-24 21:28:02 You won't magically acquire skills that you've never practiced, even if you're an absolute genius. 
You can't excel at a team sport by training exclusively solo. I have seen many coders and never saw an exception to this rule. 2021-07-24 21:25:40 You may not like it, but it's true. It doesn't matter how smart you are or how clever your code looks. I have never seen a lone wolf write good code. It doesn't happen for the same reason that someone who plays basketball exclusively on their own cannot become a great player. 2021-07-24 07:33:40 Absolutely scandalous infographic. The use of the Fahrenheit scale, I mean. Just use Celsius like normal people. https://t.co/EycGFyNWcn 2021-07-24 06:41:34 You see, you can be angry at my takes (because they're true and you know it) all you want. It doesn't matter. The only thing that matters is what you actually do. What you make. And that's going to have to be bigger than yourself. 2021-07-24 06:37:15 (making all the right people angry here) 2021-07-24 01:56:18 @ChrSzegedy Yes but I've also been working on various teams for the past 10 years 2021-07-24 01:31:03 @shiraeis Mentorship, working on a team for several years, etc. You can't learn a team sport by playing alone 2021-07-24 01:25:07 Side note: this is why autodidacts who work alone never write good code. They simply don't face the kind of requirements that lead to writing good code. 2021-07-24 01:12:24 Writing code is about making your computer do what you need. Writing good code is about enabling teams of strangers 5 years from now to make their computers do what they need. 2021-07-23 18:33:38 Great job @huggingface team, in particular @carrigmat 2021-07-23 18:32:23 HuggingFace TensorFlow pipelines have switched to using Keras compile()/fit() for training, as well as tf.data.Dataset for data loading. Check it out: https://t.co/Gx1RkGDFSK https://t.co/5EBhAGKPrG 2021-07-22 19:45:19 Their entire personality is based on fear. 2021-07-22 19:44:42 They're all vaccinated. They're not that stupid, and each of them is terrified by death/disease. They're just afraid to say it.
https://t.co/PmsJPzqLwU 2021-07-22 17:10:30 If the creator of Minecraft felt like he needed a CS PhD + deep mastery of C++ to work on his ideas, Minecraft *wouldn't exist*. Success generally doesn't hinge on the difference between perfect execution and average execution, but in the difference between something and nothing. 2021-07-22 17:08:37 Lots of people missing the point... 1. Creating is about catching your ideas as they appear, and shipping them. You can only do that with the tools you have. Any tool that helps you ship is a good tool. 2. You can achieve greatness with any toolset. It's the maker, not the tool 2021-07-22 16:25:47 The best programming language is the one you know well and enjoy using. The one that makes you feel productive. Minecraft was written in Java, by someone who knew Java. 2021-07-22 16:14:06 This is AI at its best: helping humans do science faster and more effectively. https://t.co/AjZutJuoA9 2021-07-22 14:44:38 RT @huggingface: Transformers v4.9.0: Brand new @TensorFlow Examples : Examples for many NLP tasks are now available using Keras onl… 2021-07-22 02:28:29 Tesla was right, life is all about positive energy and good vibes https://t.co/ya9RL3WXeS 2021-07-21 21:35:00 RT @nytimes: The effects of fires in the western U.S. and Canada are being felt thousands of miles from the flames. https://t.co/oO6bHvpzII… 2021-07-21 18:23:15 RT @_KarenHao: I read @sheeraf & 2021-07-21 04:05:01 Not long ago some people were saying "Russia and Canada will actually end up better off thanks to climate change!", but of course, that's not true. They too will be affected by wildfires, smoke, scorching heat. Literally no part of the world will end up better off. 2021-07-21 03:57:24 The US is currently covered in wildfire smoke. Whether it's fires, smoke, hurricanes, extreme heat events, floods, or rising sea levels, there's no corner of the world that will be left unscathed by the climate crisis. 
https://t.co/Qy09Pla8AR 2021-07-21 00:20:40 @__mvalente @KarlLandheer Yes, chapters 5 and 14 for the most part, but generally the entire book 2021-07-20 23:13:48 @KarlLandheer Or embedding one-off discrete objects on a curve with no chance of generalization (i.e. most Deep RL). Lots of people are playing with the tech with no idea about the first principles that make it work 2021-07-20 23:06:41 "Meh, it's just a factory, boring... but look at my cool model rocket!" No, the boring factory is going to change the face of the world, and meanwhile your GPT model rocket won't scale past the toy stage 2021-07-20 23:02:49 This is a bit like watching a bunch of people trying to make steam-powered airplanes and rockets work in the year 1800 (it won't) while largely missing the world-changing potential of applying steam power to trains and large industrial machinery 2021-07-20 22:59:29 Deep learning isn't magic, and it won't work at all on the specific problems you want it to work on, but it is capable of more than you know when it comes to problems where the manifold hypothesis applies (and that's a lot of problems) 2021-07-20 22:57:51 Most people I meet overestimate what deep learning can do (it's curve-fitting, don't expect it to do discrete symbol manipulation, it will solve symbolic tasks via embedding + interpolation) and simultaneously underestimate what you can do with curve-fitting given enough data 2021-07-20 22:00:48 @JohnMarkR Just use Colab to run the code examples in your browser, you can access the files here https://t.co/QXQOV2YJ6g 2021-07-19 14:00:07 @somartist "Did you make it by hand or digitally?" 2021-07-19 05:48:08 Having finished Braid a few times only moderately prepared me for following the plot of Tenet 2021-07-19 03:09:03 @jkwong Two social networks with the same business model can end up with very different outcomes wrt disinfo (e.g. Twitter 2016 vs 2020) and ad-free networks (e.g. 
chat apps with large scale groups) can end up having a disinfo problem. 2021-07-19 02:49:03 @jkwong The business model is not the problem. It's a matter of willingness to combat disinformation campaigns. Compare Twitter 2016 and Twitter 2020, for example. 2021-07-19 02:08:48 @BharatDharma This is basically like saying "it's ok to let organized crime run rampant, individuals should learn to defend themselves" 2021-07-19 02:02:37 Certain social networks willingly let themselves be used by bad actors as attack vectors on the minds of vulnerable people, with disastrous consequences (e.g. the anti-vax movement). Because it's profitable and they don't care. Not just in the US, but around the world 2021-07-19 01:58:53 The human mind is highly vulnerable to certain patterns of social & 2021-07-19 01:54:55 "Human nature has good & 2021-07-18 19:12:03 @migueldeicaza Facebook's CEO is the kind of guy who'd say this, so it checks out. https://t.co/g6WyogFV7W 2021-07-18 02:33:55 When it comes to social and ethical issues, you shouldn't think of AI as something unique and special that can be cleanly isolated from the rest of computer science and the tech industry. It's a part of the system, like databases and cheap high-resolution camera. 2021-07-17 21:43:06 RT @NWSBoise: Smoke continues: A strong upper level high pressure is apparent on the HRRR smoke forecast today over the West. Notice how t… 2021-07-17 18:05:06 Twitter has access to an entire decade of my thoughts on every possible topic and they still serve me ads like this https://t.co/u77k8OtMJH 2021-07-17 01:46:14 RT @RisingSayak: New example on Conditional GANs. I think this recipe is important to know if you are into generative deep learning. Disc… 2021-07-17 01:05:35 The Kremlin has been openly supporting (and coordinating with) European far-right parties for many years. It's a very similar story as what happened in the US in 2016, with the results we know. 
2021-07-17 01:04:24 Before you ask, no, this is not what happens spontaneously when you let people exercise free speech. It's the result of deliberate and coordinated campaigns, often seeded by the propaganda arm of hostile nations (e.g. RT, Sputnik, etc. make a killing on French social media)
2021-07-17 01:02:47 Not very young folks, mind you. Young-ish folks, 30-40. The degree of radicalization of each demographic segment is proportional to 1. time spent on FB
2021-07-17 01:00:25 In the US, the main propaganda vectors are Fox News (+ more niche channels like Newsmax & In France, there's no Fox News, so it's all Facebook. And guess what: it's younger folks that are on FB.
2021-07-17 00:58:04 In both cases, the causal variable is exposure to far-right propaganda. The more misinformation you consume, the more radicalized you become. But the misinformation channels and their audiences differ
2021-07-17 00:56:06 In the US, there's a significant correlation between age and far-right radicalization &
2021-07-16 18:47:14 Another fantastic example created by @RisingSayak
2021-07-16 18:46:56 New code walkthrough on https://t.co/m6mT8SrKDD: conditional GANs, for generating new images while controlling their appearance (e.g. by conditioning the generation process on a class). https://t.co/zoAEXfvkcW
2021-07-16 18:33:21 Having kids reconfigures how you think about life, in ways that are difficult to describe
2021-07-16 18:23:44 @tszzl @MegaBasedChad It's just information hygiene, nothing against you personally.
2021-07-16 18:23:38 @tszzl @MegaBasedChad I'd rather focus on people and content that I enjoy listening to, that inspire me or make me better informed. I just don't have any time to dedicate to trolls who log in every day thinking "I'm going to make someone's day worse" (take a look at your timeline)
2021-07-16 18:23:15 @tszzl @MegaBasedChad But in retrospect that makes perfect sense: of course you didn't start being a troll just last week. Now, don't take it personally. I have lots of pressures on my time and there are lots of people on Twitter.
2021-07-16 18:22:50 @tszzl @MegaBasedChad Since you made it into my mentions, I'll take the bait and reply. I blocked you a few days ago, for being a jerk and a troll (as you probably know). I had no idea you were the author of that tweet from 2018.
2021-07-16 17:12:18 @serengil Still from TensorFlow. Only the development process changes
2021-07-16 06:09:47 @fadibadine Yes, everything happens on keras-team/keras now!
2021-07-16 06:07:17 It's impossible to learn if you're not trying new things and making mistakes. Mistakes are cool. As long as you don't repeat the same ones too many times
2021-07-16 01:49:37 @neurobongo It's very much the same dynamic as software development in general -- no specific technology is a moat, but solid processes/culture are hard to copy and provide a real differentiating advantage
2021-07-16 01:49:02 @neurobongo I don't mean so much infra as in "ability to run experiments with a research paper as ultimate deliverable" (which is accessible), more like infra for production MLOps, which is hard and requires hard-won know-how
2021-07-16 01:33:30 RT @TensorFlow: Learn to build, train and evaluate three modern MLP models for image classification. More info here ↓ https://t.co/p…
2021-07-16 01:24:41 @neurobongo There are moats in deep learning, but they're never models/algorithms, because those are always trivial and easy to duplicate (it's just gradient descent). The moats are datasets and infrastructure. See thread from 2017 https://t.co/ornOHhSKhB
2021-07-16 00:00:16 @alippai Because there's no release yet. The first release (2.6) will come in a couple weeks.
2021-07-15 23:59:27 @medalihamza93 No, at this time you should still use tf.keras. Only the development process changes
2021-07-15 23:30:19 For folks asking "why?", here's the explainer: https://t.co/arB6o7UECW We've done this to make it much easier for folks to contribute.
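The conditional-GAN recipe mentioned in the code walkthrough above boils down to feeding the class label to the networks alongside the usual inputs. A minimal sketch of the conditioning step, assuming a one-hot label concatenated to the generator's latent vector (the function name and dimensions are illustrative, not taken from the walkthrough):

```python
import numpy as np

def condition_latents(latents, labels, num_classes):
    """Concatenate one-hot class labels to latent vectors, producing
    the class-conditioned input of a conditional GAN generator."""
    one_hot = np.eye(num_classes)[labels]            # (batch, num_classes)
    return np.concatenate([latents, one_hot], axis=-1)

# A batch of 4 latent vectors of dimension 128, conditioned on 10 classes:
z = np.random.normal(size=(4, 128))
y = np.array([3, 1, 0, 7])
gen_input = condition_latents(z, y, num_classes=10)  # shape (4, 138)
```

The discriminator is typically conditioned the same way, e.g. by concatenating the label (broadcast to a feature map) with the image channels.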
2021-07-15 22:53:12 Reminder: Keras development has moved back to keras-team/keras (https://t.co/eu4PExJGJ8), and if you want to make a change, the PR section is open for business
2021-07-15 22:42:23 RT @guardian: 'Catastrophic' flooding hits western Germany leaving dozens dead – video report https://t.co/pi0fOZ2oR4
2021-07-15 17:57:19 RT @kevinroose: This stat was discovered by researchers using CrowdTangle, the FB-owned data tool I wrote about yesterday. https://t.co/6M1…
2021-07-14 23:34:55 RT @mark_dow: If you follow @FacebooksTop10, you know that the Top 10 Engagement list every day is dominated—literally 9 or 10 of 10—by con…
2021-07-14 18:58:37 RT @_lewtun: tl
2021-07-14 18:29:38 RT @carrigmat: Firstly, we replaced TFTrainer with native Keras. Trainer and TFTrainer are internal @huggingface classes that abstract away…
2021-07-14 18:29:30 RT @carrigmat: Our @TensorFlow examples push for the Transformers library is now finished - check it out at https://t.co/MqdKbZUJUc! Every…
2021-07-14 18:28:03 An update https://t.co/Vt33oigV5j
2021-07-14 18:26:33 RT @TwitterSupport: We had big hopes for Fleets, but now it's time to say goodbye and take flight with other ideas. Starting August 3, Flee…
2021-07-14 18:26:15 RT @fchollet: These new Twitter features will only last for a few Fleeting Moments
2021-07-14 02:02:03 RT @fchollet: Today on https://t.co/m6mT8Sa9M5: three MLP models for image classification, demonstrated in less than 300 lines of code in t…
2021-07-13 22:44:41 RT @dollarsanddata: For most of history, whether your kid made it past age 15 was the same as a coin flip. One of the most amazing charts…
2021-07-13 20:31:48 @TommyLofstedt You can add new methods to a model while keeping its definition in Functional style through one of the following two patterns: https://t.co/Rns1eFJB4D
2021-07-13 19:48:40 RT @martin_gorner: And here is Xception, by @fchollet, still from "Practical ML for Computer Vision" https://t.co/SbscmmYv3H. A simple arc…
2021-07-13 19:48:36 RT @martin_gorner: More ML architecture pics from "Practical ML for Computer Vision". Here is ResNet 50: The book: https://t.co/SbscmmYv3H…
2021-07-13 19:45:43 @DrGroftehauge If you have a Functional model, then using SavedModel or the h5 format both work fine. If you have a subclassed model, then SavedModel is a kind of one-way export. If you want idempotency you should load the model by reinstantiating the Python class first, then loading the weights
2021-07-13 19:44:02 @DrGroftehauge ...and so when loading the model, you will get a different Python object wrapping the TF graph, not your original subclassed model class.
2021-07-13 19:43:14 @DrGroftehauge If your model is a Functional model, then SavedModel will include its config and it will be reconstructed as the same Python object upon loading. However, if you have a subclassed model, SavedModel won't include the bytecode, only the TF graph
2021-07-13 04:53:25 RT @nutsiepully: 5) Since the model is an interpretable data structure, you can use tools like https://t.co/5DRvTjmNsh to quantize and spar…
2021-07-13 02:49:43 @mat_kelcey The time I save thanks to character autocompletion in Sublime Text is about equal to the time I waste deleting unwanted autocompleted characters
2021-07-13 00:38:11 RT @minxdragon: I have to say, Keras' model.summary() and plot_model are unbelievably useful for debugging. Not only is it nice to see your…
2021-07-13 00:26:57 @DynamicWebPaige Bourbon goes great with milky drinks. Hot chocolate bourbon is
2021-07-12 22:37:03 @svpino Same, I generally use the Functional API even for sequential-like models. By the way, you can actually use custom training/evaluation methods with Functional models, like this: https://t.co/TJeUvgPphZ
2021-07-12 21:35:35 @feedyurhed So if you want to load your model in JavaScript, you'd have to write a JS version of your model first, then you'd load your saved weights. This is potentially error-prone. With the Functional API, you can save your model in Python then reload it in JS w/o writing any model code.
2021-07-12 21:34:09 @feedyurhed I guess you mean only saving the weights, and always re-executing the original code when loading the model. This is definitely better than saving the bytecode, but it's limited: it implies you will still have access to the original code, and it won't work across platforms.
2021-07-12 21:11:42 @karlhigley But if you want to decouple these extra methods from the model-definition style, you can also just do this: https://t.co/M9QerkFCJ0
2021-07-12 21:09:29 @karlhigley I see, I think you may be thinking of the following style -- basically a Functional model, but defined by subclassing the `Model` class (which is useful for packaging and for adding/overriding methods, like `train_step`, etc.) https://t.co/dG7XxxtCo8
2021-07-12 20:37:22 @karlhigley You mean, implementing a one-off version of the Functional Model class every time you make a new model? Doesn't seem very scalable. If your model is defined in the body of the `call` method, then it's made of bytecode. Turning it into an actual graph will require extra work.
2021-07-12 19:51:15 RT @capeandcode: @fchollet The simplicity of Keras, allowing to make quick models without much hassle when not needed is truly beautiful.
2021-07-12 19:49:23 That's it for this tweetorial. Feel free to chime in with your own takes on pros and cons of the Functional and subclassing approaches!
2021-07-12 19:48:38 Note that you don't have to inline your Functional model definitions all the time -- complex models should be broken down into stateless functions (one function per architectural block). Here's an example of a Transformer for timeseries classification. https://t.co/gBi4mO2FyT
2021-07-12 19:42:58 A last advantage of the Functional API I haven't listed here is that it is much less verbose, because it is less redundant (no need to list/name each layer twice). Consider this subclassed VAE vs. an equivalent Functional model... https://t.co/hkxVE8eXlZ
2021-07-12 19:39:39 Many runtimes other than Python TensorFlow understand the Keras graph-of-layers format, such as TF.js, CoreML, DeepLearning4J... A high-level, human-readable saving format is much easier to implement for third-party platforms.
2021-07-12 19:37:38 b. Save it as a SavedModel -- which is a form of one-way export (of the TF graph) and won't let you reconstruct the exact same Python object. A graph of layers is a data structure
2021-07-12 19:36:45 If your model is a Python subclass, to serialize it you could either: a. Pickle the bytecode -- which is completely unsafe, won't work for production, and won't work across platforms
2021-07-12 19:35:17 3. The model is a data structure, not a piece of bytecode. This means it can be cleanly serialized and deserialized -- even across platforms. keras.Model.from_config(functional_model.get_config()) reconstructs the exact same model as the original.
2021-07-12 19:32:47 Having access to internal nodes also means you can access an intermediate layer output and leverage it in a new model. This is a killer feature for feature extraction, fine-tuning, and ensembling. Let's add an extra output to the model above: https://t.co/gCxafm21UF
2021-07-12 19:30:31 2. You get access to the internal connectivity graph. This means you can plot the model, for instance. This is great for debugging. Like this: https://t.co/ZnG6ym9yei
2021-07-12 19:28:43 Further, it's even capable of standardizing inputs to what it expects: if you pass data of shape (batch_size,) to a model that expects (batch_size, 1), it will just reshape it. Likewise for dtype conversion (e.g. float64 will get converted to float32).
2021-07-12 19:28:05 1. Because the model has known input shapes, it's capable of running input validation checks, for easy debugging: https://t.co/1B8E7GXmK1
2021-07-12 19:25:08 But there are several key advantages of the Functional approach over the subclassing approach: 1. Your model has known input shapes. 2. You get access to the internal connectivity graph. 3. The model is a data structure, not a piece of bytecode. Let's see what these are about.
2021-07-12 19:22:11 Now, of course, you could also define such a model as a Python class. It would then look like this: https://t.co/2oU0vjOIHe
2021-07-12 19:21:15 This builds the following graph: https://t.co/WQzvf6bJZh
2021-07-12 19:19:45 Tweetorial: the Functional API in Keras. Deep learning models are basically graphs of layers. Therefore, an intuitive API for defining deep learning models should be a *graph-definition API*. That's what the Functional API is: a Python-based DSL for graphs. It looks like this: https://t.co/rthCY2Jqhz
2021-07-12 04:00:08 @the_lazy_folder Thanks for the kind words! I appreciate it. And yes, Keras is a collective project and everyone is welcome to join and contribute.
2021-07-12 03:42:25 If you feel angry and you want to say something, then say something positive about what you believe in. Instead of quote-tweeting the bad guy, promote someone who deserves it. Go make someone's day. The dogpile doesn't need you.
2021-07-12 03:41:35 Lastly, personal attacks aren't the right way to further the goals you believe in. I get it, you feel pressure to weigh in lest you be perceived as being on the wrong side of the controversy of the day, and you want to show off your first-rate snark to your peers. Still, don't.
2021-07-12 03:40:19 Second, even if you're sure the target is a bad person acting in bad faith -- a troll -- dunking on them is playing right into their hands. Trolls feed off attention and controversy. So don't help them.
2021-07-12 03:39:50 I've personally had pretty vile harassment campaigns run against me by people who saw my work on Keras as a threat to them. I still get occasional insult emails -- it's been going on since 2017. I'm pretty sure it started from just one person.
2021-07-12 03:39:24 First of all, you don't know the target. Chances are it's just a regular person acting in good faith -- not a monster, not a troll. Often, harassment campaigns are not organic, but motivated -- started by one or two people who see the target as a competitor or otherwise a personal enemy
2021-07-12 03:38:33 You're probably not a good judge of whether the target "deserves it", and even if the target is in fact a bad person, joining a harassment campaign is always counterproductive. There are no exceptions.
2021-07-12 03:38:02 Social media mobs are everywhere these days. I want to tell you: there is *no* context where joining a mob to dogpile on someone with your own insults is a good idea. So don't do it.
2021-07-11 18:32:18 @JimLloyd @GrimmCollin @assadollahi This is accurate: the goal of an MVP is to take the shortest path to achieving initial product-market fit, but that is not the goal of a "first basic end-to-end system". Before walking you must crawl, but crawling is not a viable locomotion strategy
2021-07-11 17:35:11 @kanjun Incremental change != Continuous optimization https://t.co/xqQvYfodAs
2021-07-11 17:33:32 @kanjun The simple system is an information-collection device about the end-to-end problem (which is critical for success), and because it is useful &
2021-07-11 17:31:54 @kanjun Gradient is not a good metaphor because we're not talking about continuous optimization at all. But it is indeed about information discovery and grounding the meaning of the task
2021-07-11 17:24:47 This is also how babies learn, of course: the first step towards performing an advanced interaction with a thing is to perform a much simpler (potentially very different) interaction with the thing, not to master the first part of the complex interaction
2021-07-11 17:20:40 The best first step towards building a complex end-to-end system is to build a basic end-to-end system -- not to build a submodule of what you think the complex system should look like
2021-07-11 02:15:24 @oneunderscore__ Legacy media brain: "haha he's doing it to boost his fake salt-of-the-earth credentials with his audience, let's dunk on him" Extremely online brain: "smart, he wants high-profile liberals to dunk on him so he can make a name for himself in the culture wars, and it's working"
2021-07-11 01:04:41 10 weeks old! He's already pretty social, smiling back at people and laughing out loud. https://t.co/7v00LKAhyn
2021-07-10 21:52:30 @MattAlhonte @unfinitude The purpose of finding the right data structs / layering / architecture is to enable collective development &
2021-07-10 21:48:38 @MattAlhonte @unfinitude Yes. Abstraction is just a tool to achieve these goals. A good abstraction in CS is one that mirrors the mental models of the people who manipulate it, and that is also structured in a way that allows for future evolution (modular and hierarchical)
2021-07-10 21:12:38 The hardest problem in computer science is people
2021-07-10 17:00:45 @A_K_Nain @danijarh TF can trace Python control flow to a significant extent via autograph
2021-07-10 16:58:52 @absudabsu Saying "it doesn't answer why" is a tautology. There is no why in science, only how. Why is used to describe human intent.
2021-07-10 16:55:30 @absudabsu Just read the thread. A description says "given this input, we observed this output" ("given this search query, we get these results"). An explanation provides the model that produces the output (e.g. PageRank algorithm) and thus generalizes to arbitrary inputs.
2021-07-10 15:45:40 Thinking more about this, I think it comes down to the fact that 95% of https://t.co/m6mT8SrKDD traffic is desktop, and that the audience is just developers. 80% adblocker install rate sounds about right for that kind of audience, but that isn't at all typical of regular traffic.
2021-07-10 06:46:41 I too am guilty of blocking GA trackers. I suspect developers block them at an even higher rate than the average web user
2021-07-10 06:43:40 https://t.co/m6mT8SrKDD does 400k MAU according to GA, so in theory the actual figure may be more like 2M MAU? Which in turn would be consistent with the 2020 Stack Overflow survey statistic that 6% of professional developers use Keras (there are 30M developers in the world)
2021-07-10 06:39:30 "What percentage of visitors block Google Analytics trackers?" is an interesting question. Over the past 7 days, my CDN is reporting 2.6M pageviews for https://t.co/m6mT8SrKDD, but Google Analytics reports 443k pageviews. This is roughly consistent with the 80% figure below. https://t.co/HEEJtZZRMY
2021-07-10 06:03:14 @MeysamAsgariC KerasTuner. A little-known fact is that it can be used for any model, not just Keras models. In fact, it has a built-in Scikit-learn tuner https://t.co/UNjX17D72A
2021-07-10 06:00:27 When you see a library that's a work of love, you can immediately tell. I've learned a lot from Scikit-learn in the past (Keras has many elements inspired by it) and I think I can learn a lot from JAX in the future
2021-07-10 05:58:03 ML libraries I've used in the past and liked (besides Keras &
2021-07-09 23:31:03 @gusthema The book doesn't cover TFHub, sorry!
2021-07-09 23:22:45 The second edition of Deep Learning with Python will be available in stores around September-October. Meanwhile, the code example notebooks are available on GitHub: https://t.co/QXQOV2YJ6g
2021-07-09 20:37:48 @absudabsu Most of science is based on explicative models. That's what makes them good models, and that's what makes science effective. But some fields rely more on descriptive models (medicine, biology, neuroscience...). Which is still useful for specialized purposes (like neurosurgery).
2021-07-09 20:35:31 @absudabsu Gravity is an explicative model (even though it does not answer the "why"): it explains the *how* and can be used to produce new predictions and simulations. A descriptive model would be the kind of model of the solar system we had in the 13th century, based on observation.
2021-07-09 15:14:05 RT @NOAA: Just in: #June 2021 was the hottest June on record for U.S. Nation has also experienced 8 #BillionDollarDisasters so far this y…
2021-07-09 05:44:53 Now, this is a pretty obvious statement and smart people have known this since the 1970s. Yet, I don't think too many people have thought through the implications down to their conclusion
2021-07-09 05:42:35 And not conducive to building a new system that isn't just a faded Xerox of the original. (some people did try to build new search engines that way -- didn't really work out)
2021-07-09 05:41:25 A 100% complete and accurate recording of brain activity during an entire lifetime -- or a 100% 1:1 brain simulation -- would be like recording millions of search results for different queries. A description, not an explanation.
2021-07-09 05:35:31 You will have accumulated terabytes of data about the system, and yet you will still be infinitely far away from what you could learn from a 3-paragraph explanation written by someone who actually understands the system.
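The Functional-API tweetorial above (2021-07-12) can be condensed into a runnable sketch. This toy model (layer sizes are illustrative, not taken from the tweet's image attachments) shows the graph-definition style plus two of the listed advantages: reusing an intermediate layer output for feature extraction, and round-tripping the model through its config:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# The Functional API is a graph-definition DSL: each layer call
# adds a node to a graph of layers.
inputs = keras.Input(shape=(784,))
features = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(10, activation="softmax")(features)
model = keras.Model(inputs=inputs, outputs=outputs)

# Advantage 2: internal nodes are accessible -- expose an intermediate
# layer output as an extra model output (feature extraction).
extractor = keras.Model(inputs=inputs, outputs=[outputs, features])

# Advantage 3: the model is a data structure, not bytecode -- it
# round-trips through a plain config dict into the exact same graph.
clone = keras.Model.from_config(model.get_config())

preds, feats = extractor(np.zeros((2, 784), dtype="float32"))
```

Advantage 1 (known input shapes) is what lets `model` reject, say, a `(batch, 100)` input with a clear error instead of failing deep inside a layer.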
2021-07-09 05:34:40 You will also have gained no context about what surrounds the system -- what search engines are for, or what kind of cultural, technological and economic forces shape the company that developed this one
2021-07-09 05:31:09 But you will still have no idea that these results are produced via the PageRank algorithm. In fact, you won't have learned anything useful about the system at all. You won't be able to create your own search engine. You won't even know what makes a search engine good.
2021-07-09 05:30:02 Here's a simple example to capture the difference between a set of observations and an explanation. Consider Google search.
2021-07-09 05:25:08 A good model isn't a description, it's an explanation. An accumulation of observations does not explain anything.
2021-07-08 15:21:33 RT @UNFCCC: The difference between 1.5°C, 2°C or 3-4°C average global warming can sound marginal. In fact, they represent vastly differen…
2021-07-08 13:13:33 RT @nytimes: In the Trump era, Facebook struggled with the role it played in his rise and in the spread of misinformation around the world.…
2021-07-08 02:12:03 @hardmaru Hyperparameter tuning? You're thinking small. You could be doing so much more... like architecture search. /s
2021-05-23 04:56:51 Everyone replying to this tweet like https://t.co/Kbzvc4gNth
2021-05-23 01:04:26 In the future everyone will try to pump their own cryptocurrency for 15 minutes
2021-05-22 23:02:37 @SumitGup I get a mix of different kinds for variety. I also don't buy just Philz
2021-05-22 20:53:16 My pantry is stocked with everything I could ever need https://t.co/SjgXzDI9z2
2021-05-22 15:26:32 I can relate because I've sometimes had to debug being unable to find Python in an environment
2021-05-22 15:22:22 Some exciting Python news https://t.co/kPQoyhawAF
2021-05-22 03:00:18 World-changing technology usually doesn't seem very impressive in its first iteration -- merely *intriguing*. Good things happen when you follow the gradient of curiosity
2021-05-21 23:20:19 @OptimalBayes This is a good reply
2021-05-21 23:11:07 I would rather press [] 50 times than actually type a command in the terminal
2021-05-21 22:30:11 RT @martin_gorner: It's now on YouTube: "Modern Keras Design Patterns" by @fchollet and @martin_gorner https://t.co/2VkillVKcS
2021-05-21 20:25:57 @KaamRaj1 No
2021-05-21 19:53:09 See also: trading by "reading the chart"...
2021-05-21 19:52:10 Noticing a pattern isn't quite how you predict the future. A random event can follow a pattern -- until it doesn't. If there's no likely causal explanation that ties your pattern to the event, then it's not a reliable model.
2021-05-21 18:23:41 @sam__goree Deep Learning with Python, first edition, chapter 9
2021-05-21 18:03:06 @SpaceRangerWes So for any specific task you can typically come up with models that are vastly simpler than the cognitive processes humans use to do the task. However they will be accordingly vastly less flexible/adaptable (the simplicity is enabled by vertical specialization)
2021-05-21 18:01:12 @SpaceRangerWes Turns out image classification is a lot easier than we thought and can be done without a high-level abstract understanding of an image. This holds true in general: human mental models are a lot more general and powerful than what's required for any specific task.
2021-05-21 17:28:35 Human mistakes are understandable because our own mental models match the mental models operated by other humans. Meanwhile, ML algorithms bear little resemblance to these mental models, even though they were trained from human-labeled data. They fail in inscrutable ways. https://t.co/QM0LXZPHym
2021-05-20 17:57:51 @alexdelapaz_ @jha01roshan If you want to do things from scratch, check this: https://t.co/tgk7o4bTkV For a ready-made solution, you can see the TF Model Garden
2021-05-20 16:55:09 New code example on https://t.co/m6mT8SrKDD: training a siamese network to learn image similarity embeddings, using a contrastive loss. https://t.co/NXQKAiqtFP
2021-05-20 15:21:34 It's kind of wild to see an entire political party so deeply committed to... combating basic public health measures? The motivation being... scoring some murky culture war wins in their minds?
2021-05-20 02:07:50 RT @sundarpichai: AI is powering a lot of the improvements we're making in @googlemaps, including a new feature that can help reduce the nu…
2021-05-19 21:21:43 @LucasEnkrateia So generalization involves *some* compression (a lot of it actually) but also a lot of work that's opposite to compression (writing down seemingly useless info that seems interesting / salient).
2021-05-19 21:20:56 @LucasEnkrateia Such notes would have been very compressed, obviously: they'd have much less info than the original movie. But they'd have a lot more info than the shortest possible form of the notes that could answer the original questions. These notes would be *generalizable notes*.
2021-05-19 21:19:50 @LucasEnkrateia Meanwhile, what if I ask you more questions later, and you try to use your notes to answer them? If your notes were the shortest possible, you can't. To answer them you'd have needed more extensive notes about what's important in the movie, not just the answers to the original Qs
2021-05-19 21:18:43 @LucasEnkrateia Consider a simple example: I give you a set of questions about a movie, and you watch the movie and try to answer the questions. During the movie you write notes. The shortest possible notes that enable you to answer the questions are just the answers to the questions.
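The contrastive loss used in the siamese-network code example above (2021-05-20) has a simple closed form: pull similar pairs together, push dissimilar pairs apart up to a margin. A minimal numpy sketch (the function name and margin value are illustrative, not taken from the example):

```python
import numpy as np

def contrastive_loss(distances, labels, margin=1.0):
    """Contrastive loss for pairs of embeddings.
    distances: Euclidean distance between the two embeddings of each pair.
    labels: 1 for similar pairs, 0 for dissimilar pairs.
    Similar pairs are penalized by d^2 (pulled together); dissimilar
    pairs are penalized only while closer than `margin` (pushed apart)."""
    d = np.asarray(distances, dtype=float)
    y = np.asarray(labels, dtype=float)
    return y * d**2 + (1.0 - y) * np.maximum(margin - d, 0.0)**2

# Similar pair at d=0.5, dissimilar pair at d=0.5, dissimilar pair at d=2.0:
losses = contrastive_loss([0.5, 0.5, 2.0], [1, 0, 0])
```

Note that a dissimilar pair already farther apart than the margin contributes zero loss, which is what keeps the embedding space from expanding indefinitely.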
2021-05-19 21:10:18 @LucasEnkrateia Now, obviously, generalization requires abstraction, which requires erasing irrelevant details, so your high-generalization system will be doing *some* amount of compression. But it will be storing lots of seemingly useless info as well.

2021-05-19 21:09:13 @LucasEnkrateia The more general point is that generalization requires you to store seemingly useless information at training time (info that doesn't help your training objective), that will become useful in the future (when generalization actually happens). That's the opposite of compression.

2021-05-19 21:08:43 @LucasEnkrateia There might be, but it will be by definition less optimal when applied to just English. You won't find it by optimizing for English.

2021-05-19 21:03:26 @LucasEnkrateia The most compressed model that does X is only capable of doing X. If it could do more then you could compress further.

2021-05-19 21:02:58 @LucasEnkrateia Because by definition compression discards all information that isn't relevant to the training goal, and by definition the training goal isn't what you want to generalize to (otherwise there is no generalization happening).

2021-05-19 21:01:59 @LucasEnkrateia Compression (with respect to a fixed training dataset) is by definition opposite to generalization. If you find the optimal compression dict to compress English Wikipedia it will obviously not be optimal for Spanish Wikipedia.

2021-05-19 16:26:01 Happening in 5 min: I/O session on Keras design patterns, by @martin_gorner and myself. https://t.co/KBErgTQwym https://t.co/IE8mjLohWN

2021-05-19 16:07:51 RT @TensorFlow: Happening now: What's new in ML Keynote! Responsible AI On-Device ML Run TF Lite Models on the web Microcon…

2021-05-19 16:01:51 New code example on https://t.co/m6mT8SrKDD: keypoint detection (with data augmentation & https://t.co/7hhGV2V0Z5

2021-05-19 15:55:27 @_brohrer_ ML is cognitive automation, so it needs to be provided with an original template to automate. Can't photocopy a document you don't have.

2021-05-19 10:56:34 People who lost 80% on BTC in 2018/2019 did not buy the latest bull run. That bull run required further mainstreamification of the scheme in order to happen. The new crash will burn an extended set of people, who won't come back. Etc.

2021-05-19 10:56:33 There's one reason why it will fail in the long run: while people have an ever-renewable thirst for these schemes, they remember when they get burned by a specific scheme, so the "asset" to buy needs to change from generation to generation.

2021-05-19 10:49:04 The mainstream rise of dogecoin really exemplifies this. You shill your token of choice because you want it to go up, and that's pretty much all there is to it. It's naked pump and dump, no longer backed by any fancy narrative that involves eventual adoption or value creation.

2021-05-19 10:47:18 Accurate, but one thing has changed in recent months: the mask of techno-babble &

2021-05-19 02:30:55 @georgeavet Maybe? Try it and let us know

2021-05-19 02:10:11 @axeljeremy7 Yes, it runs directly from Colab notebooks

2021-05-19 01:30:43 @FatMoth It can include local files that you specify, as part of the Docker image it creates

2021-05-19 00:48:54 Apparent conclusion from this poll: go try it, email me your feedback

2021-05-18 23:55:24 @togelius I think that's roughly accurate, but CS basics will still be taught under that label in schools / universities (not unlike math today)

2021-05-18 23:53:57 Have you tried TensorFlow Cloud? (https://t.co/cblGqxg94L)

2021-05-18 22:45:00 MLOps, now uncomplicated.

2021-05-18 22:44:31 TensorFlow Cloud has a new documentation site on https://t.co/jZhirxlNpY: https://t.co/thgTgcXtaP Add one line to your Colab notebook / Kaggle notebook / local script and start running it on multiple GPUs/TPUs in Google Cloud. No extra configuration needed.
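The compression-vs-generalization thread above uses the example of a compression dictionary tuned to English Wikipedia being suboptimal for Spanish Wikipedia. This can be demonstrated concretely with zlib's preset-dictionary feature; a minimal sketch with toy corpora standing in for the two Wikipedias (not code from the thread):

```python
import zlib

def plain_compressed_size(text: bytes) -> int:
    """Compressed size with no preset dictionary."""
    return len(zlib.compress(text, 9))

def dict_compressed_size(text: bytes, dictionary: bytes) -> int:
    """Compressed size when seeding zlib with a preset dictionary."""
    comp = zlib.compressobj(level=9, zdict=dictionary)
    return len(comp.compress(text) + comp.flush())

# Toy corpora standing in for "English Wikipedia" / "Spanish Wikipedia".
english = b"the cat sat on the mat and the dog sat on the log " * 20
spanish = b"el gato se sento en la alfombra y el perro duerme " * 20

# A dictionary derived from the English corpus...
en_dict = english[:100]

# ...typically helps the English corpus far more than the Spanish one:
en_gain = plain_compressed_size(english) - dict_compressed_size(english, en_dict)
es_gain = plain_compressed_size(spanish) - dict_compressed_size(spanish, en_dict)
```

The dictionary optimized for one distribution buys little on another, which is the thread's point: optimizing compression for a fixed dataset discards exactly the information you would need elsewhere.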
2021-05-18 19:32:05 RT @Google: Imagine a magic window, and through that window you see another person, life-size and in three dimensions. Project Starline is…

2021-05-18 17:19:40 New example on https://t.co/m6mT8SrKDD: node classification with Graph Neural Networks. Learn to classify the topic of a paper given its citation graph! https://t.co/1x7KZ8pvJr

2021-05-18 14:39:25 In the not-so-far future, a lot more jobs will involve programming. And, accordingly, programming will get a lot simpler than it is today.

2021-05-16 20:26:44 If you're spending your time quarrelling on Twitter trying to establish intellectual superiority, instead of building things that have a real impact in the world, then your takes are probably bad takes.

2021-05-16 20:23:48 If you care primarily about *being right*, then you'll be wrong a lot of the time. To end up getting things right, you need to care primarily about *taking the right actions* in order to reach *concrete goals*. The need to navigate reality is what guides you towards the truth.

2021-05-16 05:29:59 @jha01roshan It has image segmentation, attention, and Transformers. Not object detection though

2021-05-16 05:29:20 @its_me_niraj Late summer most likely

2021-05-16 02:00:32 Wait, what https://t.co/cVvuTVG55s

2021-05-15 21:24:39 @Sopcaja Yes, it does cover multi-GPU and TPU training!

2021-05-15 20:32:56 @Spectrusv It will go into production this summer, most likely

2021-05-15 20:22:15 @Breza I have no idea!

2021-05-15 20:13:12 When I set out to write the 2nd edition of Deep Learning with Python, I thought it would be roughly the same length, and about 50% new content. Now that the draft is done: it's almost 2x longer and it's 75% new content. Overall it's a lot more in-depth than the first edition.

2021-05-15 18:19:56 Technology doesn't matter. What matters is what you enable people to do. The solutions you unlock. And the effect they have on people's lives. https://t.co/NLn30tXW3X

2021-05-15 00:39:33 End-to-end productivity is a lot harder to achieve than just ticking feature boxes. It requires holistic product vision instead of design-by-committee, feature-driven development. It requires saying no a lot more often than you say "yes, we can add that".

2021-05-15 00:34:48 Developer tools should be about solving problems, not about features and specs. End-to-end productivity, not checkbox matrices.

2021-05-14 18:01:33 These tweets brought to you by weeks of newborn-induced sleep deprivation

2021-05-14 17:55:25 Environmental programming

2021-05-14 17:54:48 Instead of trying to steer yourself via "willpower", which is ineffective and can be quite painful, you should be able to design your everyday environment to guide you towards the goals you've set for yourself. I hope to see a new wave of products and services in this spirit

2021-05-14 17:51:16 Anyway, the fact that these services don't exist is a sign that people tend to overlook incentive engineering. Most product designers don't think about it. Most OSS maintainers don't think about it (even though it's what defines the success of a community project). Etc

2021-05-14 17:48:28 Banks, credit cards, and ISPs have the most potential here, given how much behavioral data they have and how indispensable they are.

2021-05-14 17:47:06 It doesn't have to be financial incentives, and it doesn't have to feel like a game. Just offer the user the ability to configure rewards and punishments they get to help them achieve their goals. Works best if there are no workarounds (e.g. if you just have that one credit card)

2021-05-14 17:45:04
- Credit card to regulate your spending habits or your diet
- ISP to regulate your time spent on social media
- Game consoles that regulate time spent
- Online learning sites that make you stick to a habit
- GitHub contributions

2021-05-14 17:42:08 On the topic of incentive engineering: here's an idea that needs more love: services where you can configure behaviors/habits you want to develop, that will reward you if you succeed and charge a fee if you fail.

2021-05-14 05:59:44 @NakramR Maybe some of them got deleted, I don't know.

2021-05-14 05:59:07 @NakramR It's all in quote tweets (lead dunks) and replies to them (follow-on by the audience). Dunking via reply is the poor man's dunk: you only do that if you have no audience of your own

2021-05-14 05:29:40 As for the "tech bro" part, no one who knows me would describe me as one, but having "Google" in your bio instantly earns you that label. Again, framing.

2021-05-14 05:27:20 A framing of my tweets I've seen maybe 5-6 times in the past few weeks: "dumb tech bro is trying to reinvent my field, let me immediately assert intellectual superiority". It's funny to me because I'm usually just posting whatever I'm currently thinking about -- spurs of the moment.

2021-05-14 05:24:35 Many people see Twitter as a game where the goal is to dunk on others in front of an audience and get likes as reward. It almost doesn't matter what you say -- it's all about how it gets framed.

2021-05-14 05:21:40 It's kind of hard to predict *which* tweet will anger people. The best filter is to ask: what sort of people are going to want to react? You don't want the answer to be "political activists", for instance

2021-05-14 05:20:01 The other day I tweeted something p mild about perception, and instantly a dozen angry psychologists popped up in my mentions. Today it was economists, after I said that ppl didn't pay enough attention to incentive engineering (I wasn't talking abt econ) https://t.co/brZUnuvhoe

2021-05-14 05:15:43 If you want to unlock the maximum level of aggressiveness and viciousness in your mentions, you have two options. The first one is to tweet about politics. The other one is to tweet something that seems to invite reactions from people in academia.

2021-05-14 01:33:21 An AI researcher would probably observe, "that field is just Collective Intelligence!". Which it also is. Cybernetics all the way down

2021-05-14 01:19:16 Some people will say, "oh, that's just economics". Or "that's just law", "that's just management", "that's just gamification", etc. I mean, it is all of these things, obviously. But that's myopic, like saying infrastructure is just roads &

2021-05-13 21:29:08 Incentive engineering may be the most important yet overlooked field of study

2021-05-13 21:27:52 The success of your organization comes from incentivizing value creation at scale in a sustainable way. Failure comes from the opposite -- incentivizing destructive or wasteful actions.

2021-05-13 21:27:07 All value derives from people doing valuable things (modulo non renewable natural resources). Thus you could say that all value derives from *creating incentives that get people to do valuable things* (modulo intrinsic motivation, which is rare).

2021-05-13 15:18:13 @simonw @pwang The ratio is 100,000:1. It's not unhealthy for folks to want to get rich as quickly as they can, but I'd rather have folks incentivized to create value than to participate in pyramid schemes (while destroying the environment)

2021-05-12 21:02:37 RT @DeepLearningAI_: We're thrilled to announce the launch of our much anticipated Machine Learning Engineering for Production (MLOps) Spec…

2021-05-12 18:15:58 RT @HelloPaperspace: We're so excited about AI researcher Ahmed Gad's latest video on using Mask R-CNN for object detection with Keras! Nex…

2021-05-12 17:17:22 A pattern you see at companies and in people's lives:
- Identify a problem/need
- Develop an inadequate solution
- Never fix it or replace it because that space is now occupied ("we already have a product for that...")
Keep exploring

2021-05-12 03:55:24 RT @docmilanfar: Thanks to @RisingSayak for doing this very nice implementation of our paper "Learning to Resize Images for Computer Vision…

2021-05-11 18:42:12 Created by @RisingSayak and based on a recent paper by Talebi et al.

2021-05-11 18:42:04 New code example on https://t.co/m6mT8SrKDD: learning to resize images. Use a small convnet to learn an optimal interface between a large image and a classification convnet that expects smaller images. https://t.co/4nOXX7tjRB

2021-05-11 18:24:15 The stuff *you* are made of isn't so much cells as it is information. Your cells are a disposable encoding layer, renewed over time. What gets propagated through spacetime isn't a specific set of cells, it's your information.

2021-05-11 18:19:13 The "stuff" the universe is made of isn't so much atoms as it is information.

2021-05-10 18:58:27 New code example on https://t.co/m6mT8SrKDD: semi-supervised image classification using contrastive pretraining, via SimCLR. For the semi-supervised learning enthusiasts out there https://t.co/B7brCxUjLc

2021-05-10 15:04:34 If you read something you disagree with on Twitter, consider for a moment that you're allowed to *not* hit the reply button to compose a note of insults. That's an option you have. But you do you.

2021-05-09 22:07:52 I was mostly thinking of writing and music, but I guess it applies to software, too: you won't save a mediocre product by adding more semi-adequate features.

2021-05-09 22:04:55 It's a lot easier to crank out more mediocre material than to stubbornly refine (or redo) what you have until it's great. But you don't make something great by accumulating mediocre content.
2021-05-08 22:09:37 @ludwig_stumpp @Ben75778555 You can get the regularly-updated draft version from the publisher, Manning. In practice 13 out of 14 chapters are done though they aren't all available yet

2021-05-08 07:13:54 @Ben75778555 You can check out the 2nd edition of the book (still a work in progress), which covers Transformers and other modern best practices!

2021-05-07 23:01:13 RT @CShorten30:

2021-05-07 21:18:22 New code example on https://t.co/m6mT8SrKDD: the node2vec model for graph representation learning. Very compact and easy-to-read implementation. Graphs are a hot topic these days! https://t.co/vBwiPhcrua

2021-05-07 20:28:25 On a personal note, it's been a fun week of not getting any sleep and getting profusely peed on a bunch of times, and I wouldn't trade it for anything else

2021-05-07 02:44:40 Influence is the ultimate currency, and influence requires trust, which in turn requires honesty and earnestness. Pandering &

2021-05-06 19:59:53 RT @mihaimaruseac: Also applies to APIs between components of a product. If everyone can call every other function you just get a lasagna s…

2021-05-06 17:47:05 I forgot how to write without using em dashes -- I need at least one every other sentence

2021-05-06 14:06:04 The fewer relationships in a system, the more robust & Which is why very sparse networks are so effective, and why topology-grounded abstraction has much greater generalization power than geometry-grounded abstraction.

2021-05-05 23:46:40 Important advice. You can always change people's minds, but first, you have to look at them as human beings, and stop assuming the worst. https://t.co/qivgFIYKuT

2021-05-05 22:25:51 @bradpwyble @SBMost @PessoaBrain This is like someone tweeting "wash your hands to prevent infections" and suddenly a bunch of doctors start dunking, "what an idiot, surgeons have known this for decades". Perplexing attitude...

2021-05-05 19:28:41 RT @jendeben: SCOOP: The U.S. will support a proposal to waive IP protections for Covid-19 vaccines, @AmbassadorTai tells @EMPosts and me.…

2021-05-05 18:00:46 There's a lot of power in regularly picking at the same problems for several decades. 3 traps: to get discouraged &

2021-05-05 02:21:29 @SBMost @bradpwyble @PessoaBrain This is funny because I was writing one a couple months ago for the 2nd edition of my textbook. Happy to send you the draft over email.

2021-05-05 02:14:20 @SBMost @bradpwyble @PessoaBrain When you're working w/ parametric models trained with gradient descent, it quickly becomes impossible to interpret the function of different modules via cognitive analogies. Attention makes models more expressive thru a pairwise multiplicative component. But is that "attention"?

2021-05-05 02:11:53 @SBMost @bradpwyble @PessoaBrain Curious to hear your thoughts (and happy to explain neural attention if you need). I strongly suspect that neural attention doesn't actually implement "attention" in the human sense (though almost all DL folks do believe that neural attention is in fact a model of attention)

2021-05-05 01:53:57 @SBMost @bradpwyble @PessoaBrain The fact is that hardly any ML model takes "the world" as an input (complete with time, cause &

2021-05-05 01:46:33 @SBMost @bradpwyble @PessoaBrain The current deep learning standard for implementing context-awareness is "neural attention" (cf Transformers), perhaps you know about it. It has very little in common with active perception though.

2021-05-05 01:44:57 @SBMost @bradpwyble @PessoaBrain You can take context into account without active perception. Active perception only becomes really useful in a dynamic world where it's possible to formulate &

2021-05-05 01:33:10 @SBMost @bradpwyble @PessoaBrain If your input is a static image that you're trying to classify, that's a very different setup than being an embodied agent immersed in a dynamic world subject to cause and effect. In the former case, processing all the information available in one go is actually more effective

2021-05-05 01:30:52 @SBMost @bradpwyble @PessoaBrain ML models don't attempt to emulate human cognition, and they're solving a different problem than embodied cognition in the first place, with different constraints and different degrees of freedom.

2021-05-05 01:29:02 @SBMost @bradpwyble @PessoaBrain It has simply not yet been shown to be necessary, or even useful. It actually seemed like a more attractive avenue when we knew less and our models performed worse.

2021-05-05 01:18:13 @SBMost @bradpwyble @PessoaBrain Cool, and I appreciate your Neisser reference. Active perception is still not a hot topic in AI today. I used to do some research in that area (active vision with an anticipative eye saccade model) in 2012, and back then these ideas had very little traction. It's trending up tho.

2021-05-05 01:05:17 @bradpwyble @SBMost @PessoaBrain I'm sorry, I wasn't aware I was only allowed to tweet novel ideas. Of course this is old and well-known. Yet many folks still haven't fully internalized it. Does everyone in your field behave like this (in public, at that) or is it just you guys?

2021-05-05 00:46:32 RT @SBMost: @fchollet Yes! I always liked Neisser's concept of a perceptual cycle. See his 1976 "Cognition &

2021-05-04 23:43:26 A common mistake that people on Twitter make (and get relentlessly punished for): trying to express any opinion about any topic

2021-05-04 23:36:12 @SquareZollo As you say, it's a cycle.

2021-05-04 23:35:45 To perceive something, you must first learn to expect it.

2021-05-04 23:34:44 A common mental model is to see perception as passive -- light hits the retina, generates a signal that hits the brain, boom, you "perceive" an image. Just like image data getting fed into a convnet. In reality, perception is an active skill that has to be learned.

2021-05-04 20:16:55 Not sensationalism, just acknowledging longstanding trends. It's actually very similar to the role of math in particularly math-heavy fields (like quantum physics). Almost every field uses math as a tool, but not so many could be said to be "mostly made of math". https://t.co/uhM9QC3nlq

2021-05-04 18:07:33 @curiouswavefn @yudapearl @hangingnoodles Simulations will play the primary role, and ML will often be a core component of these simulations. ML may not be the primary driver of this new retooling of these scientific fields, but it will be an important part of it.

2021-05-04 18:03:59 @curiouswavefn @yudapearl @hangingnoodles It's a way of saying that most of the work of a chemistry researcher will consist of writing and using software, and that strong CS fundamentals will become essential to performing this work. Stronger statement than e.g. "CS will be a tool for chem". SW will be the entire medium.

2021-05-04 04:25:56 @xamat Rather, I mean that CS will make up a critically large fraction of all science. Not unlike how linear algebra makes up a critically large fraction of quantum physics, for instance.

2021-05-04 02:46:33 But it does mean that, if you were a business executive in 2000, you should hire people who understand tech (including at top levels), and if you're a scientist today, you should make sure that you develop your CS chops (including ML).

2021-05-04 02:45:30 It doesn't mean that chemistry will be literally classified as a subfield of CS, or that Walmart will be literally classified as a tech company. Obviously...

2021-05-04 02:43:13 In the same way that "most cos will be tech cos" means that tech proficiency will be essential to staying in business: most of your operations will critically require tech. Walmart, AXA, FedEx, etc. are "tech companies".

2021-05-04 02:39:45 Anyway, "most science will be CS", just like "most companies will be tech cos", is a prediction you should take seriously, but not literally. It means that CS proficiency will soon be indispensable to staying relevant as a scientist: most of what you will do will require CS.

2021-05-04 02:04:37 Don't worry though, your domain expertise will remain very important. Just like how... uh... having a strong linguistics background is essential in natural language processing (formerly computational linguistics)...

2021-05-04 01:53:03 This tweet is infuriating many, apparently. Imagine the controversy if, in 2000, someone predicted that by 2020 most companies would be tech companies! Still true though. Good thing they didn't have Twitter back then :)

2021-05-03 21:52:29 Within 10-20 years, nearly every branch of science will be, for all intents and purposes, a branch of computer science. Computational physics, comp chemistry, comp biology, comp medicine... Even comp archeology. Realistic simulations, big data analysis, and ML everywhere

2021-05-03 05:16:44 @Plinz @Zhaey_ I see this idea as akin to the simulation hypothesis: fun, interesting, very implausible, but ultimately impossible to disprove.

2021-05-03 05:13:32 @Plinz @Zhaey_ To be clear, I don't endorse the antenna analogy (though I don't think it can be definitely discarded either). I have much more mundane views of consciousness, as being generated by the brain. I shared the analogy because it was interesting, thought-provoking, not because I believe it

2021-05-03 02:01:47 @Plinz @Zhaey_ If you strongly believe you understand consciousness, to me, that's a clear sign that you understand it even less than I do (this statement not directed at the original poster, just a general fact).

2021-05-03 02:00:58 @Plinz @Zhaey_ I do have several ideas of what consciousness is and how it works, but like everyone else's, they aren't backed by solid evidence, so I don't see the point in trying to assert these ideas as the one true explanation.
2021-05-03 01:59:37 @Plinz @Zhaey_ In general, though, when I engage in these discussions it is to point out mistakes or to state what we don't actually know. I do not try to push a specific theory of consciousness, because I don't have one.

2021-05-03 01:57:08 @Plinz @Zhaey_ This is funny to me, because not only do I frequently engage in debates about consciousness with people with differing opinions -- I also believe the opposite of the leading tweet. Consciousness is a natural phenomenon for which there should be a scientific explanation.

2021-05-02 20:21:12 RT @ankur310794: BART model using @TensorFlow Keras (@fchollet) from scratch in less than 100 lines. https://t.co/ZUAcUdOktv

2021-05-02 19:42:48 An individual mind is a small subroutine in a sprawling problem-solving, understanding-generation mechanism that extends across continents and centuries. Civilization, as we call it. It's this mechanism that implements human intelligence in its true form.

2021-05-02 19:38:12 Thankfully, we don't understand things solely via our individual minds, but via our "extended cognition" infrastructure, which is externalized and collective (science itself is a good example of extended cognition). We can understand systems much more complex than ourselves. https://t.co/Togt8WyLC4

2021-05-02 19:18:17 In general, humans tend to overestimate how much they understand about natural systems, because they can't perceive most of the complexity of these systems. We're surrounded by an ocean of complexity, and because we manage to paddle on its surface, we believe we're in control.

2021-05-02 19:14:11 No one today understands consciousness. It's not just that we can't recreate it, it's that we don't even understand its most basic properties -- what it is, what it isn't, what its function is. If someone tells you they understand it, they're deceiving themselves.

2021-05-01 18:36:25 My wife and I are thrilled to announce the birth of our son Sylvain, earlier this week. Sylvain and his mom are both doing great. Feeling so blessed :) https://t.co/9wVeegQHqj

2021-04-26 19:29:20 RT @sundarpichai: Devastated to see the worsening Covid crisis in India. Google &

2021-04-26 19:02:58 RT @RMac18: After we published a story on an internal report detailing Facebook's failures in preventing the Stop the Steal movement, the c…

2021-04-26 00:58:07 RT @RisingSayak: New #Keras example is up on *consistency regularization*, an important recipe for semi-supervised learning and tackling d…

2021-04-25 19:25:14 New code example on https://t.co/m6mT8SrKDD: training an image classification model with consistency regularization for robustness against data distribution shifts. Created by @RisingSayak. Super clear and concise! https://t.co/kk4fhoJbUB

2021-04-25 17:34:44 I guess I'd summarize it as, being a "deep learning expert" in 2021 is like being a "medicine expert" in 1800. You know a lot less than you think, and most of what you think you know is wrong. Just keep learning and experimenting, and don't play stupid status games.

2021-04-25 17:28:26 As for me, I'm just someone who's been trying to learn as much as possible (not just about AI). That's how I'd define myself: someone who gets excited about stuff and learns about it. If there's an "expert threshold", I hope I never reach it.

2021-04-25 17:27:05 If someone tells you they're a top expert, a pioneer, the main thing they're an expert at is playing status games. The same people will probably also try to demean those they feel are in competition with them, because that's how status games work.

2021-04-25 17:25:32 In general, I'm also not a fan of the idea of an "expert". It makes it sound like there's some threshold of knowledge beyond which you know it all, you've made it (perhaps the threshold is when you reach full professorship). I don't think that's how it works.

2021-04-25 17:24:23 Consider that, not long ago, most AI experts knew for a fact that neural networks were a failed avenue. Consider that, in 2013, most of the top names in computer vision were saying that the nascent success of DL might be just a fluke. And remember the debates about local minima?

2021-04-25 17:23:45 Not only that, but when I chat with experts, I'm often surprised by how few of them seem to have a clear mental model of what DL is and how it works. In fact, many big-name researchers often say things that are manifestly untrue and easy to disprove!

2021-04-25 17:22:47 Besides, I'm not sure that "deep learning experts" exist. People with the highest h-index can't write a GPU kernel or design a DL ASIC. Nor could they win a Kaggle competition. Nor, for the most part, write reusable code (which is really the core of DL).

2021-04-25 17:22:14 I don't consider myself a deep learning expert by any means. There are still a lot more things I don't know than things I know (it's not even close). I've only been working with neural networks since 2009, which is a lot less than many of you.

2021-04-24 19:27:13 New code example on https://t.co/m6mT8SrKDD: the Perceiver architecture for image classification. Super clean and readable code. https://t.co/OJXX4jHQDV

2021-04-24 17:46:21 Debugging is central to the way people experience a framework. It's what we spend most of our time doing. And the clarity of the feedback provided by the framework during debugging is what minimizes time to solution and maximizes developer happiness.

2021-04-24 17:44:21 I'm starting up an effort to improve the TensorFlow debugging experience. If you have obscure stack traces you want to share, or if you have ideas/suggestions, send them to my work email.

2021-04-24 05:38:06 Some people are like birds that were raised by mice. Only when life forces them to jump do they realize they can fly. Spread your wings and soar.
2021-04-23 21:25:35 Causation causes correlation and correlation is correlated with causation

2021-04-22 16:14:31 Whenever I come across the words "glenn greenwald" I picture a lush forested valley and I keep scrolling

2021-04-22 15:21:04 @kylebrussell Not wanting the status-signaling gaud is actually better than having it

2021-04-21 18:06:32 RT @rforcano: Interesting interview to @fchollet, creator of Keras. I agree with him that the intelligence of AI systems must be measured b…

2021-04-21 16:53:44 RT @isaacstonefish: Just how dependent is Bitcoin on Xinjiang? When a single coal mine in Xinjiang flooded last weekend, that halted more t…

2021-04-21 15:47:19 @lwhittle7 @TheSequenceAI Maybe? No immediate plans.

2021-04-21 15:20:03 I answered a few questions from @TheSequenceAI about the usual topics -- Keras, what it means for machines to be intelligent, ARC, etc. If that sounds interesting to you, check it out :) https://t.co/vrFu14dtrL

2021-04-21 04:34:51 @engexplain You don't want "no code". Code is the point. You want simpler code.

2021-04-21 04:33:59 This is probably my biggest takeaway from my time in the tech industry. There are lots of smart people out there who can make almost anything possible. But very few people can make something simple.

2021-04-21 04:32:34 Technologists shouldn't just make things possible. They should make them simple. In many ways it's far more difficult. To make something possible you just need to be clever. To make it simple you need vision. Intelligence is common, vision is rare.

2021-04-21 04:29:37 There's a lot of value to be created just by uncomplicating things that people have been overcomplicating. Programming is a prime example. Code is a powerful tool, and anyone can learn to think in code. But programming today is far too complicated.
2021-04-20 20:33:42 @AdamSinger Social networks will come and go, but I expect Twitter will endure in the long term, because it's pretty close to the final form of the Internet -- a real time feed of pure information / noise

2021-04-20 20:13:06 "I Rented Every TPU on Google Cloud for a Day" "I Melted my Friend's RTX and Bought Them a DGX" "I Got 100 Reviewer 2s to Review my Paper and I Applied Their Feedback"

2021-04-20 20:06:30 We need a MrMLBeast YouTube channel. "I Trained a 10 Trillion Parameter Model to Memorize Wikipedia" "I made ResNet50 Converge on ImageNet with a 0.000001 Learning Rate"

2021-04-20 17:04:54 @carrigmat @Azarkhalili @huggingface That's great to hear! If you need tips, have a question, or would like a quick code review, please feel free to reach out :)

2021-04-17 19:50:03 Life is short, so waste it discerningly, like a lazy Saturday morning with the family

2021-04-17 17:17:18 @Rubenia_Borge You should definitely not engage with them, but the fact that haters and trolls abound should not stop you from expressing yourself

2021-04-17 16:17:33 Some people just really want to hate other people and will find any reason to get angry, no matter what was said or how little they know about it https://t.co/SFKtACem7y

2021-04-17 15:31:44 I don't normally go on podcasts, but I made an exception for @MLStreetTalk, and I wasn't disappointed. The discussion was very fun and interesting, and the production quality of the video is incredible. Check it out! https://t.co/XtSrGKBFQk https://t.co/n0n25q82Xe

2021-04-17 07:29:28 @MNoNamer For English speakers reading this: in this context, it would translate as "how can I get rid of you".

2021-04-17 07:05:56 @MNoNamer That's not what it says. It says . The picture is a joke, you see. People didn't find poor Kairu very helpful in practice.

2021-04-17 06:44:33 Very niche Microsoft fact: the Japanese edition of Office didn't have Clippy. Instead, it had a cute dolphin character named Kairu (inversion of "iruka", dolphin in Japanese) https://t.co/HtDHDUJNUF

2021-04-17 00:01:20 @ChrSzegedy @DavidSHolz That's a cool idea. Kinda like OSS donations, but linked to specific features.

2021-04-16 22:25:27 @DavidSHolz Distributed small orgs work best when each one can occupy a niche for which there is a reasonably-sized market. So you could build billion-dollar highrises with small orgs. You need a big org when you're building very large systems made of components for which there is no market.

2021-04-16 22:17:51 @benedictevans This is one of those situations where a casual consumer of technology will immediately see the correct answer, but a thinker attuned to constant waves of technological disruption will be biased towards the wrong answer

2021-04-16 22:09:16 @DavidSHolz Checks out empirically for most "innovative" software products, but scale also enables new kinds of projects. Communication scales logarithmically but infrastructure scales super-linearly. You couldn't build a 5th generation fighter jet with 100 orgs of 100 people.

2021-04-16 17:16:20 I always love to see Keras getting used in service of interesting problems in biology, physics, or in this case, astrophysics. Classifying the morphology of 27 million galaxies from the Dark Energy Survey dataset: https://t.co/sa9JiwMIXw

2021-04-16 15:02:45 RT @MLStreetTalk: We spoke with @fchollet about neural program synthesis, the manifold hypothesis, type 1 and type 2 generalisation, the me…

2021-04-16 04:17:25 @wimlds I see. I'm familiar with his work, but not with the man. I fully agree that toxic and abusive people should not be given prestigious awards. That was also my first thought when I saw Aaronson had received the ACM award. Very poor judgement.

2021-04-15 23:00:19 A good framework gets the tedium out of the way when it's convenient, but gets itself out of the way when you want to step in.
2021-04-15 16:47:25 @rama100 Causal inference will always be useful because it generalizes much more broadly than input-> 2021-04-15 15:38:11 RT @mrdbourke: Big fan of @fchollet's vision for the future of deep learning frameworks Remove the barriers between researchers, data scie… 2021-04-15 15:31:32 @MikeIsaac Is there one for beans? 2021-04-14 03:26:40 As far as I can tell, the only thing preventing crane height explosion in practice is the friction caused by the extensive paperwork you need in order to authorize each stage of crane building. 2021-04-14 03:19:05 To build a highrise tower, you use a tall crane. But how do you build that tall crane? Not enough people realize this: you use a smaller, portal crane. A crane can bootstrap taller cranes, and so on, recursively, inevitably leading to a catastrophic crane height explosion... 2021-04-13 17:13:17 Many believe that the field of AI is the answer to the question "how can we create artificial (human) minds?". That's also the question that led me here. But today, I think AI is more pragmatically the answer to the question: "how can we make software do more?" 2021-04-13 00:23:06 After last year's apocalyptic wildfires, California is heading towards something even worse this year. https://t.co/nLV2gjN3YV 2021-04-12 23:00:36 New code example on https://t.co/m6mT8SrKDD: Self-supervised contrastive learning with SimSiam, created by @RisingSayak. Check it out! https://t.co/eZtKfgtOQa 2021-04-12 18:27:50 RT @WeAreInevitable: The four trends and where @fchollet sees #Keras and #Tensorflow growing in the next five years! #GTC21 #ExplainableA… 2021-04-12 18:14:24 RT @GuglielmoIozzia: @fchollet on stage now at the #NVIDIAGTC 2021 #keras #TensorFlow #DeepLearning #Python https://t.co/WwiwE0AKM2 2021-04-12 18:11:43 RT @WeAreInevitable: Up next for us at #GTC21 is "Keras and TensorFlow: The Next Five Years [S31925]" by @fchollet. 
https://t.co/ZPSEw3bV… 2021-04-12 16:49:22 RT @issielapowsky: I actually gasped while reading this. A deep dive with receipts looking at how Facebook overlooks inauthentic behavior a… 2021-04-12 16:28:55 Doing everything your users request is the shortest path to ruining your product -- especially when these users come from a legacy product and just want the new thing to be like the old thing 2021-04-11 06:08:54 The best accuracy achievable without looking at the inputs (i.e. just by learning the label distribution) can be a pretty good initial baseline for a difficult problem. 2021-04-11 06:06:55 The machine learning equivalent: shuffle the labels in your test dataset (so they don't match the test inputs anymore, while keeping the same class distribution) and rerun evaluation. If your accuracy is still as high as before, you have a problem 2021-04-11 06:03:51 If you modify a complex piece of code, and your tests pass on the first try, you should immediately proceed to break the code in an obvious way and rerun the tests, to check that you're actually testing what you think you're testing. 2021-04-11 00:53:46 A downside of NFTs, for me: I've had to unfollow various artists who went from posting cool art & 2021-04-10 17:18:50 @neurobongo Wait until you meet salespeople at ML startups 2021-04-10 17:14:55 I want to write code that feels like art and make art that thinks like code 2021-04-10 01:27:24 @fulhack Ensembling 2021-04-09 18:14:16 Building it was an extremely smart investment in the future, that has paid for itself countless times over via the economic growth it has unlocked & 2021-04-09 18:11:47 The Tokyo subway system, mostly built in the 1960s and 1970s, remains one of the most efficient and convenient transportation solutions you'll find anywhere -- not to mention the insane scale at which it operates. 
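The no-input baseline and the label-shuffling sanity check described a few tweets above can be sketched in a few lines of NumPy. This is a hypothetical, deliberately broken constant-output "model" used only to illustrate the check; the data and the `accuracy` helper are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical imbalanced binary test set: ~90% of labels are 1.
y_true = (rng.random(1000) < 0.9).astype(int)
# A broken "model" that never looks at its inputs: always predicts 1.
y_pred = np.ones(1000, dtype=int)

def accuracy(labels, preds):
    return float(np.mean(labels == preds))

# Baseline 1: best accuracy achievable without looking at the inputs
# is simply the frequency of the majority class (~0.9 here).
majority_baseline = max(np.mean(y_true), 1 - np.mean(y_true))

# Baseline 2: shuffle the test labels (same class distribution, but no
# longer matched to the inputs) and re-evaluate.
y_shuffled = rng.permutation(y_true)
acc_real = accuracy(y_true, y_pred)
acc_shuffled = accuracy(y_shuffled, y_pred)
# If accuracy barely drops after shuffling, the model isn't using the
# inputs. For a constant-output model the two are identical: a red flag.
```

For this degenerate model, `acc_real` and `acc_shuffled` come out equal, which is exactly the symptom the tweet warns about.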
2021-04-09 00:30:31 For any part of the human experience, you can find a large segment of people who experience it completely differently from what you take for granted. Take pop music. Most people who listen to American pop music don't understand the lyrics, and thus experience it very differently 2021-04-08 20:23:17 Great tutorial by @A_K_Nain https://t.co/gd6GTM80pd 2021-04-06 16:58:56 @shitpost9000 What would you change? How can we make it better? 2021-04-05 16:49:07 It's a good day when APIs are front page news 2021-04-05 14:45:39 RT @JuddLegum: 1. BREAKING @Facebook pledged to suspend political donations for the first 90 days of 2021, then donated $50,000 to @RSLC,… 2021-04-04 02:31:41 @mark_riedl Looks like we need to tune our learning rate 2021-04-03 19:09:55 The core of teaching is empathy: explaining well means fully putting yourself in the shoes of the learner. 2021-04-02 03:11:10 @kumar_s_s When it's ready! 2021-04-02 02:42:35 Since folks are asking: the book in question is the second edition of my deep learning textbook. It's less of a revision and more of a complete rewrite, though. Now back to writing... 2021-04-02 00:15:55 I'm now over 80% done with the draft of my book. Last mile... 2021-04-01 17:41:56 We must focus on the process more than the outcomes -- which are more often than not determined by luck and other factors outside of our control. Do the best job you can, and if it doesn't work out, move on. If that's your bar for success, then your success is in your own hands. 2021-03-31 23:13:26 Congratulations to Jeffrey Ullman and Alfred Aho on a very well-deserved prize. https://t.co/TqEzXRMoNJ 2021-03-31 17:00:16 @clearestblue Tune the dropout rate, the choice of whether to use LSTM or GRU, and whether to use a second layer or not. Don't tune the learning rate 2021-03-30 15:52:51 The two kinds of ML profiles: those who read "GA" as "Genetic Algorithm" and those who read "GA" as "Generally Available" 2021-03-30 04:12:32 Shocking and shameful. 
There are no words. This hits close to home, too. I once worked at an office on this street, literally on this block. I've walked on this sidewalk hundreds of times. Help those who need your help. https://t.co/LMZ9zJ5iRu 2021-03-29 20:39:48 @mark_riedl Greed + technology will suffice 2021-03-29 16:12:39 @michaelhood Yes, there is, courtesy of the former president. https://t.co/FtsvvLG8Yo Looks similar to what you'd normally get from a surveillance aircraft 2021-03-29 16:05:52 Modern military intelligence satellites have a resolution of 5-10cm per pixel (2-4 inches), so most things are big enough that you can see them from space. "Wow, that's a big tomato" "Yep, it's so big that you can see it from space" 2021-03-28 22:37:40 RT @Datagraver: Today the full bloom of the Cherry Blossom Flowering was announced for Kyoto. This day has been recorded since 812 in Kyoto… 2021-03-28 16:52:12 RT @svpino: Growing hack: contribute to open-source. @hazemessamm and I put together a cool example that shows how to use neural networks… 2021-03-28 00:53:03 RT @sacca: 15 years ago, I co-led a team trying to give 100% free Internet access to all of San Francisco starting with the poorest neighbo… 2021-03-27 19:44:36 There are two kinds of software developers: those who already use deep learning, and those who will use deep learning next. 2021-03-27 17:42:05 @kingaafaq Not at this time but this is on the roadmap for AutoKeras 2021-03-27 16:43:45 Keras turns 6! Congrats to the Keras team at Google and to the Keras community around the world. You made it what it is today! https://t.co/KU3dOGTt1U 2021-03-27 15:43:04 RT @fchollet: New code walkthrough on https://t.co/m6mT8SrKDD: using a siamese network to learn to estimate how similar two images look lik… 2021-03-26 22:32:44 New code walkthrough on https://t.co/m6mT8SrKDD: using a siamese network to learn to estimate how similar two images look -- trained on the "Totally Look Like" dataset. Super readable and nicely explained. 
Created by @hazemessamm & Check it: https://t.co/9wruW7oyDL https://t.co/X7RLmsxiEC 2021-03-26 20:31:44 The "how" stays at the level of superficial observations. The "why" gets to the heart of the system. It requires a full understanding not only of the system itself, but of the context in which it lives. It requires you to follow the thread of purpose that drove its emergence. 2021-03-26 20:28:47 A classic example is neuroscience: it is very much in the business of asking, "how does the brain work?", and it has no power to answer the actually important question, "why does the brain work?"... 2021-03-26 20:26:11 In the case of a piece of music, "how does it work?" will make you look for the key, the different voices, the rules. That's the easy part. "Why" leads you to ask what exactly about the piece makes you feel the way you feel. It will require you to understand your own mind. 2021-03-26 20:24:27 In the case of deep learning, "how does it work?" will make you explain backpropagation and matrix multiplication. But "why does it work?" leads you to the structure of perceptual space. 2021-03-26 20:21:07 When smart people are presented with something new, they tend to ask, "how does it work?": how is it structured, how was it made? But the more important & 2021-03-26 19:36:47 Without endings and obsolescence, there would be no renewal, only stagnation. 2021-03-26 19:34:55 The wonder of endings mirrors the wonder of beginnings. Only a fan with no sense of storytelling and beauty would want their favorite show to keep airing new episodes forever. Artists know when to end. Which leaves room for the next story, the next generation. 2021-03-26 17:49:05 Made a new tune yesterday night. Slightly epic vibe. https://t.co/QtLRJjp2Zx 2021-03-26 03:53:25 Meanwhile, in Europe... https://t.co/TMDT7Iv7FW 2021-03-26 03:25:23 Picking new books to read is how you turn the steering wheel of your mind. Better done purposefully. 
2021-03-24 22:45:24 @mrsfr0g As a techie: you start an NFT platform As a VC: you invest in NFT platforms As an artist: you get rich people to buy your NFTs As a rich person: you buy hype-y NFTs early enough that you can resell them to the next person at a profit In all cases success relies on maximizing hype 2021-03-24 22:37:58 @AndreiDeev1 This could be achieved in a much more efficient manner though. But it wouldn't work without the tech hype 2021-03-24 22:33:32 @AndreiDeev1 In a way they're a subsidy from rich people to semi well known artists 2021-03-24 22:29:43 There are two kinds of NFT takes: People who stand to benefit from NFTs think they're great and promote them every way they can People who don't, think they're silly and wasteful It reminds me of takes on tax increases... 2021-03-24 16:09:25 As a result, the deception factor in deep learning papers is often quite high 2021-03-24 16:08:37 Research fields come in two flavors: those where your career progresses when you actually get something right (reality-grounded), and those where it's enough to create the appearance of getting something right (belief-grounded). Deep learning research is somewhere in between 2021-03-24 16:03:49 When your success is determined by objective reality -- user adoption, economic viability -- you're incentivized to deliver. When your success is determined by convincing people that your ideas are right and important, you're incentivized to deceive... 2021-03-23 22:31:50 @Lee__Drake You can load the output of your generator in a NumPy array then. Anything that doesn't fit in memory is likely too large for architecture search anyway 2021-03-23 22:15:44 @gusthema If you already have feature vectors, you could try StructuredDataClassifier 2021-03-23 22:14:58 @Lee__Drake Use https://t.co/oiMJsLvdrt datasets or just numpy arrays 2021-03-23 21:56:12 You get access to the resulting Keras model, so you can iterate on it manually. 
Great way to establish a solid baseline on a new problem (as long as your dataset is small enough for architecture search to be tractable). 2021-03-23 21:55:35 Did you know you could use AutoKeras to quickly develop simple image & It will do architecture search and hyperparameter tuning for you in the background. https://t.co/CZCuFByJfu https://t.co/3d644bgucK 2021-03-23 15:21:53 Past a certain level of complexity, every system starts looking like a living organism. 2021-03-23 02:18:34 https://t.co/p6ZQdfzJEN 2021-03-22 23:12:33 @ulusdd @Grady_Booch It's almost as if there were a metaphor for something in there 2021-03-22 22:17:52 @Grady_Booch The reason old Rome was not, in fact, eternal, is because the romans didn't think of drawing sufficiently high-resolution maps of their city. 2021-03-22 03:29:18 I'm happy with today's track as well. But unfortunately I don't feel like I'm learning fast enough. 2021-03-22 03:28:43 So far this track is the one I enjoyed making the most (this is from a few months ago). I want to make more like it. https://t.co/5Zj0n50yuQ 2021-03-22 02:13:33 A fact of more practical importance: as someone who uses YouTube daily, YouTube Premium is easily the best money I've spent in the recent past. Not sure why I never tried it before... 2021-03-22 02:03:34 @Grady_Booch I don't know what that is... 2021-03-22 01:59:44 I will try to never tweet about AGI (or the singularity, etc) again, as the topic is utterly toxic. I might as well start tweeting about religion 2021-03-22 01:58:33 While people were arguing about AGI in my mentions, I made a new tune -- first one in a while. https://t.co/mZt1QBYrcD 2021-03-21 20:52:00 @flantz @togelius @ESYudkowsky Even same demographics and similar profile pictures. Sometimes I wonder if they aren't the same people. 2021-03-21 20:51:29 @flantz @togelius @ESYudkowsky When I was 15 I used to debate creationists online to teach myself English. 
All of my interactions with proponents of intelligence explosion and the Singularity have been nearly identical to my past interactions with creationists. Same arguments, same rhetorical style, ... 2021-03-21 20:50:24 @flantz @togelius @ESYudkowsky We don't know for sure where the universe comes from. Therefore it is more modest and by default more plausible to accept that the Great Spaghetti Monster may have created it. 2021-03-21 17:41:16 How much louder than a whisper is a conversation? 10x? No, more like 1000x. How much brighter is it outside in the sun compared to inside your house? 10x? No, more like 1000x. 2021-03-21 17:34:43 Human perception of sound, brightness, pain, etc. is always on a logarithmic scale. It pays close attention to small signals and suppresses high-power signals. The value of a signal is mostly in the information it carries, not its intensity. 2021-03-21 03:25:04 I think understanding how music works and understanding how the mind works are two closely related endeavors -- understanding the mind is only slightly more general. 2021-03-20 22:42:58 Personally I don't like to make such predictions myself, since it would require extrapolating from a series of past milestones, and at this time we have no milestones that point in the direction of general intelligence. It would be data-free extrapolation, i.e. making things up 2021-03-20 22:40:08 Quite a few people who got into deep learning in 2015-2016 were thinking at the time that human-level general intelligence was 5 years away. Anecdotally, it looks like around half of them still think it's 5 years away today 2021-03-20 19:38:53 @MarioLoisG Sampling from a latent space, in itself, is cognitive automation. But you could use it to create a cognitive assistant. The distinction between the two is more one of application than a functional difference in their engine. 2021-03-20 05:39:17 @fadibadine Google search, certain recommender systems, that sort of thing. 
Or image captioning systems for visually impaired people. 2021-03-20 04:35:14 @khoomeik Neurosymbolic models, aka "writing software by hand to do the things DL can't" is already what everyone is doing to address this. It works to a large extent, but it is very labor intensive. 2021-03-20 04:33:14 @danielggold Approximately 0%. Deep learning models process floating point data. The fact that these floats are discretized (on 16 or 32 bits) does not account for any meaningful part of the generalization problem in deep learning. 2021-03-20 03:37:49 The reason why is that parametric models trained with gradient descent make it easy to automate something, but have little ability to deviate from the patterns they've learned. Meanwhile, the real world is full of surprises, and handling it requires the ability to adapt. 2021-03-20 03:31:11 Every app demo based on GPT-3 follows this pattern. You can build the demo in a weekend, but if you invest $20M and 3 years fleshing out the app, it's unlikely it will still be using GPT-3 at all, and it may never meet customer requirements 2021-03-20 03:27:16 Autonomous driving is the ultimate example. You could use deep learning to create an impressive self-driving car prototype in 2015 on a shoestring budget (Comma did exactly that, using Keras). Five years and billions of $ later, the best DL-centric driving systems are still L2+. 2021-03-20 03:22:52 Deep learning excels at unlocking the creation of impressive early demos of new applications using very little development resources. The part where it struggles is reaching the level of consistent usefulness and reliability required by production usage. 2021-03-19 22:01:08 The last kind is cognitive autonomy: creating artificial minds that could thrive independently of us, that would exist for their own sake. Today and for the foreseeable future, this is the stuff of science fiction. 2021-03-19 21:59:58 The second kind is cognitive assistance: using AI to help us make sense of the world. 
AI to help us perceive, think, understand. I believe this is where the true potential of AI lies. Today, some applications of ML fall into this category, but they're few and far between. 2021-03-19 21:58:43 There are three kinds of AI we could be making. The first kind is cognitive automation: encoding human abstractions in a piece of software and using it to automate tasks normally performed by humans. Nearly all of current machine learning & 2021-03-19 19:26:44 Submit your projects to the #TFCommunitySpotlight program! https://t.co/V8U8X66r4y 2021-03-19 18:38:17 I truly believe that most of our mistakes aren't mistakes, but learning experiences. The only real mistake you can make is failing to learn and adapt. 2021-03-19 13:52:35 New code example on https://t.co/m6mT8SrKDD, from @RisingSayak: using RandAugment from the imgaug library in a https://t.co/oiMJsLvdrt pipeline, to train more robust image classification models. https://t.co/lc7ECmQvbR 2021-03-19 02:33:20 RT @ScottDuncanWX: We just observed a staggering +20.4°C in Iceland today! The weather pattern is perfect for delivering exceptional w… 2021-03-19 00:45:59 This reminds me of the time I reimplemented StyleGAN in Keras and I had to add a --make_it_more_awesome=True flag 2021-03-19 00:40:47 When you rewrite stuff in Rust, you add a --load_fast=True flag, apparently https://t.co/l4L7XgyHtt 2021-03-18 03:40:42 Spell A with a Д or R with a Я one more time 2021-03-18 03:36:15 The absolute worst typographic crime isn't Comic Sans, Papyrus, or bad kerning. It's trying to use elements of a non-latin script (like Cyrillic or katakana) to imitate latin characters 2021-03-17 13:24:24 You cannot break free from the past: it will always be there, one step away. You can only reconcile yourself with it. And you can only do that by focusing not on the past, but on the present, and accepting the forks in the road that led to it. Accepting loss, irreversibility. 
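The "1000x louder than a whisper" point about logarithmic perception a few tweets above checks out arithmetically if you assume typical sound levels of roughly 30 dB for a whisper and 60 dB for a normal conversation (those reference levels are an assumption of this sketch, not from the tweets):

```python
# Decibels are a log scale: every +10 dB multiplies acoustic power by 10.
def power_ratio(db_a, db_b):
    """How many times more acoustic power level db_a carries than db_b."""
    return 10 ** ((db_a - db_b) / 10)

# Conversation (~60 dB) vs whisper (~30 dB): a 30 dB gap is 10**3 = 1000x.
ratio = power_ratio(60, 30)
```

The same arithmetic covers the sunlight example: full daylight is on the order of three decades of illuminance above a typical indoor room, hence "more like 1000x".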
2021-03-17 01:48:19 RT @ecsquendor: Does @fchollet think that deep reinforcement learning is "intelligent"? No he doesn't. And he's right https://t.co/zhB5JnO… 2021-03-16 23:59:24 @Rukawa_SlamDunk Thanks Twitter 2021-03-16 23:31:26 The evolution of an ideogram, as seen on the walls of Happy Lemon (boba shop). Traditional Chinese has the most detail, simplified Chinese has the fewest strokes, and Japanese represents an intermediate state of simplification... https://t.co/vRpp8sDfmu 2021-03-16 20:32:55 @Abel_TorresM @ecsquendor It would still not capture other aspects of human cognition, like perception, behavior, goal-setting, and so on. Any intelligence benchmark should be an ongoing effort, to be refined as the flaws of the previous iteration become more apparent. 2021-03-16 20:30:44 @Abel_TorresM @ecsquendor Solving the current iteration of ARC: not at all, as that would imply ARC is a perfect benchmark, which it isn't. Being able to solve *arbitrary ARC tasks* generated on demand: yes, that would be human-level intelligence, although only in terms of abstract fluid intelligence. 2021-03-16 15:19:26 RT @DavidZipper: BREAKING-- Congress just released text of the “EBIKE Act," which would offer a refundable tax credit of up to $1,500 for a… 2021-03-16 03:12:42 Unpopular opinion: serif fonts are more readable than sans serif fonts due to characters being more distinctive and having more visual anchor points. I read marginally faster with plain serif fonts like Times New Roman. 2021-03-16 01:20:23 @quassy7 This isn't possible at this time. 2021-03-15 23:27:36 There are multiple distribution strategies available. Most of the time, you will use MirroredStrategy (replicates your model on each available GPU, sends a sub-batch to each replica at every training step, and keeps the replicas in sync after processing every step) or TPUStrategy. 2021-03-15 23:26:02 Tweetorial: high-performance multi-GPU training with Keras. 
The only thing you need to do to turn single-device code into multi-device code is to place your model construction function under a "distribution strategy" scope, like this. https://t.co/hfBSGwcRSm 2021-03-15 15:20:58 RT @PyImageSearch: New tutorial! Mixing normal images and adversarial images when training CNNs - #Keras and #TensorFlow implementation… 2021-03-15 05:44:55 @ChrSzegedy I don't see Inception? 2021-03-14 22:25:48 @petewarden @mat_kelcey @dansitu Makes sense that rule-based estimates would be way off. But I'd be very optimistic about the prospect of training a DL model to estimate the performance profile of a model given its graph, trained on an artificially generated dataset. 2021-03-14 18:41:32 Two words: Quantum NFTs. VCs, please form an orderly line 2021-03-14 16:45:05 There's no bug that can't be tracked down and fixed -- as long as you have the Interstellar soundtrack in the background. 2021-03-14 03:42:49 Professor Youtube is one of the best teachers in the world for self-motivated students -- in any domain 2021-03-13 23:56:47 @mat_kelcey "money to buy attention is all you need" 2021-03-13 23:43:43 If I were a billionaire CEO who happened to have sociopathic tendencies, I'd just avoid broadcasting that to millions of people on Twitter. There's no upside, just downside. In fact, most people in this situation understand this perfectly. 2021-03-13 19:44:39 @DouglasKGAraujo @yieldthought A lot of it is astroturfed. And some of it is from folks who haven't touched TensorFlow since 2017. The truth is, TF today is really good. 2021-03-13 19:25:36 @yieldthought ML people who say "i refuse to use TF" are like devs who say "i refuse to use C++". It's unserious. You may not like it, but every company out there with significant ML operations is using it. At some point you're going to have to learn it. 2021-03-13 19:22:23 @yieldthought TF isn't meant to be easy to use (that's more like Keras). It's meant to be powerful -- fast, scalable, production-grade. 
It's 100% the best infrastructure layer available today to build Keras. Though I like Jax as well (but it isn't nearly as mature). 2021-03-13 19:20:12 @yieldthought TF is just NumPy with gradients, TPU/GPU acceleration, and large-scale computation distribution. And the ability to export programs to mobile devices, javascript, etc. Basically, it's the layer of infrastructure you need if you're serious about ML. 2021-03-13 19:17:46 RT @fchollet: @yieldthought TensorFlow is like C++: it's complex, but the parts you actually have to use are straightforward (math ops, gra… 2021-03-13 19:16:32 @yieldthought TensorFlow is like C++: it's complex, but the parts you actually have to use are straightforward (math ops, gradient tape, https://t.co/oiMJsLvdrt, tf.distribute). And like C++, it's the fastest & 2021-03-13 18:57:13 RT @Afzali_K: Kerastuner was my last year revelation! It's incredible how many options it gives you while making the decision making easier! 2021-03-13 18:07:17 Then, instantiate a tuner and pass it your model building function. It will need an objective to optimize -- this could be the name of any metric found in the model logs. For built-in Keras metrics, the tuner will automatically pick whether to maximize or minimize the metric. https://t.co/A4WKtL0TeO 2021-03-13 18:07:16 Quick tweetorial: using KerasTuner to find good model configs. Define your model as usual -- but put your code in a function that takes a hp (hyperparameters) argument. Then, instead of using values like "embedding_dim = 512", use ranges: https://t.co/LiQd3ANrBJ(...) https://t.co/6feedjVp03 2021-03-13 17:39:30 RT @glichfield: With all the pushback from Facebook against @_KarenHao's recent story, and as one of the editors on the piece, I thought it… 2021-03-13 16:15:27 Direct experience is the layer of understanding from which all the others arise. You've got to try things and see how they work out. You won't know the taste of a new dish by reading the recipe. 
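The multi-GPU tweetorial above boils down to a single scope. Here is a minimal sketch of the pattern, with a toy two-layer model standing in for your real one (the model itself is a placeholder; `MirroredStrategy` also runs on CPU-only machines with a single replica):

```python
import tensorflow as tf

# Replicates the model on every available GPU and splits each batch
# into sub-batches, one per replica.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Everything that creates variables (model and optimizer) goes
    # inside the strategy scope.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) is then called as usual outside the scope; each batch
# is automatically split across replicas and gradients are synced.
```

Swapping `MirroredStrategy` for `TPUStrategy` follows the same shape, which is the point of the tweet: single-device code becomes multi-device code by changing only where the model is constructed.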
2021-03-13 07:41:30 @GholkarRushil You can start with the two "getting started" guides: https://t.co/ulW20ylnHE Then you can check out the examples section, where you will find most workflows covered. Read through some of them and see for yourself if you like the Keras programming style. https://t.co/eE1hRBF8Gt 2021-03-13 04:29:03 Keras has some pretty cool features (like here, train_step overriding) that you can use to implement what you need in as little code as possible. But my favorite part of Keras isn't any feature, it's the community -- folks like Sayak right here. Sounds cheesy but it's 100% true. https://t.co/8CQ1JxsGQW 2021-03-12 16:47:53 Very cool train set. I wonder if the test set looks similar? https://t.co/JSVl65WILY 2021-03-12 16:42:30 RT @TensorFlow: One year anniversary of the #TensorFlow Certificate program! We are proud to announce that 2500+ developers passed the exa… 2021-03-12 16:30:03 This is the gist of the mixup technique https://t.co/fjOoY8FPWz 2021-03-12 15:47:40 @RisingSayak Thank you for the great contribution! 2021-03-12 14:47:19 New example on https://t.co/m6mT8SrKDD: mixup, a domain-agnostic data augmentation technique. Implemented by @RisingSayak https://t.co/sLp27jsDi3 2021-03-12 13:38:45 Your user community is your brand 2021-03-12 13:37:20 When a product builds its brand on appealing to its users' sense of superiority and on disparaging the work of its competitors, the silver lining is that it becomes a magnet for assholes, and as result I don't have to deal with those in my own userbase 2021-03-12 01:56:40 It's crazy how much having competent and decent leadership can reduce your exposure to malarkey 2021-03-11 17:56:48 @kcimc Being the first name that comes to mind when thinking about 21st century artists in the year 2500 2021-03-11 06:21:07 That's my goal for 2021: maximize my intake of positive, exciting things. Then give back. 
2021-03-11 06:20:25 "garbage in, garbage out" -- and its inverse "greatness in, greatness out" applies to everything we do. It's when we immerse ourselves into what we find most interesting and exciting that we develop ideas and energy that we can give back. We reflect back what we've absorbed. 2021-03-11 02:22:41 Over 500,000 dead in the US due to months of denial, deflection, and an incompetent response. I hope everyone remembers. https://t.co/hr2F0eiN2Y 2021-03-10 23:35:19 Speak the language your audience understands. 2021-03-10 23:34:51 The ability to understand mathematical processes is widespread, but the ability to parse mathematical notation isn't. Insisting on teaching deep learning (which is fundamentally software engineering, not mathematics) with equations is like insisting on teaching Hegel in German. 2021-03-09 22:54:05 TensorFlow: the choice of a new generation https://t.co/gSy1KP9Sp9 2021-03-09 22:17:06 @AdamSinger Or smartphones. When I saw the first iPhone in person in January 2008 it was immediately obvious that it was the future -- not of phones, but of personal computing. Yet so many pundits were mocking it for being a very expensive phone... 2021-03-09 22:15:01 @AdamSinger I also remember all the computer vision people saying deep learning was just a fad in 2013-2014. Most things that are hailed as revolutions are actually fads, but when the real thing arrives it's clear as day. 2021-03-09 20:18:13 RT @random_forests: Check out Google Summer of Code : https://t.co/arE6jRtfiH. It's a 10 week paid program for students to work with… 2021-03-09 17:44:41 This can be used to shill something and create the illusion of popularity, to spread rumors about someone, or create the illusion that dozens of people are haters of someone. And this can all be done by a single person in a few hours. 
2021-03-09 17:42:39 The reason why anonymity on social media is toxic isn't so much that you can't know who someone is, but that a single person can quickly turn themselves into a "crowd". On Reddit, there's not even an email verification step, so it's trivial to quickly create thousands of accounts. 2021-03-09 17:37:08 This resonates. Unlike casual hate, harassment is very deliberate. It represents a big investment of time and focus on the part of the harasser, spanning several years. And the harasser is usually not a rando, but someone you know, who hides behind large numbers of anonymous accounts https://t.co/QZALn0xFDv 2021-03-08 16:21:07 @jimyosef I bid and a 2021-03-07 18:03:54 RT @Grady_Booch: Indeed. The first documented use of the word “software” was in an article by John Tukey. Published in 1952. 2021-03-07 17:59:17 Computers are still younger than a human lifetime. We're just getting started on this path, and its destination remains well beyond the horizon 2021-03-07 08:01:23 I'm sure the folks who made The Martian had to figure this one out 2021-03-07 07:59:17 Now, fun medical puzzle: if you took off your spacesuit on the surface of Mars, what would immediately happen to you? Would you... 2021-03-07 07:55:19 And any lower than that, it would freeze (which would be the default given that the surrounding atmosphere would be at around -60°C / -80°F) 2021-03-07 07:53:45 Fun fact: if you wanted to keep an open-air swimming pool on the surface of Mars, you'd have to keep it heated at a temperature exactly between 0°C and 0.5°C (about 32°F). Because the atmospheric pressure on Mars is so low, water would boil if its temperature got any higher. 2021-03-07 02:34:02 RT @dylanmatt: Still kind of stunned and heartened at the scale of the American Rescue Plan. The 2009 stimulus was 5.5% of 2008 GDP. The… 2021-03-06 21:11:22 @quasimondo I paid for it exactly how much it's worth: $0. Personally I'd be happy to see blue ticks become an ID verification mark open to anyone. 
Initially I thought that was where the system was headed. 2021-03-06 20:19:57 Creating artificial scarcity isn't creating value... removing existing scarcity barriers is creating value. 2021-03-06 18:27:49 New example on https://t.co/m6mT8SrKDD: a convolutional autoencoder for image denoising https://t.co/jpRl2u8XkF 2021-03-06 17:40:41 This moves the needle for hundreds of millions of people in a big way. We're making real progress. And the fact that the bill was passed with only a razor-thin Democratic majority is a big success. https://t.co/cIlqgwXGnu 2021-03-06 17:28:28 It's getting better, but there's still too much deep learning hype for deep learning to truly flourish. Best days are still ahead -- once the tech doesn't make headlines anymore 2021-03-06 05:48:29 The grass is sprouting https://t.co/yt86YjiNm3 2021-03-04 23:04:58 It's not called Keras because making it is kerja keras (Indonesian for "hard work")... but it might as well be tbh 2021-03-04 07:06:43 Most of the time, when we say something, what the other person ends up understanding is pretty different from what we intended to convey (including on Twitter, obviously). But it's still pretty amazing that there's any alignment at all. Language is a miracle. 2021-03-04 04:10:31 Take pride in what you do, not in an identity you were born into 2021-03-04 00:01:01 https://t.co/QFl5mdzgfN has now *64* code walkthroughs demonstrating common deep learning workflows -- a nice round number. I'm super proud of our contributor community that made it happen. Awesome contributions all around! 2021-03-03 21:47:21 The truth is in the code. 2021-03-03 19:08:55 Finally, it combines the encoder and decoder in a Model subclass, where training logic is packaged in the train_step() method (this enables training via fit(), which gives you callbacks and distribution support for free). Also note the generate() method for inference! 
https://t.co/cG6KkgUNNO

2021-03-03 18:29:15 Then it defines a Transformer encoder, which is your usual Transformer block, as well as a Transformer decoder, which is also your usual Transformer block, but with causal attention to prevent later timesteps from influencing the decoding of earlier timesteps. https://t.co/Ige93alEwK

2021-03-03 18:27:15 This example was implemented by @NandanApoorv. Let's take a look at the model architecture. It starts by defining two embedding layers: a positional embedding for text tokens, and an embedding for speech features, which uses 1D convolutions with strides for downsampling. https://t.co/7bj9QtZSyV

2021-03-03 17:17:46 New code walkthrough on https://t.co/m6mT8SrKDD: speech recognition with Transformer. Very readable and concise demonstration of how to build and train a speech recognition model on the LJSpeech dataset. https://t.co/LDKhOnIBLG

2021-03-03 14:07:40 RT @issielapowsky: NEW: Between Aug 2020 and Jan 2021 far-right misinformation Facebook pages drew more engagement per follower than any ot…

2021-03-03 02:10:23 The speed at which I accumulate open tabs in Chrome is alarming. I have to declare tab bankruptcy every couple days

2021-03-02 06:05:40 @SergeBelongie A full year of nonstop marchness

2021-03-02 03:45:19 @ayyadhury Note that it's about image generation, and not, for instance, image classification

2021-03-02 03:44:31 @ayyadhury Yes, it's not quite entry level but it's well explained

2021-03-02 02:55:08 Got this very nice book in the mail. All TensorFlow/Keras, with very readable code examples. Includes a section on StyleGAN, which will come in handy since I was trying to implement it the other day https://t.co/M56DB7E0Dm

2021-03-01 18:02:37 If you want to add a new code example to https://t.co/QFl5mdzgfN, check out our repository: https://t.co/wf8Arodiwp I'll be reviewing your PR.
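The train_step() pattern mentioned in the tweets above -- package custom training logic in a Model subclass so you still get fit()'s callbacks and distribution support -- can be sketched minimally as follows. The model and loss here are toy stand-ins, not the actual example code:

```python
import tensorflow as tf
from tensorflow import keras


class CustomModel(keras.Model):
    """Toy model that overrides train_step() to run custom training
    logic while keeping fit()'s callbacks and distribution support."""

    def __init__(self):
        super().__init__()
        self.dense = keras.layers.Dense(1)
        self.loss_tracker = keras.metrics.Mean(name="loss")

    def call(self, inputs):
        return self.dense(inputs)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            # Any custom loss computation can go here.
            loss = tf.reduce_mean(tf.square(y - y_pred))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.loss_tracker.update_state(loss)
        return {"loss": self.loss_tracker.result()}

    @property
    def metrics(self):
        return [self.loss_tracker]


model = CustomModel()
model.compile(optimizer="adam")  # no loss argument: train_step computes it
history = model.fit(tf.random.normal((64, 4)), tf.random.normal((64, 1)),
                    epochs=1, verbose=0)
```

Because the loop lives inside train_step(), the same model can be trained with callbacks, tf.distribute strategies, or TPU step fusing with no further changes.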
2021-03-01 16:59:59 You can learn more about this pattern here: https://t.co/yzFys0aqlr

2021-03-01 16:58:40 This example features a model with a custom train_step(). Overriding train_step() enables you to write arbitrary training logic (unsupervised clustering in this case) while benefiting from the features of fit(): callbacks, built-in distribution, step fusing on TPU... https://t.co/XUaIt0TFyo

2021-03-01 16:55:33 New code walkthrough on https://t.co/m6mT8SrKDD: unsupervised image clustering with a contrastive loss. https://t.co/3huVnQ9pBP

2021-03-01 16:35:54 RT @PyImageSearch: New tutorial! Adversarial attacks with FGSM (Fast Gradient Sign Method) - Implement FGSM with #Keras and #TensorFlow…

2021-03-01 05:20:37 Especially as small details interact with each other over time. It compounds.

2021-03-01 05:19:08 One of the roles of a project owner is to care *a lot* about small details that would seem completely insignificant to most people. It may not seem like a rational attitude, but it is. The small details matter -- in aggregate.

2021-03-01 05:10:55 The success of a project doesn't come so much from the "one big idea" that started it, as from the accumulation of thousands of small decisions over years of execution.

2021-03-01 00:27:22 Just because something involves technology doesn't mean it's progress. Progress involves creating value in people's lives.

2021-02-28 06:56:20 If I were famous I'd leak a video of myself saying ridiculous things to the camera, under the caption "This video isn't real! DeepFake technology is getting incredible!"

2021-02-28 06:45:13 RT @neilrkaye: What percentage of all global fossil fuel CO₂ emissions since 1751 have occurred in my lifetime. If you are 15 it is about…

2021-02-28 05:06:38 @Annaleen Memphis (not the TN one)

2021-02-27 21:45:21 It's pretty easy to tell whether something is a superficial fad or fundamentally important, but it's much harder to tell if something will get big or not.
It isn't uncommon for a fad to become hugely popular for a time (sometimes for a long time)

2021-02-27 20:36:45 @giant_hornet For production-level ML ops, yes, probably

2021-02-27 19:56:07 New code example on https://t.co/m6mT8SrKDD: an explainer on how to create TFRecord files. TFRecord is an efficient binary data format for machine learning that helps you manage your datasets at scale. https://t.co/B5ZUtiI8GY

2021-02-27 01:47:09 Really looking forward to sleeping through the weekend

2021-02-26 16:12:50 RT @modacitylife: Your regular reminder that Delft’s historic market square was being used as a surface parking lot as recently as 2004. A…

2021-02-25 19:30:07 Is there a German word for features that one only ever triggers by mistake, like the Siri button on the MacBook touchbar?

2021-02-25 04:40:27 RT @pbump: This is a graph worth thinking about. https://t.co/r7idvl4z3g https://t.co/SXqoRx4l12

2021-02-24 17:47:35 Compartmentalize risk and complexity: start a new side system. If it fails, at least it doesn't bring down the whole.

2021-02-24 17:46:35 A single system shouldn't do too much, as that ties the viability &

2021-02-23 23:17:16 If, as a by-product of the forward pass of the layer, you end up with any loss or metric you want to track during training, just use self.add_loss or self.add_metric. https://t.co/QBMSIm2hQV

2021-02-23 23:16:10 A quick tweetorial. This is how you create a new Keras layer. You can either create the weights upon layer instantiation, in __init__(), or create them lazily in build() based on the first input shape seen. https://t.co/6iBPpcpAAA

2021-02-23 22:01:07 This is definitely the optimal way for a healthy economy to allocate resources so as to maximally realize human potential in the long term. The invisible hand, efficient markets, all that. It makes perfect sense.
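The layer tweetorial above can be condensed into code: weights created lazily in build() from the first input shape seen, and a side-product quantity tracked via self.add_loss(). This is a minimal sketch; the layer name and the activity-regularization term are illustrative, not from the original thread:

```python
import tensorflow as tf
from tensorflow import keras


class Linear(keras.layers.Layer):
    """Custom Keras layer: weights are created lazily in build(),
    based on the first input shape seen, rather than in __init__()."""

    def __init__(self, units=32, activity_reg=1e-3):
        super().__init__()
        self.units = units
        self.activity_reg = activity_reg

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(
            shape=(self.units,), initializer="zeros", trainable=True)

    def call(self, inputs):
        outputs = tf.matmul(inputs, self.w) + self.b
        # A by-product of the forward pass, tracked during training
        # via self.add_loss() (here: a toy activity regularizer).
        self.add_loss(self.activity_reg * tf.reduce_sum(tf.square(outputs)))
        return outputs


layer = Linear(units=8)
y = layer(tf.ones((2, 4)))  # build() runs on this first call
```

After the first call, the tracked term is available in `layer.losses`, and fit() adds it to the training loss automatically.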
2021-02-23 21:50:01 At this point the difference between the S&

2021-02-23 17:35:17 Writing a book is easy: it boils down to answering a series of multiple-choice questions. Select the correct next word from a finite set of options, repeat. Until it's done.

2021-02-23 03:07:27 The organization that produces the software is more valuable than the software itself. That's what your real product is. Polish it

2021-02-22 21:49:44 RT @gadyepstein: Important story. Among the findings: Ben Shapiro was shadow-promoted on Facebook, giving him an algorithmic advantage over…

2021-02-22 18:46:10 The 2nd reason, especially valid in the US, is the influence of the oil &

2021-02-22 18:46:09 The primary reason why countries with large CO2 emissions haven't gone nuclear is economic: the upfront cost of a nuclear plant is a large multiple of that of a coal plant. That's why coal is king in India, for instance. Nothing to do with activists.

2021-02-22 18:46:08 Seeing lots of takes about nuclear power and its opponents. Yes, nuclear power could be an important element of a climate solution. Yes, the world needs to build more nuclear power plants. But it's absurd to blame environmental activists for the fact that it hasn't happened yet. https://t.co/2r8u5nh70W

2021-02-22 16:08:47 Sometimes people say, "let's catch up when the world is no longer ending" and I'm like, "you mean, once the Fall is behind us, in the 23rd century?"

2021-02-22 04:55:32 Le Grand Meaulnes is a good book, by the way

2021-02-22 04:35:36 I wonder if The Great Gatsby was inspired by Le Grand Meaulnes. It has the same larger-than-life romantic hero main character, the same "within and without" narrator, the same overall template, the same themes, and even the same title. Published 12 years later.

2021-02-22 01:15:25 For best results, fall in love with the process, not the result

2021-02-22 00:23:52 @togelius @flantz @razsaremi @LilianTogelius Congratulations!!
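The TFRecord explainer mentioned a few tweets above boils down to serializing tf.train.Example protos and parsing them back with a feature spec. A minimal sketch (the file name and feature names are made up for illustration):

```python
import tensorflow as tf

path = "demo.tfrecord"  # hypothetical file name

# Write: each record is a serialized tf.train.Example proto.
with tf.io.TFRecordWriter(path) as writer:
    example = tf.train.Example(features=tf.train.Features(feature={
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[3])),
        "text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"hello"])),
    }))
    writer.write(example.SerializeToString())

# Read: parse records back using a feature spec that mirrors the schema.
spec = {
    "label": tf.io.FixedLenFeature([], tf.int64),
    "text": tf.io.FixedLenFeature([], tf.string),
}
dataset = tf.data.TFRecordDataset(path).map(
    lambda record: tf.io.parse_single_example(record, spec))
parsed = next(iter(dataset))
```

The same tf.data pipeline scales to sharded TFRecord files, which is what makes the format useful for managing large datasets.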
2021-02-21 19:04:35 Less cognitive bias, less time wasted, more new insights that will inform your next idea.

2021-02-21 19:03:35 When we get a new idea, our impulse is to look for signs that could validate it. But the most effective way to make real progress is to look for the simplest way to prove your idea wrong.

2021-02-21 03:26:58 wow this is dark https://t.co/8CcBx7HMCr

2021-02-20 19:57:52 You can't placate extremists by agreeing to their demands, inviting them to the table, giving them more power. They will always end up explosively turning on you. They're defined by perpetual hostility, not by a specific set of reasonable goals and values you could compromise on.

2021-02-19 20:00:57 Animals evolved to fit their environment -- everything about them (us) is a product of environmental constraints. Likewise language is a construct evolved to fit a specific set of functions, and you cannot model it independently from this context.

2021-02-19 19:58:50 This is akin to modeling the appearance of animals as a statistical distribution while ignoring the environment in which they live. You could use such a model to generate plausible-looking animals, but don't expect them to be able to survive in the wild (environmental fitness)

2021-02-19 19:52:20 Reminder: language serves a variety of purposes -- transmit information, act on the world to achieve specific goals, serve as a social lubricant, etc. Language cannot be modeled as a statistical distribution independent of these purposes.

2021-02-19 19:47:48 Interesting analysis by @mhmazur. Human work is driven by clear goals and is informed by task-specific context. A model that is optimized for generating plausible-sounding text, ignoring goals and context, virtually never produces any useful answer (unless by random chance). https://t.co/QPzapZgale

2021-02-19 19:29:34 New code example on https://t.co/m6mT8SrKDD! Gated Residual Networks (GRN) and Variable Selection Networks (VSN) for structured data classification.
https://t.co/1Ez9kpcxtL

2021-02-18 21:07:26 Congrats to @NASA @NASAJPL on the successful landing!

2021-02-18 20:18:45 A big reason why research labs that hype up general AI progress are irresponsible: their talking points end up shaping the worldview of decision-makers -- in insane ways https://t.co/BWakU7grNS

2021-02-18 18:56:59 New code walkthrough on https://t.co/m6mT8SrKDD! Two techniques for improving the memory efficiency of recommender systems: the Quotient-Remainder trick, and Mixed Dimensions Embeddings. https://t.co/oegi66I1QP

2021-02-18 16:49:52 RT @aureliengeron: Just finished updating all the notebooks for my book to Scikit-Learn 0.24 and TensorFlow 2.4. Phew... https://t.co/rcQ…

2021-02-18 07:10:47 RT @nytimes: Life expectancy in the U.S. fell by a full year in the first half of 2020. It was the largest drop since World War II and the…

2021-02-18 06:20:37 It is often necessary for centralized institutions to exist in order to successfully distribute power. For instance, this is exactly what unionization is about.

2021-02-18 06:18:30 Successful decentralization isn't just getting rid of the legacy centralized power structure (which simply leads to the rise of even less accountable central powers). It's developing a balanced system that can self-organize in a fair, democratic way.

2021-02-18 03:58:08 I run the neural net I run the SAT solver I run the combination neural net and SAT solver

2021-02-18 01:17:31 @smly 10!!

2021-02-17 17:48:29 New code walkthrough on https://t.co/m6mT8Sa9M5: Switch Transformers, an architecture that makes it possible to increase the representational capacity of a Transformer while keeping its computational cost low. Implemented by Khalid Salama https://t.co/nkMu0QwPuo

2021-02-17 16:40:29 Every crisis we face from this point on -- the climate crisis, the next pandemic, natural disasters, terrorist attacks, etc -- is going to be made considerably worse by viral disinformation and political polarization.
We saw this first hand with Covid. https://t.co/90y12egymN

2021-02-17 06:23:45 I like machine learning algorithms with whimsical, poetic names. Even better if these properties are accidental

2021-02-17 02:45:02 They actually come in many different styles, but these three features -- cluster of triangles, columns, partial stone covers -- seem to be universal constants https://t.co/PMNwuHzD2D

2021-02-17 02:31:38 This one is pretty good but it needs more random triangles https://t.co/F4czEWrz0O

2021-02-17 02:27:19 In America, the hallmark of personal success is when your house is a cluster of triangles of various sizes, with a front door flanked by two columns, and a stone-like cover glued on some of the outer walls

2021-02-16 23:32:52 The hardest thing in machine learning is to find how to productively leverage it in your product. The second hardest thing is to collect and annotate the right dataset. Building and training models is relatively straightforward by comparison

2021-02-16 20:22:31 RT @vkhosla: As I always say, most people are limited by what they think they can do rather than what they can do. Knowing something is pos…

2021-02-16 19:09:28 To discover something, you must first expect to find it.

2021-02-16 19:05:36 Merely knowing that a different, better way is possible, causes the breakthrough to manifest into existence. Our beliefs determine our outcomes.

2021-02-16 19:03:47 One of my favorite Kaggle facts: after a long leaderboard stagnation period for a competition, seeing one team make a sudden breakthrough will often cause multiple independent teams to quickly reproduce the same breakthrough -- with no knowledge of how the first team did it.

2021-02-15 18:24:49 RT @UNFCCC: Confused about the #PolarVortex?
Usually a strong jet stream confines Arctic air to the north, stabilized by a big difference i…

2021-02-15 17:42:49 Many reply tweets are the Twitter equivalent of finding a spot on a street with high foot traffic and holding a sign that says "look at me, I'm an idiot"

2021-02-15 17:31:06 The quickest way to lose the ability to do something is to stop believing you can do it.

2021-02-14 22:09:05 The side joke is that a self-styled rationalist's primary mechanism for adopting beliefs remains identity reinforcement -- just like everyone else -- rather than the actual rational value of an argument.

2021-02-14 22:06:11 People are subject to identity bias (you are more likely to believe a statement that seems to match your identity). If your identity is "I'm a rationalist", then you're more likely to fall for arguments that feature superficial scientific attributes.

2021-02-14 21:43:14 If you want to fool a nerd, make long, complex, overly abstract arguments, free from the shackles of reality. Throw equations in there. Use physics analogies. Maybe a few Greek words

2021-02-14 21:39:40 The belief in recursive intelligence explosion is a good example: only someone who thinks of themselves as a very-high-IQ hyper-rationalist could be susceptible to buy into it

2021-02-14 21:37:11 There's a pretty strong relationship between one's self-image as a dispassionate rational thinker and the degree to which one is susceptible to fall for utterly irrational beliefs that are presented with some sort of scientific veneer

2021-02-14 18:58:27 Science: fuck around and find out. Then write it down and submit it for peer review https://t.co/P5Yvnk2Ckx

2021-02-14 17:03:36 Due to nuclear tests being banned, new designs of nuclear warheads are being developed entirely via simulations -- which works (probably?) because our model of physics is pretty reliable.

2021-02-14 17:03:04 But that doesn't mean your model is worthless.
Surely we all have the experience of writing a large piece of code and having it work on first try.

2021-02-14 17:00:19 Of course, if the event has never happened before, that implies that your model of how it happens has never been validated in practice. You can model the uncertainty present in what you know you don't know, but you'll miss what you don't know you don't know.

2021-02-14 16:58:32 An event that only happens once can have a probability (before it happens): this probability represents the uncertainty present in your model of why that event may happen. It's really a property of your model of reality, not a property of the event itself.

2021-02-14 06:13:46 https://t.co/ClA9GBy6Ui

2021-02-14 02:55:02 We're recording all of the dots -- our successors will have currently-unimaginable technology to connect them.

2021-02-14 02:54:10 Consider the events of January 6. Future historians will likely know who was there, who said what to whom, who did what, minute by minute. The amount of information you can recover from even a single video is enormous, and we have hundreds of them.

2021-02-14 01:57:26 An under-appreciated feature of our present is how we record almost everything -- far more data than we can analyze. Future historians will be able to reconstruct and understand our time far better than we perceive and understand it right now.

2021-02-13 19:10:53 RT @A_K_Nain: Idk how many of you have tried TF cloud yet, but if you haven't you should. Highly recommended. The workflow is exceptionally…

2021-02-12 22:03:20 @hardmaru Pretty much every important thing in life is harder (and more rewarding) than making money. There is more money in the world than there is meaning.

2021-02-12 17:54:52 In case that wasn't clear, the following recurring pattern: User: *incites political violence* Platform: *bans user* Political Party: why are you silencing the voices of our movement?! ...is actually a pretty grave self-indictment on the part of Political Party.
2021-02-12 17:26:25 It would be similar to buying a soda company because you want to reuse its carbonated sugar water manufacturing plants. Strongly implies that the original business was worthless.

2021-02-12 17:22:42 RT @quaesita: A free #AI course that puts the fun in fundamentals! Sign up here: https://t.co/wWNzyJ2NvO - the first 90min arrives in a fe…

2021-02-12 17:19:56 What makes a SaaS company valuable isn't really its software (which could be cloned), it's its contracts, its reputation, its customer funnel... buying a SaaS company "for the software" then winding down its business is a pretty bad sign

2021-02-12 03:57:15 @Jayson_Marwaha @CHUdeLyon @HCL_research @bratogram That's a different fchollet, neat paper tho

2021-02-11 23:09:45 To be fair, I'm not the best person to give this advice, because I tend to work significantly more than I should. But at least I'm aware of it

2021-02-11 22:37:54 Beyond that, "living a life you won't regret" typically involves more than never-ending work.

2021-02-11 22:36:56 I've done 90 hour weeks on multiple occasions, and IMO it's not sustainable for more than 3 weeks in a row. You reach long-term peak performance when you balance a diverse set of activities and when you have time (and mental space) for deep thinking.

2021-02-11 22:19:34 @AutoArtMachine I haven't used Talos, so no idea! But if you're tuning a Keras model, KerasTuner would be hard to beat for user experience. And you can use it with Vizier (via cloud service), the optimization service that Google engineers use for their own models.

2021-02-11 22:18:02 RT @paul_rietschka: @fchollet A million hearts.
Used to be on the fence wrt GCP, but the seamless (or mostly so) integration of TF + genera…

2021-02-11 22:09:22 https://t.co/WCws9LunBg

2021-02-11 22:09:02 TensorFlow Cloud is being adopted by ML engineers -- part of a "superpower toolkit" for MLOps :) https://t.co/WCws9LunBg https://t.co/YsSgNNoWmk

2021-02-11 21:25:17 RT @zzznah: New Neural CA article "Self-Organising Textures" is live on @distillpub !!! From @eyvindn, @drmichaellevin, @RandazzoEttore and…

2021-02-11 20:48:20 To get started with TensorFlow Cloud, first you'll need to configure a Google Cloud project -- here's a guide you can follow: https://t.co/f02Yf63Yxq

2021-02-11 20:47:38 2. Structured data classification with a Wide &

2021-02-11 20:47:10 With TensorFlow Cloud + KerasTuner, you can easily launch distributed hyperparameter tuning jobs on Google Cloud right from a Kaggle notebook or Colab notebook. Check out these examples: 1. Image classification with distributed hyperparameter tuning: https://t.co/f3y8LmCfFf

2021-02-11 02:23:01 Creating anything ambitious is hard. It means being constantly disappointed in your output, constantly being frustrated with the limits of your ability. That's normal. Important to remember that every creator goes through this, and persistence is what makes all the difference

2021-02-09 02:40:01 Reaction videos are the quote tweets of YouTube

2021-02-09 01:06:42 RT @TensorFlow: TensorFlow Everywhere is ready for take-off! With over 20 locations and 9 languages, join us for #TFEverywhere2021 - a…

2021-02-08 05:02:58 If you're building recommender systems, I recommend checking out TFRS: https://t.co/vdOudcXmqH

2021-02-08 02:41:51 @Muenchner_Junge It follows the hierarchy of semantic categories of objects in the world. It's pretty intuitive

2021-02-08 02:21:58 The first six days in Genesis follow the order of the steps a developer would take to create a game world using procedural generation.

2021-02-08 01:20:41 Can confirm.
Silicon Valley has dozens of Malaysian &

2021-02-06 19:00:14 An underrated feature of Twitter is that you can use it to check whether the clever joke you just thought of is actually original or not

2021-02-06 18:27:30 Investing in human potential and human ingenuity provides exponential returns. https://t.co/SEElVPFOiT

2021-02-05 20:45:46 4. If ever called out, attack, harass other developers. Maybe also send a few anonymous insult emails? If this description fits you, please stick to investment banking or cryptocurrencies. Open-source is obviously not for you. And may not be the financial bonanza you expect.

2021-02-05 20:45:45 How not to do ML open-source library development: 1. Clone existing packages, almost feature-by-feature. Possibly copy source code. 2. Arrogantly claim your clone is better than the original. Spread FUD about other libraries. 3. Claim others are copying you (projection much?)

2021-02-05 19:38:53 You know how some software feels "enterprisey"? That has little to do with the use case of the software or who buys it. It's a symptom it was built by people who weren't using it. All software where the design process is disconnected from the UX eventually feels enterprisey.

2021-02-05 19:33:42 If you're building a product... talking to your users and using the product yourself are two of the most effective uses of your time. Executing in the right direction is more important than execution speed.

2021-02-05 04:01:11 The former copy what's already popular. The latter build that thing they need that doesn't exist yet.

2021-02-05 03:59:13 The least effective kind of technologists are the opportunists driven by greed. The most effective are driven by curiosity. Scratching their own itch, not lottery tickets.

2021-02-04 18:37:41 RT @RisingSayak: This guide details everything you need to know to create your own `fit` call - https://t.co/NqeHeLP0hF.
Thanks to @fchol…

2021-02-04 18:37:40 RT @RisingSayak: When implementing a custom loop in @TensorFlow, try putting it in a `fit` call. Because when you do that you can *seamless…

2021-02-04 14:59:54 Social media is a game that plays you.

2021-02-04 03:48:36 RT @m3sibti: Reading the best book on the deep learning and tf/keras @TensorFlow. Thank you @fchollet for such an amazing resource. Waiting…

2021-02-04 03:33:41 My favorite part of my career so far has been the people I've had the chance to meet and work with.

2021-02-03 18:51:16 @ChrSzegedy Overall I'd say the pre-industrial world saw a linear rate of change, and the industrial and post-industrial worlds also see linear rates of change. What was non linear was the transition between the two

2021-02-03 18:50:02 @ChrSzegedy Yes, though the difference isn't dramatic. Peak rate of change was probably the transition from pre-industrial to industrial, though the period that spans the 2 world wars is also a strong contender. Change since WW2 has generally been a bit slower, despite the digital revolution

2021-02-03 18:44:50 @ChrSzegedy Either 1870-1920 or 1910-1960

2021-02-03 17:57:19 RT @MelMitchell1: "Be it resolved, the quest for true AI is one of the great existential risks of our time." Arguing for: Stuart Russell…

2021-02-03 17:24:18 @math3mantic_ Yes

2021-02-03 16:41:05 Tip: Keras has an einsum layer (EinsumDense, similar to Dense but backed by tf.einsum). You can also use tf.einsum in the call() method of a custom layer. https://t.co/QRQeqttCUC https://t.co/rJSIqa8onD https://t.co/8ELDJm14bI

2021-02-03 02:44:37 The user growth of Keras has been remarkable in 2021 so far. I guess lots of you had "get into deep learning" on your new year resolution list

2021-02-02 23:28:47 As far as large-scale machine learning workflows are concerned, there's no doubt in my mind that the cloud is the future.
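The EinsumDense tip above, in code. A minimal sketch assuming a recent Keras version where the layer is exposed as keras.layers.EinsumDense (older TF releases had it under layers.experimental); the einsum equation here projects the last axis, like a Dense layer applied per timestep:

```python
import tensorflow as tf
from tensorflow import keras

# EinsumDense: like Dense, but the transform is specified as an einsum
# equation -- handy for multi-axis projections (e.g. attention heads).
# output_shape excludes the batch dim; None leaves the sequence dim free.
layer = keras.layers.EinsumDense("abc,cd->abd", output_shape=(None, 16))
x = tf.ones((2, 5, 8))   # (batch, sequence, features)
y = layer(x)             # projects the last axis: (2, 5, 16)

# The equivalent raw op, as you might use it inside a custom layer's call():
w = tf.ones((8, 16))
y2 = tf.einsum("abc,cd->abd", x, w)
```

The einsum equation makes the axis bookkeeping explicit, which is why it scales cleanly to things like per-head projections in attention layers.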
https://t.co/FXxZ23lcv8

2021-02-02 22:21:14 RT @IEEESpectrum: #AI can seem super-intelligent when confined to narrow tasks. But how smart is it really? IQ tests from @fchollet, @kaggl…

2021-02-01 16:25:27 RT @RisingSayak: Is it possible to write an extremely readable implementation of a CLIP like model in #Keras & HELL YEAH, it…

2021-02-01 01:27:52 At any given instant, the world contains an incalculable amount of both unwitnessed suffering and unwitnessed beauty. The hare devoured alive in silence, the dreamy cloudscapes no one is there to see. Crippling back pain is not even a drop in the ocean.

2021-01-31 18:29:09 RT @CShorten30: OpenAI's CLIP has been added to Keras Code Examples for Natural language image search!! This video walks through the code…

2021-01-31 07:00:54 @mat_kelcey Basically just fix your global batch size to something reasonable (each core should get at least a batch >

2021-01-31 05:54:05 @mat_kelcey Or just use a large value for steps_per_execution. With step fusing you can maintain high utilization while keeping your global batch size small.

2021-01-31 01:43:37 This is honestly a really cool demo -- if you haven't seen it, take a look now! https://t.co/14GkNDBaG8

2021-01-30 19:32:01 The increasing radicalization of a movement is a sign of its weakness. It makes it more dangerous, but it also signals that its collapse is getting closer.

2021-01-30 01:55:34 @DynamicWebPaige @CommonGrounds Hate to break it to you, but olds like us aren't supposed to stay up past 10pm

2021-01-29 19:46:58 The ability to introspect and learn is what turns a failure into a step towards success

2021-01-29 18:37:27 In programming, *what you name things* (arguments, variables...) is the first layer of documentation about what they represent and how they should be used. Use names that actually mean something, not single letters (unless that letter is already a well-established convention)

2021-01-29 18:05:12 Do you know something important that others don't?
If not, the best way for you to "stick it to Wall St" is to put your money in an index fund.

2021-01-29 18:03:15 Assess risk all you want - most investors' risk analysis doesn't reflect the actual distribution of outcomes. Markets are about information arbitrage: those who make outsized returns are those who managed to obtain an info advantage -- through hard work, luck, being an insider…

2021-01-29 17:56:49 The risk vs reward spiel, a classic. Those who like to preach it rarely live by it. The fact that Shkreli is behind bars now shows that he probably wasn't applying "careful risk assessment" to his own actions. https://t.co/PoxHJzijIZ

2021-01-29 02:31:14 "Why should I go watch this movie?" "You will have *four* different opportunities to cry" "I'm in"

2021-01-29 02:27:55 I still sometimes think about this movie that had the promotional catch phrase "you'll be able to cry 4 times" (4) Implies the existence of a juicy "Crying as a Service" market (CaaS)

2021-01-28 23:33:15 @JonathanSumDL @jpatanooga Definitely port your models to TF/Keras so you can leverage TF.js.

2021-01-28 21:05:44 Josh @jpatanooga has a new book out about MLOps with Kubeflow -- check it out! https://t.co/aNWyBq0vRY

2021-01-28 20:02:56 RT @TensorFlow: Integrate TensorRT in TensorFlow 2.x TF-TRT leverages TensorFlow's flexibility while also taking advantage of the optimiz…

2021-01-28 19:53:12 @DynamicWebPaige Fun fact: Frodo was originally named Bingo in the first draft

2021-01-28 19:10:28 @cwarzel sell now to lock in your personal growth

2021-01-28 16:06:53 RT @learmonth: More than half of all Robinhood users hold GameStop stock, more than 60 percent hold AMC stock

2021-01-28 02:16:48 When I saw "HOLD THE LINE" trending I initially misread it as "HOLD THE BAG"

2021-01-27 18:55:28 @AdamSinger It's mathematical. It can only go up

2021-01-27 18:54:36 (yes this is sarcasm. Unless...?
Now is your last chance to buy under $400)

2021-01-27 18:52:33 People who say $GME is overvalued don't understand the value proposition. It's no longer a stock, it's a store of value. It's the new digital gold. And for now it has only 3% of the $BTC market cap, still plenty of room to grow. My price target is over $9,000

2021-01-27 17:45:22 This is pretty cool -- a dashboard to monitor your Keras Tuner hyperparameter searches https://t.co/msy5ZvrQso

2021-01-26 21:12:34 For context, conservative America was swept by a Satanic Panic in the 80s -- a wave of conspiracy theories about satanic child abuse in daycare centers, etc. That was before the web. It has become more mixed with identity politics but the content and audience are still the same https://t.co/j8kxo3Sw7O

2021-01-26 19:21:09 RT @TensorFlow: The first release (0.1) of TFLite Support! Check out this toolkit that makes deploying TFLite models on mobile devices m…

2021-01-26 19:15:25 RT @lc0d3r: Pretty extensive example with a dual encoder, custom loss and GradientTape

2021-01-26 18:07:45 If you have 2 GPUs, it will use model parallelism -- place each model branch on a different device. https://t.co/Xr8A5zSM5F

2021-01-26 18:05:59 New code walkthrough on https://t.co/m6mT8SrKDD: image search with natural language queries using a dual-encoder approach. The idea is to learn a joint embedding space for images and associated captions (inspired by OpenAI CLIP). https://t.co/fWHsv18QuL

2021-01-26 17:54:26 RT @TensorFlow: Custom hardware powered by Tensorflow.js! Join @jason_mayes and @paulrjessop to learn how he built a robotic device tha…

2021-01-25 21:39:12 RT @AndrewYNg: I'm with @fchollet on this.
There're some best-practices on creating and organizing data that experienced applied ML people…

2021-01-25 20:26:50 RT @sundarpichai: To help get vaccines to more people, Google is providing $150M to promote vaccine education &

2021-01-25 19:26:33 https://t.co/mTJQfOAVzE

2021-01-25 00:46:06 Two things keep surprising me. One is how much and how quickly we can forget about the past. The other is how certain memories stay ever present, no matter how much time passes.

2021-01-24 19:22:22 In general, there is very little research done on best practices for data curation / cleaning / annotation, even though these steps have more impact on applications than incremental architecture improvements. Preparing the data is an exercise left to the reader

2021-01-24 19:17:47 ML researchers work with fixed benchmark datasets, and spend all of their time searching over the knobs they do control: architecture &

2021-01-24 17:49:24 The only "appeal" of a wannabe dictator is the power they hold and the favors they can bestow. No one likes a retired dictator.

2021-01-24 12:09:54 Before Game of Thrones, most pop stories were too reluctant to kill characters. After Game of Thrones, they tend to be too enthusiastic about killing characters. Will fade soon though.

2021-01-24 00:59:27 The complexity of the existing corpus of programs written in this language over the past 3 billion years defies the imagination

2021-01-24 00:59:04 DNA is fairly similar to a programming language, except that it programs matter, not bits, and its interpreter is far more complicated than any software interpreter we've ever built.

2021-01-23 19:38:15 @TLM_Cambridge This is a flimsy distinction that will feel increasingly prehistoric as technology advances. Even today, all technology is the outcome of an evolution process, and in the future most artifacts will be created via search/optimization without much human involvement.
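The dual-encoder image-search walkthrough mentioned above learns a joint embedding space for images and captions with a CLIP-style contrastive loss. A minimal sketch of the core idea; the two encoders here are trivial stand-ins (real ones would be a vision model and a text model):

```python
import tensorflow as tf
from tensorflow import keras

# Stand-in encoders projecting into the same joint embedding space.
image_encoder = keras.layers.Dense(32)  # (batch, image_feats) -> (batch, 32)
text_encoder = keras.layers.Dense(32)   # (batch, text_feats)  -> (batch, 32)


def contrastive_loss(image_feats, text_feats, temperature=0.07):
    """CLIP-style loss: matching image/caption pairs (the diagonal of
    the similarity matrix) should score higher than mismatched pairs."""
    img = tf.math.l2_normalize(image_encoder(image_feats), axis=-1)
    txt = tf.math.l2_normalize(text_encoder(text_feats), axis=-1)
    # Pairwise cosine similarities, scaled by the temperature.
    logits = tf.matmul(img, txt, transpose_b=True) / temperature
    labels = tf.range(tf.shape(logits)[0])  # image i matches caption i
    loss_i = keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True)
    loss_t = keras.losses.sparse_categorical_crossentropy(
        labels, tf.transpose(logits), from_logits=True)
    return tf.reduce_mean(loss_i + loss_t) / 2


loss = contrastive_loss(tf.random.normal((4, 64)), tf.random.normal((4, 100)))
```

At search time, you embed the query caption with the text encoder and retrieve the images whose embeddings are nearest in the shared space.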
2021-01-23 19:23:46 Living organisms represent a level of technological achievement that we're still far from rivaling. https://t.co/OGoH0xzMAS

2021-01-23 19:08:04 @mark_dow Now do 2006-2021

2021-01-22 22:19:34 @AdamSinger Same energy https://t.co/HjEaTfjgGX

2021-01-22 04:58:04 Who is training a StyleGAN on this dataset? https://t.co/V3D1JcjfHS

2021-01-21 23:24:25 When you have a fixed amount of modeling power, unstructured complexity and uncertainty are the same thing.

2021-01-21 19:14:51 Happy new year! https://t.co/vs3bWBjEUa

2021-01-21 18:51:59 Best time to do these things was in March/April, second best time is now. So much time wasted in inaction, denial, incompetence, magical thinking. https://t.co/Imi6cedDSd

2021-01-21 18:08:40 RT @ML_fr_company: In 2021, program #MachineLearning #DeepLearning with the #book by @fchollet, the creator of #keras and the author of this…

2021-01-21 02:29:03 Your brand is what you do. Ability, not image.

2021-01-21 02:26:59 I think developer tools can be... cool. For example, TensorFlow is cool. Not because of anything about TensorFlow itself, but because people are using it every day to create incredibly cool products &

2021-01-21 01:59:05 It's crazy what developers can create with TensorFlow.js nowadays https://t.co/waOZDhrG9x

2021-01-20 22:07:22 @brianklaas "California is going to have to ration water. You know why? Because they send millions of gallons of water out to sea, out to the Pacific. Because they want to take care of certain little... tiny... fish... that aren't doing very well without water, to be honest with you."

2021-01-20 17:41:52 Congrats on the well-deserved award! https://t.co/twHTphHn06

2021-01-20 17:00:26

2021-01-20 15:56:52 The fire is finally out, now we can start clearing the charred debris and rebuilding. And hopefully we will rebuild something fireproof. Because the arsonists are still lurking

2021-01-20 15:35:16 Much has been lost in the past four years. Here's hoping we can recover most of it in the next four.

2021-01-20 04:23:44 To truly understand a system, you need to have one foot outside and one foot inside. You need to be an external observer with deep insider knowledge.

2021-01-20 04:11:55 You may know him as the guy on the ¥10,000 bill.

2021-01-20 04:11:09 He was instrumental in the self-Westernization of Japan during the Meiji period -- explicitly as a means to resist Western imperialism.

2021-01-20 04:09:42 Fukuzawa Yukichi thought that national self-reliance (and beyond, greatness) was a direct consequence of personal self-reliance -- that Western countries had become powerful in large part due to their individualistic culture (which encouraged competition, education, &

2021-01-19 19:21:28 RT @alexandr_wang: .@scale_AI Transform will be Scale's first ever conference on March 26th Keynote speakers: - @sama, CEO of @OpenAI -…

2021-01-19 18:45:36 RT @A_K_Nain: Keras code examples ..as always https://t.co/YQ0oWQkCXV

2021-01-19 18:34:47 New code walkthrough on https://t.co/m6mT8SrKDD: the Vision Transformer model. Perform image classification *without convolutions* by applying a Transformer to vector encodings of image patches. Created by Khalid Salama. https://t.co/EanhDyZpa5

2021-01-19 18:17:00 January. It's already starting. California has a tough century ahead. https://t.co/p30UBQaPce

2021-01-19 02:00:20 Very true. Much info still to come out. https://t.co/AQbmjVm3NI

2021-01-19 01:30:04 A eureka moment is merely the crystallization of a very long log of accumulated thoughts.

2021-01-19 01:29:49 You develop interesting ideas not by being clever, but by thinking about things for a very long time, with obsessive intensity -- by following every trail of thought, opening door after door, and not stopping until you reach a conclusion.

2021-01-18 18:58:38 RT @CShorten30: Keras Examples are amazing! One of the most underrated resources for studying Deep Learning out there! Currently working…

2021-01-18 17:52:19 "An individual has not started living until he can rise above the narrow confines of his individualistic concerns to the broader concerns of all humanity." (Martin Luther King Jr)

2021-01-18 17:12:43 New example on https://t.co/m6mT8Sa9M5: Bayesian neural networks. Use TensorFlow Probability to create Keras models that predict a *distribution* of possible outcomes for a given sample, rather than a single prediction score. https://t.co/duAlCy7xS0

2021-01-18 16:47:26 RT @PyImageSearch: New tutorial! Contrastive Loss for Siamese Networks with #Keras and #TensorFlow - Understand contrastive loss - Imple…

2021-01-18 01:35:13 What would you change in your life if you wanted to double the amount you're learning every day? What kind of person would you be now if you made these changes 10 years ago?

2021-01-17 22:29:40 Every system that possesses property X must be encoded in a lower layer of abstraction, which often does not feature property X. Pointing this out does not negate that X is real, it indicates at which level of abstraction one should be looking for the origin of X.

2021-01-17 22:26:19 Conscious thought is naturally a product of unconscious thought processes -- much like life is a product of chemical processes that are clearly not alive. But that doesn't mean conscious thought doesn't exist, or that life doesn't exist. https://t.co/zMsSnX6Ljy

2021-01-16 20:04:27 I think the coming hardships are more likely to become catalysts of progress than triggers of collapse

2021-01-16 20:03:40 Hardships and setbacks can be catalysts of progress -- progress doesn't happen without challenges. Decline happens when we lose the ability to respond to challenges. Collapse happens when decline accelerates past a point of no return

2021-01-16 19:58:54 RT @washingtonpost: Misinformation dropped dramatically the week after Twitter banned Trump https://t.co/3zn6pJH6Q1

2021-01-16 18:47:14 RT @wjarek: .
@TensorFlow is the #Python package with the largest number of unique OSS contributors around the world over the last 12 month…

2021-01-16 18:00:25 Our particular civilization, as a system, features significant structural risk factors that could enable collapse, but it also has important collapse-preventing characteristics. I think the latter factors will win out

2021-01-16 17:59:38 For the record, I don't think civilization will collapse in the near future (within the next 400 years). Not even as a consequence of catastrophic climate change over the next two centuries. But we will go through some pretty rough patches

2021-01-16 07:26:30 Factors of decline are multiplicative. E.g. cultural &

2021-01-16 07:13:32 2020 was definitely a step backwards. If you're wondering how great civilizations can end up collapsing: they just have many 2020s in a row over several decades, with exponentially compounding cascade effects at each new development.

2021-01-15 21:47:46 We thought that in our dystopian future we'd be wearing N95s because of the air quality, but it turns out we're wearing them because of the pandemics

2021-01-15 18:25:06 I don't want us to just hit our OKRs. I want us to make something we're all proud of.

2021-01-15 16:36:41 Insightful thread about the emergence of new Covid variants. We're likely to see more of them in the near future. https://t.co/SDsPfT5rPr

2021-01-15 02:58:37 @DynamicWebPaige Oh no... Hang in there!

2021-01-14 19:58:19 @FlyPage What did you find painful, and how can we improve it?

2021-01-14 19:03:48 I love hearing people talk at length about something they're passionate about. Doesn't matter what. I just vicariously enjoy the passion &

2021-01-14 16:28:24 RT @fchollet: One thing we've shipped last year and that I'm looking forward to iterating more on later this year: TensorFlow Cloud. Train…

2021-01-14 16:12:27 RT @cwarzel: New: We mapped one part of an online phenomenon, which is when people go from normal poster to radical influencer almost overn…

2021-01-14 16:07:11 @saunterer_ No, no plans.

2021-01-14 16:06:55 @amogh7joshi Yes, it has been available since last summer.

2021-01-14 05:44:04 @HelloFillip I'm curious, what did you build?

2021-01-14 05:27:45 Would you use this to train your models? Why / why not? Have you tried it? How did it go?

2021-01-14 05:23:07 If you have any feedback on this product, or if you face any issue / friction setting it up, let me know. We're going to do our best to make this seamless and delightful. More info: https://t.co/jKmfyIUNRi

2021-01-14 05:21:39 One thing we've shipped last year and that I'm looking forward to iterating more on later this year: TensorFlow Cloud. Train your TF/Keras model in a distributed way on GCP by just adding one line to your local script / Colab notebook / Kaggle notebook. https://t.co/vXpD0J4GLo

2021-01-14 02:41:25 @W7VOA @POTUS @WhiteHouse Those staffers who haven't resigned have completely given up

2021-01-14 01:14:06 RT @DeepLearningAI_: Generative Deep Learning with TensorFlow, course 4 of the TensorFlow: Advanced Techniques Specialization, is now avail…

2021-01-14 01:07:03 I think creating something that is heartfelt and sincere is more important than creating something that people like. Creative freedom is an end in itself

2021-01-14 01:00:11 RT @MaxCRoser: This study estimates the cost to produce the missing vaccines to protect *the entire world* from COVID. https://t.co/c1oTe5P…

2021-01-12 01:09:30 There's nothing Python can't do https://t.co/EylSc2HnZJ

2021-01-11 20:03:47 RT @TensorFlow: Due to a number of vulnerabilities, we have released patches for TensorFlow on all versions from 1.15 to 2.3 inclusive. Fu…

2021-01-11 16:49:06 @jugurthahadjar @benedictevans My tweet above is my main conclusion: these attacks may at first seem like they come from many different random faceless people, but in fact they come from a very small number of very motivated people using anonymity as a shield and a multiplier

2021-01-11 16:36:59 @ugurkanates Read the tweets yourself and tell me who you think could be writing them.

2021-01-11 16:35:29 @jugurthahadjar @benedictevans I don't think there's anything we can do about it, unfortunately.

2021-01-11 16:16:29 @benedictevans It's sad. One thing I've found out is that "anonymous" attacks are often not from "randos", but from folks who have a specific reason to hide their affiliation / identity.

2021-01-11 16:15:15 The disgraceful behavior of some of the PyTorch devs is a rare but glaring exception.

2021-01-11 16:15:07 I want to stress that the vast majority of people in the Python ML open-source community are good people. Over the years, I've interacted (and occasionally collaborated) with people from MXNet, sklearn, CNTK, Caffe, Theano, JAX, etc -- all fantastic people…

2021-01-11 16:14:49 I've had to put up with this stuff since 2017. In 2017 I had to block one of these guys on Twitter, and it only got worse from there. It's sad and exhausting, to say the least.

2021-01-11 16:13:08 So apparently a PyTorch dev has created (+ regularly updated for half a year) a "parody" account to attack me. I assume this is done by the same people who send me anonymous emails of insults on a regular basis, and who create sockpuppet accounts to bash me on Reddit etc. https://t.co/mBgSlzw5NS

2021-01-11 06:24:46 It's hard to overstate the considerable impact of NumPy on both the scientific community and the destiny of Python over the last 14 years. Thank you! https://t.co/XvXnRSqoze

2021-01-11 04:37:43 RT @EvanMcMullin: Let last week's insurrection be a lesson to all of us. When a demagogue spends years cultivating a culture of indecency a…

2021-01-11 03:41:15 Agreed. With everything that's going on, it's ok to be softer with yourself, and it's essential to be considerate with everyone you interact with. We're all going through a lot. Also, thank you Paige for being awesome, as usual :) https://t.co/OefX2to3fE

2021-01-10 20:54:06 @jesper_wulff @cj_battey Not yet, but it's definitely a possibility

2021-01-10 20:50:33 @MarkusM99098101 When it's ready!

2021-01-10 20:50:23 @JonathanGarvey It's an introduction book, so the first chapters are dedicated to teaching the basics. Python is enough

2021-01-10 20:43:12 @lightroasted You can already, but it will be some time until it's released https://t.co/yGjEGr7jhh

2021-01-10 20:42:01 @KamilTamiola 2nd edition of Deep Learning with Python

2021-01-10 20:40:08 @cj_battey 2nd edition of Deep Learning with Python

2021-01-10 20:39:52 @TimIles_ Yes

2021-01-10 20:39:30 RT @Schwarzenegger: My message to my fellow Americans and friends around the world following this week's attack on the Capitol. https://t.c…
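The Vision Transformer walkthrough announced on 2021-01-19 classifies images without convolutions by feeding a Transformer vector encodings of image patches. A minimal NumPy sketch of just that patch-flattening front end (a hypothetical helper, not the walkthrough's actual code; the learned linear projection and the Transformer encoder are omitted):

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split an (H, W, C) image into non-overlapping square patches.

    Returns an array of shape (num_patches, patch_size * patch_size * C),
    one flattened patch vector per row, in row-major patch order.
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    patches = image.reshape(h // patch_size, patch_size,
                            w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (rows, cols, ps, ps, c)
    return patches.reshape(-1, patch_size * patch_size * c)
```

For example, a 72x72 RGB image with 6x6 patches yields 144 patch vectors of dimension 108, which a ViT would then linearly project and process as a token sequence.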

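The Bayesian neural network example from 2021-01-18 predicts a *distribution* of possible outcomes per sample rather than a single score. A toy NumPy sketch of the underlying idea (a hypothetical linear model with a Gaussian posterior over its weights, standing in for the example's TensorFlow Probability layers):

```python
import numpy as np

def predict_distribution(x, w_mean, w_std, n_samples=200, seed=0):
    """Monte Carlo predictive distribution for a linear model with
    an independent Gaussian posterior over each weight.

    x: (n, d) inputs; w_mean, w_std: (d,) posterior parameters.
    Returns the per-input predictive mean and standard deviation.
    """
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_samples):
        w = w_mean + w_std * rng.randn(*w_mean.shape)  # one weight draw
        preds.append(x @ w)                            # prediction under that draw
    preds = np.stack(preds)                            # (n_samples, n)
    return preds.mean(axis=0), preds.std(axis=0)
```

Inputs that only touch low-variance weights get confident (low-spread) predictions, while inputs that depend on uncertain weights get wide predictive distributions, which is the behavior the Bayesian example demonstrates.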