Content Recommendations- 6/6/2025 [Updates] - by Devansh
It takes time to create work that’s clear, independent, and genuinely useful. Your support helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and feed my crippling chocolate milk addiction. So if you believe in the mission, there’s likely a plan that fits (over here).
Every subscription helps me stay independent, avoid clickbait, and focus on depth over noise. I deeply appreciate everyone who chooses to support our cult.
Supporting this work doesn’t have to come out of your pocket. If you read this as part of your professional development, you can use this email template to request reimbursement for your subscription.
If you’d like to meet other members of our community, please fill out this contact form: https://forms.gle/Pi1pGLuS1FmzXoLr6
A lot of people reach out to me for reading recommendations. I figured I’d start sharing whatever AI papers/publications, interesting books, videos, etc. I came across each week. Some will be technical, others not really. I will add whatever content I found really informative (and remembered throughout the week). These won’t always be the most recent publications- just the ones I’m paying attention to this week. Without further ado, here are interesting readings/viewings for 6/6/2025. If you missed last time’s readings, you can find them here.
Reminder- We started an AI Made Simple Subreddit. Come join us over here- https://www.reddit.com/r/AIMadeSimple/. If you’d like to stay on top of community events and updates, join the Discord for our cult here: https://discord.com/invite/EgrVtXSjYf. Lastly, if you’d like to get involved in our many fun discussions, you should join the Substack group chat over here. Working on something fun and want to meet people from our cult? Fill out this form.
Ivan Landabaso shares excellent insights for founders, investors, and general technologists. He shares a wide variety of interesting tidbits here and there, so I’ve really loved following his stuff. Absolutely recommend checking him out, both on LinkedIn and on Substack here.
If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, some personal challenge you’re working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.
Curious about what articles I’m working on? Here are the previews for the next planned articles-
AI Market Report for May.
I provide various consulting and advisory services. If you’d like to explore how we can work together, reach out to me through any of my socials over here or reply to this email.
These are pieces that I feel are particularly well done or important. If you don’t have much time, make sure you at least catch these works.
Absolute must-read by James Wang, one of the clearest thinkers on Substack. His perspective as a deep tech investor is crucial. These reading recs are not ranked, but putting this first on our recommendations is deliberate. Look at the absolutely chilling conclusion he ends with-
“I’ve already been somewhat put off by how much my job in terms of deep tech investing has taken on a tinge — or much more than that — of national security and power. I’m not directly in DefenseTech, but it’s kind of hard to avoid any advanced technology, AI included, falling into that category. As this week showed, however… that likely isn’t going away anytime soon. Maybe not even within my lifetime.”
The next time I meet Ryan K. Rigney, I’m going to deck him in the schnoz. STOP MAKING SO MANY INTERESTING PIECES ABOUT GAMING. I’m already following too many random industries. Rigney charts Nintendo’s four-decade love affair with hardware exclusivity. Read it, then ask: how long can one company keep selling silicon-locked fun while the rest of the industry goes platform-agnostic, and what does that mean for AI’s own walled gardens?
“With the Switch 2 launching this week, I’ve been thinking a lot about the unique place Nintendo has in the games industry.
The Japanese games corp is one of just a few surviving companies that have played a steady role in the video game business since it began 50 years ago. And ever since the release of the NES, Nintendo has been running basically the same playbook. That is: Nintendo produces video game consoles, and those consoles are the only place you can play games made by Nintendo.
In a world where everything is an Xbox and even PlayStation exclusives like The Last of Us and Ratchet & Clank are showing up on Steam, Nintendo is now just about the only company that can afford to sell its games exclusively on its own hardware.”
Speaking of someone who drops masterpieces like he’s an Inter fan trying to cope with reality- meet Austin Lyons. The man gives a clear, data-driven walkthrough of the “AI-factory” moment: why reasoning models are pushing enterprises from pilot chatbots to real on-prem racks, how Nvidia/Dell are capturing that spend, and where AMD’s HBM play could slot in. It’s grounded in earnings numbers rather than slide-deck hype, making the heart of the story easy to trust and even easier to act on. Question for you- can anyone outside Nvidia generate tokens at scale without tripping over power, cost, or software-stack gaps?
“A year ago, if you asked me whether enterprise GenAI was real, I’d say maybe. There were surely many proofs of concept, but feasibility and value weren’t clear.
Now? It’s real.
Let’s walk you through what changed and what it means for Nvidia, Dell, and AMD.”
I’ve already told y’all that I’m a huge fan of this writer. Here’s a grounded teardown of the viral “15 tons of CO₂ per ton of lithium” meme. He tracks the citation chain, contrasts hard-rock, brine, and DLE pathways, and reframes the debate in kg CO₂ per kWh so you can judge battery emissions clearly, minus the clickbait.
“I glanced at the lithium extraction using sodium hydroxide described in this article. It seems interesting, but like every other process announced each week, it’s years away from even reaching the pilot stage, if it ever gets there at all.
Now, what I really want to look at is this:
“According to MIT, the process of mining one metric ton of lithium releases 15 metric tons of carbon dioxide.”
This 15-ton figure shows up a lot in anti-EV posts, which is usually a red flag that the person posting has learned most of what they know from a meme. And if you actually try to track down how they arrived at this 15 tons per 1 ton of lithium carbonate, you’ll quickly hit a dead end when it comes to data that support this.
The MIT article that’s often cited in posts and memes doesn’t include the math behind the number. Instead, it points to a 2020 BBC Future article titled “The new gold rush for green lithium.” But all that article does is cite a report created to promote a Direct Lithium Extraction (DLE) company, Vulcan Energy. The problem is there is no real data available in this report, no real comparisons, and no context.”
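To make the kg-CO₂-per-kWh reframing concrete, here’s a back-of-the-envelope sketch (mine, not the author’s). Both inputs are illustrative assumptions: the meme’s 15-ton figure is exactly what the piece disputes, and the ~0.85 kg of lithium carbonate equivalent per kWh is a commonly cited ballpark, not a vetted number.

```python
# Back-of-the-envelope: converting "tons of CO2 per ton of lithium carbonate"
# into kg of CO2 per kWh of battery capacity.
# ASSUMPTIONS (illustrative only, not vetted data):
#   - the meme's 15 t CO2 per t of lithium carbonate (disputed in the article)
#   - ~0.85 kg of lithium carbonate equivalent (LCE) per kWh of cell capacity

CO2_PER_TON_LCE_KG = 15_000      # 15 metric tons of CO2, in kg
LCE_PER_KWH_KG = 0.85            # kg of LCE per kWh of battery

# 15 kg CO2 per kg LCE * 0.85 kg LCE per kWh = kg CO2 per kWh
co2_per_kwh = CO2_PER_TON_LCE_KG / 1_000 * LCE_PER_KWH_KG

pack_kwh = 75                    # a typical mid-size EV pack, for scale
print(f"{co2_per_kwh:.2f} kg CO2/kWh -> "
      f"{co2_per_kwh * pack_kwh:.0f} kg CO2 for a {pack_kwh} kWh pack")
```

Running the numbers this way is exactly why the per-kWh framing is clearer: one headline figure and one chemistry assumption fully determine the result, so you can see immediately how fragile the meme’s claim is.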
This piece covers what has been one of my biggest gripes with the AI Safety movement- it refuses to address any of the real problems with AI today.
“Sometimes there may be an impression that the drivers of AI progress care only about its positive effects, and aren’t cautious enough about potential harm. Actually, at least on the face of it, this is not true. There is really a lot of talk about AI safety, it has become one of the big subject areas in AI conferences and most of the high-profile tech leaders, as well as governments, speak all the time about how they’re working to make sure AI is safe. Very few people would openly say that we should continue to develop it without any fear of negative consequences. On one hand, this is encouraging: most people seem to have a, mostly genuine, ethical sense. On the other hand, the safety efforts this work leads to do next to nothing to reassure me about the future. I don’t think anyone (or almost anyone) will deliberately or carelessly build things that are unsafe–the issue is not a lack of effort from the research community. The issue is that the effort has been misappropriated by a mix of causes of secondary importance, and borderline fantasies. As happens in many parts of life, the ‘what’ stage was rushed through, and the field devoted all its energy to the ‘how’ stage. That is, a lot of work has gone into solving problems under the umbrella of AI safety, without properly considering what the big problems are. The result is that the most dangerous applications of AI have been largely ignored by the AI safety movement.”
Very interesting read. Back to the drawing board with LLMs.
“Intermediate token generation (ITG), where a model produces output before the solution, has been proposed as a method to improve the performance of language models on reasoning tasks. These intermediate tokens have been called “reasoning traces” or even “thoughts” — implicitly anthropomorphizing the model, implying these tokens resemble steps a human might take when solving a challenging problem. In this paper, we present evidence that this anthropomorphization isn’t a harmless metaphor, and instead is quite dangerous — it confuses the nature of these models and how to use them effectively, and leads to questionable research.”
Very interesting watch on FDR. As a non-American, I have no real emotions toward him, but he seems like an interesting person. I found Jack’s concluding sentences very interesting- FDR’s vision for the future failed because he assumed the world would be run by men like FDR. Worth thinking about. Also, he did some very interesting financial engineering, which makes him a top politician in my books.
“Franklin D. Roosevelt was born with a silver spoon — then used it to pry America out of the Great Depression and slam the door on rising fascism. With a little economic hocus-pocus, a new deal, and a tree-planting army of broke twenty-somethings, this man pulled America out of two back-to-back once-in-a-century crises and rebuilt the country — and the world — in a way that lifted almost everyone up. And he only locked up US citizens without a trial about 80,000 times”
Very good set of experiments for OCR. Very useful to my work- and likely will be for you guys as well. Don’t confuse difficult AI research with useful AI research! Great work, Jason Dulai.
“ChatGPT’s camera functionality has always interested me. AI’s vision of vision is to be much grander (think robot eyes), but the fact that right now we have AI assistants that can look through your camera and perceive the world around you is already a massive step for technology.
With this interest, I wanted to expand into handwriting. You’ll see mine is trash to put it lightly, so I wanted to see if ChatGPT could A: Interpret it and B: Execute the prompts.
It’s time to ‘‘write’’ history with this experiment.”
Don’t care about the NBA, but this pattern is happening everywhere, including MMA. Given how much sport transcends boundaries for people, I think more should be done to keep sports accessible to the fans that built them. We can all learn from Bundesliga International GmbH here.
“In today’s NBA, the game is starting to feel less like a community and more like a luxury product. This video breaks down why NBA ticket prices are skyrocketing, how premium seating and VIP lounges are reshaping arenas, and why the average fan is getting priced out — especially during the NBA Playoffs.

We dive into:
- The rise of courtside lounges like those in Chase Center and Little Caesars Arena
- How NBA teams are removing regular seating to add high-end suites
- The impact of resellers and dynamic pricing on ticket accessibility
- Why Mark Cuban sold the Dallas Mavericks and what it says about the future of NBA ownership
- How small-market teams like OKC Thunder still maintain real fan energy

We also compare the NBA’s evolving business model to the airline industry’s first-class strategy, and examine how arenas are moving away from community hubs toward high-end commercial spaces. Whether you’re a longtime fan or just wondering why playoff tickets now cost as much as rent — this is the video for you.”
Substack decided to surface this masterpiece from Abhinav Upadhyay recently, and I’m really glad they did. So much good stuff.
“Most programmers work comfortably inside layers of abstraction: writing code, calling APIs, using tools, without needing to know what happens underneath. But systems-level thinking is about lifting the hood. It means understanding how things actually work, from source code all the way down to silicon.
This article is where that starts. We’ll build a concrete mental model of how a computer executes instructions, beginning at the hardware level with logic gates and circuits. From there, we’ll step through how those circuits form an ALU, how data moves through registers, and how a CPU follows instructions.
But that’s only part of the story. We’ll also look at how this hardware model shapes everything above it. How compilers turn code into machine instructions. How executables are structured. How the OS lays out a process in memory. None of these make full sense without the layer below.
If you want to understand systems, this is the foundation. You don’t need to memorize how every part works. What matters is building a model that helps you reason through the system when things break or behave in unexpected ways. That’s what systems-level thinking is about.”
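As a toy illustration of the excerpt’s bottom layer (my own sketch, not from the article), here’s how a few logic gates compose into a one-bit full adder and then a ripple-carry adder- the kind of circuit a simple ALU is built from:

```python
# Toy illustration: composing logic gates into an adder, the way a simple
# ALU is built up from circuits. Python booleans stand in for wires.

def AND(a, b): return a and b
def OR(a, b):  return a or b
def XOR(a, b): return a != b

def full_adder(a, b, carry_in):
    """One-bit full adder built only from the gates above."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def ripple_add(x, y, bits=8):
    """Add two integers by chaining full adders, least significant bit first.
    Overflow past `bits` is discarded, just like fixed-width hardware."""
    result, carry = 0, False
    for i in range(bits):
        bit, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= int(bit) << i
    return result

print(ripple_add(23, 42))  # 65
```

Nothing here is practical, of course- the point is the layering the excerpt describes: three primitive gates, a one-bit circuit, then a multi-bit unit, with each layer only making sense in terms of the one below it.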
A great overview by Sahar Mor. Don’t fully agree with the premise (I think Google is missing a lot before we can claim a true lead), but the information is great.
“This post captures my takeaways from attending Google’s flagship event, I/O 2025. It’s not a comprehensive announcement round-up. Instead, I’ve focused on the launches that matter most to anyone building or working with AI. I also share my perspective on what these moves mean for the broader AI ecosystem and founders, developers, and researchers alike.”
Interesting writeup by Augment Code. Would love to dig into this more.
“TL;DR: At Augment, we built a secure, personalized code indexing system.
The result: Context-aware AI that actually keeps up with real development workflows while protecting code security.”
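The excerpt doesn’t give implementation details, but for intuition, here is a toy version of the basic idea behind any code index- map identifiers to the files that contain them, so retrieval doesn’t require scanning the whole repo on every query. Everything below (the regex, the structure, the example repo) is my own illustrative sketch, not Augment’s design:

```python
import re
from collections import defaultdict

# Toy sketch of code indexing: an inverted index from identifiers to files.
# Real systems add embeddings, incremental updates, and access control;
# this only shows the core retrieval idea.

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_index(files: dict) -> dict:
    """files: {path: source text} -> {identifier: set of paths containing it}"""
    index = defaultdict(set)
    for path, text in files.items():
        for ident in IDENT.findall(text):
            index[ident].add(path)
    return index

repo = {
    "auth.py": "def check_token(token): ...",
    "api.py": "from auth import check_token\ndef handler(req): check_token(req.token)",
}
index = build_index(repo)
print(sorted(index["check_token"]))  # ['api.py', 'auth.py']
```

The “context-aware” part of a real coding assistant sits on top of something like this: given a query, look up the relevant files first, then feed only those into the model.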
“Who was Nimrod, the Bible’s mighty hunter and empire-builder? Some scholars think he’s more than myth — maybe even Sargon of Akkad, one of the great kings of Mesopotamia. In this video, we explore the mystery behind one of Genesis’s most enigmatic figures.”
If you liked this article and wish to share it, please refer to the following guidelines.
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
My grandma’s favorite Tech Newsletter-
My (imaginary) sister’s favorite MLOps Podcast-
Check out my other articles on Medium: https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819