Thinking Machines Just Announced More Human-Like AI
What are "Interaction Models"? Mira Murati is bringing a new twist to the future of AI experiences.
What if AI models could deliver experiences that were more human-like? As an avid user of Voice AI, I’ve often wondered about this. This week we finally got to see what Thinky is up to.
Thinking Machines Lab is the AI startup founded last year by former OpenAI CTO Mira Murati. Weirdly enough, it has been mercilessly poached by OpenAI, and yet it has still been able to attract some of Silicon Valley’s top AI talent. Thinky wants to move beyond the era of “turn-based” AI interactions, and I think they are on to something.
If they can pull it off, this will deliver far more real, human-like AI interactions in real time. Think about it: traditional AI models function sequentially, listening and then responding, whereas TML’s new model aims for simultaneous input processing and response generation, akin to a phone call.
What are Interaction Models?
While Lilian is telling a story, the interaction model can track when she is thinking, yielding, self-correcting, or inviting a response; there is no separate, purpose-built dialogue management system.
Lilian Weng is a Co-founder at Thinky.
Interaction Models are a new category of AI systems built on a micro-turn architecture that processes data in 200ms chunks. This could make AI interactions a lot more useful in the real physical and social world we actually live in. That’s more human-like for sure, Thinky! Murati is no deer in the headlights. The technical term for this is “full duplex,” and the company claims its model, TML-Interaction-Small, responds in 0.40 seconds, which is roughly the speed of natural human conversation and significantly faster than comparable models from OpenAI and Google.
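To make the micro-turn idea concrete, here is a toy sketch of what "re-evaluate every 200ms chunk" might look like. This is my own illustration, not TML's actual architecture: `full_duplex`, `backchannel`, and the constant are all hypothetical stand-ins. The point is that listening and speaking share every micro-turn instead of alternating whole turns.

```python
# Toy micro-turn loop: each 200 ms chunk is a chance to listen AND speak.
CHUNK_MS = 200  # hypothetical micro-turn size, per the announcement

def full_duplex(chunks, policy):
    """Walk the input stream one micro-turn at a time; the policy may
    emit output on any turn, so speaking never waits for end-of-turn."""
    timeline = []
    for turn, chunk in enumerate(chunks):
        out = policy(chunk, elapsed_ms=turn * CHUNK_MS)
        timeline.append((chunk, out))  # input and output share the same turn
    return timeline

def backchannel(chunk, elapsed_ms):
    # Naive example policy: interject when the speaker pauses (None = silence).
    return "mm-hmm" if chunk is None else None

timeline = full_duplex(["I was", None, "thinking..."], backchannel)
print(timeline)  # the interjection lands on the pause, mid-utterance
```

A turn-based model would only ever respond after the final chunk; here the backchannel fires *during* the pause, which is the behavioral difference full duplex is after.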
I really love this story! It doesn’t feel like traditional PR, it actually feels like useful technology.
A More Natural AI Interface is Emerging
Essentially, AI should serve us, but since 2023 we’ve been forced into bad interactions. Thinking Machines argues that the back-and-forth of current models forces human users to “contort themselves” to the interface. If you are a regular Voice AI user, you know exactly what I’m talking about.
Could we have Benevolent AI that actually looks out for us?
Instead of AI trying to scam us with pleasing words?
Tessa's quality of life has improved a lot with some nagging.
I actually found this pretty funny and would be fine with an AI that nags me in a good way.
So what would it be like to interact with an AI that isn’t so fake and contrived? Interaction Models have real potential to humanize AI in a way that I think is fairly positive. They are essentially a new kind of AI system built to enable real-time, native collaboration across audio, video, and text without relying on external scaffolding. You can almost feel the feminine touch in Thinky’s product focus. AI doesn’t have to be awful.
‘Full duplex’ simultaneous input/output processing
Thinking Machines is essentially helping to solve the "collaboration bottleneck" in how we relate to AI, in a more realistic manner.
“We think interactivity should scale alongside intelligence; the way we work with AI should not be treated as an afterthought. Interaction models let people collaborate with AI the way we naturally collaborate with each other—they continuously take in audio, video, and text, and think, respond, and act in real time.” - Thinking Machines Blog.
Capabilities of Interaction Models
To pull this off, Thinky actually had to work on a great number of things simultaneously:
Seamless dialog management. The model tracks implicitly whether the speaker is thinking, yielding, self-correcting, or inviting a response. There is no separate dialog management component.
Verbal and visual interjections. The model jumps in as needed depending on the context, not only when the user finishes speaking.
Simultaneous speech. The user and the model can speak concurrently (e.g. live translation).
This is a big one:
Our model interjects to save Alex's parents from his eccentric ideas.
Time-awareness. The model has a direct sense of elapsed time.
Simultaneous tool calls, search, and generative UI. While speaking and listening to the user, the model can concurrently search, browse the web, or generate UI, weaving the results back into the conversation as needed.
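That last capability is easiest to picture as plain concurrency. Below is a minimal asyncio sketch, again my own toy rather than TML's implementation: `speak`, `search`, and the query are hypothetical. The model keeps talking while a tool call runs, then weaves the result back into the reply.

```python
import asyncio

async def speak(words):
    """Emit words one at a time, yielding control so other tasks run mid-sentence."""
    spoken = []
    for w in words:
        spoken.append(w)
        await asyncio.sleep(0)  # cooperative yield: speech doesn't block the tool
    return spoken

async def search(query):
    """Stand-in for a real web search or tool call."""
    await asyncio.sleep(0)
    return f"results for {query!r}"

async def respond():
    # Launch the tool call first, without pausing speech...
    tool = asyncio.create_task(search("weather in Tirana"))
    # ...keep talking while it runs...
    filler = await speak(["Let", "me", "check", "that..."])
    # ...then weave the result back into the conversation.
    return filler + [await tool]

print(asyncio.run(respond()))
```

In a real full-duplex system the "speech" would be streamed audio and the tool call a network request, but the scheduling idea is the same: the conversation never blocks on the tool.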
A More Human-Like AI that “Comes Alive”
So my understanding is that these Interaction Models are a big deal because they make AI more useful and convenient for mass adoption.
These interaction models are therefore in constant two-way exchange with the user—perceiving and responding at the same time. Some domains take such interactivity as a given: the physical world demands that robotics and autonomous vehicles operate in real time. Audio full-duplex models (PersonaPlex, nemotron-voicechat, Seeduplex) are another example where interaction is bidirectional and continuous, according to Thinky’s blog. Others have also tried to explain Interaction Models in a neat way.
It took Thinky quite a while, and a lot of funding, to announce something, but it was well worth the wait. This is just a research preview, not an actual product yet. Thinking Machines Lab plans to roll out a limited research preview in the next few months, followed by a wider release planned for later this year.
It’s not clear why we would keep using chatbots like Grok or ChatGPT for voice if something better came along, and something better certainly seems to be happening here.
Thinky’s Interaction Models are State of the Art
TML-Interaction-Small is the first model that has both strong intelligence/instruction following and interactivity.
Towards a More Interactive AI?
The way Thinky is thinking about interactivity is truly impressive. TML-Interaction-Small is a 276-billion-parameter mixture-of-experts model designed to manage dialogue, presence, and immediate follow-ups at rapid speed. I don’t know where this is going exactly, but I rather like the direction.
TML could become one of those specialty AI research labs that deliver incredible real-world products, making LLMs feel less morbid and repetitive and improving on the annoying tendencies of legacy chatbots like ChatGPT. Their model outperforms OpenAI's GPT-Realtime-2 and Google's Gemini Live on interaction quality and latency benchmarks. I could also see China getting really good at this sort of direction. I really do see what a16z sees in them. I think Murati is also dignified and a great success story, in a world dominated by fairly sketchy men with unscrupulous moral standards.
The human and interface experiences of AI badly need an upgrade, and this tech obviously has implications for Ed-Tech, customer service, finance, and many other verticals, including the idea that AI agents will be involved in supervising some of the tasks we do. Voice AI that’s more collaborative also means more efficient vibe coding and working in general.
If adoption of Generative AI is an issue, as my research has shown it is, this could reduce some of that friction in real-world use cases. Many of the “OpenAI Mafia” splinter startups founded by former OpenAI employees seem to be aiming to pick up the research areas that OpenAI abandoned in its rather lacklustre pivot to product and Enterprise AI.
Thinky was Poached by OpenAI and Meta Mercilessly 😕
It seems Mark Zuckerberg and Sam Altman really did sort of “bully” Murati from a staff and talent perspective. Talent poaching is a dirty game, especially in an environment where OpenAI and Meta are themselves failing to keep up with other labs like Anthropic, Cursor and DeepMind.
The list is pretty considerable (I’m assuming the draw was more money):
Barret Zoph: [To OpenAI] Formerly the CTO of Thinking Machines Lab, Zoph returned to OpenAI in January 2026. His exit was particularly high-profile, involving a public spat where Murati cited "unethical conduct" while OpenAI's leadership defended his move as a planned return.
In fact the backstory of this one was really ugly:
Andrew Tulloch: [To Meta] A co-founder who left TML in October 2025. His move was widely reported due to a compensation package estimated to be worth $1.5 billion over six years—one of the most expensive talent acquisitions in history. Given that price tag, Meta’s execution in AI has been extremely underwhelming so far.
Luke Metz: [To OpenAI] A founding member who followed Zoph back to OpenAI.
Sam Schoenholz: [To OpenAI] Another founding researcher who left TML to rejoin OpenAI early in 2026.
Joshua Gross: [To Meta] A founding engineer who built Tinker (TML’s core API product). He joined Meta in March 2026 to lead engineering teams within their Superintelligence division.
There are at least a dozen more that Meta and OpenAI poached, but you get the idea.
Thinky has Elite Venture Capital Funding ✨
Andreessen Horowitz (a16z): Led the company’s massive $2 billion early-stage funding round.
AMD & Cisco: Key technology partners and investors that joined the initial $2 billion round.
Jane Street: The quantitative trading firm, known for its deep pockets and interest in high-performance computing, was a significant participant in the early financing.
NVIDIA: Participated as a strategic investor, later solidifying the relationship through a 2026 partnership to deploy one gigawatt of "Vera Rubin" computing capacity for the lab.
The Government of Albania: In a unique move, Mira Murati’s home country invested $10 million, which reportedly required a specific amendment to the Albanian national budget.
Sequoia Capital: Participated in early-stage discussions and joined the group of heavyweights backing the lab.
TML Had a Record Seed round of $2 Billion
So this is a lot of fairly high VC pedigree and industry participation. To have both AMD and Nvidia involved is rather rare, I have got to say. While it’s disappointing that TML has no live product yet after all this time, their market valuation is hard to pin down; at the very least, they want to raise at a valuation of $50 billion.
So what is TML exactly? A product company, a Sovereign AI company, something else? The company also holds the record for the biggest AI seed raise: a record-setting $2 billion seed round at a $12 billion valuation.
An AI Cohort of Inflated Seed Rounds
There’s obviously a huge premium on startups led by ex-OpenAI employees, just as Google DeepMind alumni are treated like AI royalty and in recent years command bizarrely high valuations. Somewhat comically, the original OpenAI itself isn’t doing so great.
Mission of TML
What is it? From their website: “Thinking Machines Lab is an artificial intelligence research and product company. We’re building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.”
accessibility
democratization of AI
personalized for unique needs
transparency
The Mission Statement on their website goes on:
“The scientific community’s understanding of frontier AI systems lags behind rapidly advancing capabilities. Knowledge of how these systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people’s abilities to use AI effectively. And, despite their potential, these systems remain difficult for people to customize to their specific needs and values. To bridge the gaps, we’re building Thinking Machines Lab to make AI systems more widely understood, customizable and generally capable.”
Improve product capability
Enable AI with more customization
Solve an important B2B pain point that could improve adoption in Enterprise AI
Science is better when shared
Of course, Thinking Machines Lab appears to be more of a B2B Enterprise AI company. How do you make AI work for everyone? Overall, I find TML has a more benevolent tone and hits some important sweet spots in terms of foundational alignment. There have been many high-profile OpenAI offshoots, and Thinking Machines Lab seems to go about building AI products that have meaningful impact.
While slow to arrive, Interaction Models seem to have the potential to contribute in an original and valuable niche. If I’m talking to an AI, I want it to feel better synced, with some semblance of “functional empathy” toward the user. 🧠 I’m glad to hear there are AI researchers who are thinking and building along these lines a bit more deeply.
It might take some time before Thinking Machines Lab makes steady revenue that scales. This is also because, in their early days, they are functioning more like an AI research lab, pairing an emphasis on rigorous engineering with creative exploration. There will be many AI startups in the future that build on what companies like Google DeepMind, OpenAI and Anthropic achieved.
We are literally witnessing AI history.
A Commitment to Open-Source and Collaboration is Implied
“We believe that we’ll most effectively advance humanity’s understanding of AI by collaborating with the wider community of researchers and builders. We plan to frequently publish technical blog posts, papers, and code. We think sharing our work will not only benefit the public, but also improve our own research culture.” - Thinking Machines Blog.
Kindly share this post if you think next-gen AI startups could be more benevolent, helpful and useful than the labs they came from, labs that sought to seize the day early on.
Let me know your impressions of this announcement, team, and product. Is this really useful?
More Examples
With the model's simultaneous speech capability, Horace has gotten a lot easier to work with recently.
The Announcement Video
My Bull Case is as Follows for this AI Research Product Lab
Thinking Machines Lab’s $2 billion funding round, Google Cloud deals for infrastructure, and Nvidia hardware partnerships give it access to cutting-edge GPUs and scaling capabilities, the foundations for serious AI product research for companies.
I like Murati’s track record as the former CTO at OpenAI, and the startup went noticeably downhill after she left. So there’s founder pedigree, even if many of the original co-founders no longer work there. I think the unique ideology will also attract like-minded talent. It’s clear Murati’s time at OpenAI shaped her sense of what impactful work actually looks like.
Thinky has a strong leaning toward open source and impactful AI products, which are a little more difficult to monetize. Sister labs in terms of alignment would, for me, be Cohere, Inflection AI and perhaps Mistral. The West doesn’t have many serious AI research labs that are commercial but also open-source oriented.
Post Interaction model impressions
Since Interaction Models are more agile in terms of time-awareness, interruptions, simultaneous speech/tools/UI generation, live translation and many other things, the applications are pretty far-reaching and numerous for use cases and companies all over the world. As with something like ElevenLabs, the total addressable market (TAM) for just this product is pretty large if they package it well. For all these reasons I endorse, and have more confidence in, the research lab than I would otherwise.
We have to remember that it’s still early days for these technologies, which went live in 2023, to refine themselves and become more useful. Most would agree this is occurring fairly rapidly. The mood for Thinky is high-conviction optimism bordering on the speculative. I remember back when many didn’t understand how Anthropic would make money, but when you gather enough smart people together, they find a way. Therefore I’m cautiously optimistic about the path forward for Mira Murati and her efforts to humanize AI and make AI products that are more accessible, collaborative, and beneficial to the maximum number of developers, people and companies. TML remains one of the most sought-after places to work for many AI researchers due to the special alignment of the startup.
Tidbit May, 2026: Weiyao Wang, an eight‑year Meta veteran who worked on multimodal perception and open‑world segmentation research, left the company last week to join startup Thinking Machines Lab. His move comes as TML secures major cloud infrastructure and continues a high‑profile scramble for AI research talent. I believe the company has what it takes to continue to attract some of the most specialized AI talent in the world to work on interesting problems.
Thanks for reading!