The name of the game is "context refinement" - some call it context engineering. I like to think that the real AI-first engineers are now doing context refinement to "line up shots"; once you have built enough confidence, that's when you take the shot. Frameworks like amplifier, speckit, and BMAD all guide engineers through this refinement. Yet all AI coding agent frameworks so far are built for single-player productivity: they wrap Claude Code or Codex in a framework that helps you supplement your own skill set with that of an army of AIs. Our next task is to figure out how to bring real experts into the loop with these AI agents to lift the confidence level.
Over time, the investment made in the context and knowledge we persist will become more and more valuable. That way, even non-experts can step into the project and contribute at a much higher level.
One more fun thing that we should consider is that we will very much depend on having MANY model providers with these kinds of tools because we will rely on them reviewing each others' work.
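The cross-review idea above could be sketched as follows. This is a minimal illustration, not a real implementation: the provider names are placeholders, and `stub_review` stands in for an actual call to each provider's model. The key property it shows is that a patch is only approved when every provider *other than its author* signs off.

```python
# Hypothetical sketch: patches from one model provider are reviewed by
# every OTHER provider, and a change ships only on unanimous approval.

def cross_review(patch, providers, review_fn):
    """Collect verdicts on `patch` from every provider except its author."""
    author = patch["author"]
    verdicts = {
        name: review_fn(name, patch)
        for name in providers
        if name != author  # never let a model approve its own work
    }
    return {"approved": all(verdicts.values()), "verdicts": verdicts}

# Stub reviewer: in practice this would call the provider's model and
# ask it to critique the diff. Here it just flags unfinished work.
def stub_review(provider, patch):
    return "TODO" not in patch["diff"]

providers = ["claude", "gpt", "gemini"]  # illustrative names only
patch = {"author": "claude", "diff": "fix off-by-one in pager"}
result = cross_review(patch, providers, stub_review)
print(result["approved"])  # approved by both non-author reviewers
```

The point of requiring multiple independent providers, rather than two instances of the same model, is that shared blind spots in one model family are less likely to survive a reviewer trained elsewhere.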
I hope you will be able to expand on this in future posts, perhaps with wire diagrams to show components and flow.
"I think this is coming for most if not all knowledge work, too"
My meta-concern here is the same as it has been since I had early OCTO access to models within MSFT, like GPT-V (Vision, **not** 5 lol). If the oligopoly, which is now quite obviously the oligarchy in the US under Trump, maintains control over GenAI, who will benefit? And at whose cost?
Over the last three decades, worker productivity has increased while worker pay has not risen; the gains have accrued almost entirely to capital owners. I am one of those owners and have benefited, and I absolutely hate that my wealth grows while humans suffer.
It's easy to state "humans will figure it out," as they did with the industrialization of farming. But it's been 100 years: it took decades to replace human labor with machines, and we are still not able to manage that disruption (happy to provide facts - soybeans, ffs). This transition from "highly paid professionals" to "free AIs" will take 10 years. Consider the aggregate income loss.
I very much respect your wisdom and insights about technology and business. I have always enjoyed your insights, even when internal to MSFT. How do you think society will manage the destruction of its highest-paying jobs? Do those controlling the AIs that replace humans owe a tax to replace the income they are destroying?
For real - I'm in Seattle, happy to meet irl
As an MS employee, I find it hard to believe MS security would be comfortable with engineers shipping code to production without reviewing it and knowing how it works. Can you please elaborate on how these teams are doing this while adhering to Microsoft's numerous security, privacy, compliance, and accessibility commitments?