AI

AI employees in the future

Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.

Remind me to check in on how this is going with Anthropic, which currently has over 100 open positions on its careers page.


Quote Citation: Sam Sabin, “Exclusive: Anthropic warns fully AI employees are a year away”, Apr 22, 2025, https://www.axios.com/2025/04/22/ai-anthropic-virtual-employees-security

AI all the way down

Artificial intelligence, as it exists and is useful now, is probably already baked into your business's software supply chain. Your managed security provider is probably using some algorithms cooked up in a lab to detect anomalous traffic, and here's a secret: they didn't do much AI work either. They bought software from the tiny sector of the market that actually does need to employ data scientists.

I think this is the most responsible take on AI. Not everyone needs to build it to leverage it. Also amazing post title.

No horse, all cart

Whenever a new technology is invented, the first tools built with it inevitably fail because they mimic the old way of doing things.

I think I read elsewhere that the true power of AI will come when it finds its application niche, not in writing emails.


Quote Citation: Pete Koomen, “AI Horseless Carriages”, April 2025, https://koomen.dev/essays/horseless-carriages/

AI driven economy - but not fewer hours

AI is creating new work that cancels out some potential time savings from using AI in the first place.

Adoption of AI is going gangbusters, but results in the marketplace aren't dramatic. The best case I've seen is that AI is like a lot of other automation: it frees time for more work, not less.


Quote Citation: Thomas Claburn, “Generative AI is not replacing jobs or hurting wages at all, economists claim”, Apr 29, 2025, https://www.theregister.com/2025/04/29/generative_ai_no_effect_jobs_wages/

AI use amongst non-technology people

A real-estate lawyer might have provided a better analysis, I thought—but not in three minutes, or for two hundred bucks. (The A.I.’s analysis included a few errors—for example, it initially overestimated the size of the property—but it quickly and thoroughly corrected them when I pointed them out.)

There’s a lot going on in this article, but the point is that ChatGPT and its ilk can summon up whatever answer you guide them toward. Whether that output is actionable should be judged not against its relative value but against perceived expertise. Relying on an LLM for the question of whether or not to sell a home feels foolish to me. Regardless, AI is here to stay.

Takes more than 'just use AI' to build software

Developer frustrations with AI mandates often surface due to their being handed down by company leaders who don’t have close visibility into engineering workflows. Developers describe executives instituting OKRs and tracking AI usage without any regard for whether it’s actually helping, let alone where it may be making things worse. Code acceptance rate (how often developers accept the code suggestions an AI tool makes) is a popular adoption metric, but some argue it’s a poor measure because it counts people accepting suggestions that may be problematic.

Copyright law, AI Prompting and output ownership

Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce.

Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Publisher Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water on an overheating planet.

Business Adoption of LLM for writing

By the end of the period we analyzed, in the financial dataset we estimate about 18% of the data was generated by LLM, around 24% in company press releases, up to 15% for young and small companies’ job postings, and 14% for international organizations.

Hard to say how accurate this is, as I don’t know that AI detection models are that reliable. But regardless of the exact adoption rate, there is a surge of usage followed by a plateau, reflecting that not everything can be solved by AI.

AI Agents rely on expertise planners

The core idea is to separate the process into distinct components: a Planner, an Evaluator, and an Executor. The Planner generates a plan based on the user’s query. The Evaluator validates the generated plan. The Executor only executes plans that have been validated, ensuring that only sound plans are carried out.

And I guess the human rubber-stamps it? Nowhere is controlling for mistakes mentioned.


Quote Citation: Cedric Chee, “The DNA of AI Agents: Common Patterns in Recent Design Principles”, Dec 24, 2024, https://cedricchee.com/blog/the-dna-of-ai-agents/
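The Planner/Evaluator/Executor separation described in the quote can be sketched in a few lines of Python. This is my own minimal illustration, not code from Chee's post: the class names follow the quote, but the toy planning and validation logic is invented for the example (a real agent would back the Planner with an LLM call).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    steps: List[str]

class Planner:
    """Generates a plan from the user's query (stubbed; a real one calls an LLM)."""
    def plan(self, query: str) -> Plan:
        return Plan(steps=[f"search: {query}", "summarize results"])

class Evaluator:
    """Validates a plan before anything runs. Toy rules: non-empty, no destructive steps."""
    def validate(self, plan: Plan) -> bool:
        return bool(plan.steps) and all("delete" not in step for step in plan.steps)

class Executor:
    """Executes only plans the Evaluator has approved."""
    def execute(self, plan: Plan) -> List[str]:
        return [f"done: {step}" for step in plan.steps]

def run_agent(query: str) -> List[str]:
    plan = Planner().plan(query)
    if not Evaluator().validate(plan):
        raise ValueError("plan rejected by evaluator")
    return Executor().execute(plan)
```

Note that even in this sketch, the "ensuring only sound plans are carried out" guarantee is only as good as the Evaluator's rules, which is exactly the gap the commentary above is poking at.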