AI's Impact on the Workplace: Are Our Fears Justified?

As AI technology evolves, concerns about job replacement and digital replicas of employees grow; this article explores the validity of these anxieties.

Many white-collar workers are increasingly anxious about being replaced by AI. Specifically, they fear being distilled into a digital version of themselves, packaged as a “××.skill”, and then optimized out of their jobs.

Recent reports describe a game media company in Shandong that trained a former employee’s data into an AI digital twin, which continues to work under that employee’s name (with their consent). The case has sparked considerable debate, and the rise of the “Colleague.skill” project, which claims to mimic employees’ work styles and communication tones, has intensified these fears among professionals.

However, such anxieties may be unwarranted.

The “digital twin” here is, in essence, a skill package. In late 2025, the AI company Anthropic launched the Skills feature for Claude, enabling it to perform specific tasks in a repeatable way. By early 2026, OpenClaw had gained global popularity, further spreading the concept of skill packages. Essentially, a skill is an engineered folder containing instructions, scripts, and resources that improve an AI’s performance on a specific task.
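As a concrete illustration, a minimal skill folder might be laid out as follows (the folder and file names are a hypothetical sketch, not taken from any real project):

```text
invoice-summary.skill/
├── SKILL.md          # when to use the skill, plus step-by-step instructions
├── scripts/
│   └── extract.py    # helper the model can run for deterministic steps
└── resources/
    └── template.md   # output format the instructions reference
```

The model reads the instructions first and pulls in the scripts and resources only when the task calls for them; nothing about the folder itself is intelligent.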

For example, consider a merchant who wants to integrate a payment platform. Previously this was cumbersome: developers had to select compatible products, read technical documentation, and write and debug code, a process that often took days or weeks. LLM-assisted coding improved matters, but still demanded detailed prompting. Now, with the Skills feature, payment platforms are standardizing this integration into skill packages that encapsulate security rules and business logic in a form the AI can use directly. Developers can implement payment functions quickly and safely by simply describing their needs in natural language.

Following the rise of OpenClaw, the idea of “refining colleagues” gained traction. It originated with an open-source GitHub project called “Colleague-Skill,” which claims to “turn cold farewells into warm skills” and welcome the departed into a “cyber afterlife.” As one user put it: “Your colleague has been optimized away, but their skill remains.”

The process involves collecting workplace data such as messages, documents, and emails from colleagues, combined with subjective descriptions to generate an AI digital twin that mimics the employee in two ways: work mode (coding style, code review habits, common practices) and personal habits (communication style, interpersonal behaviors). It’s important to note that while the coding style may belong to the colleague, the actual coding ability depends on the underlying model’s capabilities.
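Mechanically, such a twin is little more than a long system prompt wrapped around a general-purpose model. A minimal sketch of the assembly step, with all names and the prompt format invented for illustration:

```python
# Hypothetical sketch: assemble a "digital twin" persona prompt from
# collected workplace data. The underlying model supplies the actual
# ability; this step only captures surface style.

def build_twin_prompt(name, messages, style_notes):
    """Combine chat samples and subjective style notes into a system prompt."""
    samples = "\n".join(f"- {m}" for m in messages[:20])  # cap the sample size
    notes = "\n".join(f"- {n}" for n in style_notes)
    return (
        f"You are a digital twin of {name}.\n"
        f"Imitate the tone and habits shown in these message samples:\n{samples}\n"
        f"Observed habits to preserve:\n{notes}\n"
        "Answer work questions the way this person would."
    )

prompt = build_twin_prompt(
    "Alex",
    ["LGTM, but please add a test for the edge case.",
     "Let's keep the function under 30 lines."],
    ["prefers short replies", "always asks for tests in code review"],
)
print(prompt.splitlines()[0])  # You are a digital twin of Alex.
```

Note what is missing: nothing here encodes *why* Alex asks for those tests. The prompt reproduces habits; the judgment behind them stays with the person.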

Take social-media copywriting as an example. The workflow runs: first, discuss the product’s features, selling points, and target audience with the client; second, research competitors on platforms like Xiaohongshu, Weibo, and Douyin; third, generate titles and copy in a range of styles. A human doing this is limited by time and energy; AI can churn out dozens of versions quickly.
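The three steps above can be sketched as a small pipeline. Everything here is hypothetical: the brief, the stubbed research step, and the template-based generator, which in a real skill would hand the final step to an LLM:

```python
# Hypothetical sketch of the three-step copywriting workflow.
# The research and generation steps are stubbed; a real skill
# would search the platforms and prompt a model.

BRIEF = {                      # step 1: gathered from the client
    "product": "thermos cup",
    "selling_points": ["keeps drinks hot 12h", "leak-proof lid"],
    "audience": "commuters",
}

def research_competitors(product, platforms):
    """Step 2: stand-in for searching each platform for rival copy."""
    return {p: f"top posts about {product} on {p}" for p in platforms}

def generate_titles(brief, styles):
    """Step 3: one title per style; a real skill would prompt an LLM here."""
    point = brief["selling_points"][0]
    return [f"[{style}] {brief['product'].title()}: {point}" for style in styles]

refs = research_competitors(BRIEF["product"], ["Xiaohongshu", "Weibo", "Douyin"])
titles = generate_titles(BRIEF, ["emotional", "listicle", "review"])
print(len(refs), len(titles))  # 3 3
```

The speed advantage is obvious from the structure: adding a style or a platform is one more list entry, not another afternoon of work.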

The essence of skills is not merely technical knowledge or cognitive ability, but rather a set of experiences and workflows that can be translated into written instructions. While AI can follow templates to produce similar outputs, the quality remains a question.

This brings us to tacit and explicit knowledge. In 1958, the philosopher Michael Polanyi drew the distinction between explicit knowledge, which can be fully articulated in language and symbols, and tacit knowledge, which resists articulation because it is bound up with personal experience and judgment. A swimming tutorial can break every stroke into steps, but actually mastering swimming requires bodily experience that words cannot fully convey.

To some extent, large models have begun to tap into the realm of tacit knowledge. However, whether AI can surpass human capabilities remains uncertain. Currently, not all job positions can be easily replaced by AI.

In fields like finance, politics, and technology commentary, readers who regularly engage with this content can easily distinguish AI-generated articles from those written by experienced authors. Similarly, while AI can produce seemingly competent social media content, it often falls short compared to skilled individuals.

Furthermore, projects like “Zhang Xuefeng.Skill” and “Buffett.Skill” claim to embody real human knowledge and logic, yet they cannot replace the nuanced understanding derived from tacit knowledge.

Recognizing this distinction can alleviate concerns about being “refined” or “optimized.”

One primary concern is whether employees have the right to refuse to have their chat records, emails, and documents used as training data for a digital twin. Content an employee never shares, such as private techniques and personal messages, is out of the company’s reach, so there is nothing to “refine.” Conversely, it is hard to refuse refinement of content that was already shared in the course of work, such as submitted documents and group chat records.

These materials are part of the employee’s work output, physically residing on company servers and colleagues’ devices. Their ownership is generally accepted as belonging to the company, since they were produced in exchange for salary, much like copyright law’s “work for hire.” If companies did not own the rights to work documents, the stability of ordinary commercial arrangements would be thrown into doubt.

Moreover, this touches on how large models actually operate. The skills and processes recorded at work serve merely as prompts; the content a model produces draws on all of its training data. By analogy, a person with twelve years of schooling who writes a reflection on a book is not drawing solely on that book but on their entire education. The book inspires the reader, who pays for the book, not for the inspiration. Likewise, you cannot stop colleagues from learning skills out of your public work documents and chat records, and the same applies to AI.

Can companies require employees to hand over their private work know-how? As long as both parties agree, this is a request for work output rather than an invasion of privacy.

In reality, individuals rely on tacit knowledge to resist “refinement.” This tacit knowledge accumulates over years of experience, and even if AI analyzes all work records, it may struggle to fully grasp and replicate it. If a job does not require tacit knowledge, it is inherently easy to replace, whether by AI or someone familiar with explicit knowledge.

This presents a paradox: if someone fears being “refined,” their concern should not be about AI replacement but rather that any individual could easily replace them.

We are in an era of rapid AI proliferation. On one hand, AI really is reshaping the world; on the other, we are also in a self-media era where novelty, fear, and anger drive engagement and situations get exaggerated. “Refining colleagues” hits all three nerves: the novelty of the idea, the fear of replacement, and the suspicion of infringement. To a large extent, then, it is a tempest in a teacup, at least in the short term, where the threat has been overstated. In the long term, it may yet be the butterfly that triggers a storm. Who knows?
