How to Unlock AI's Full Potential

Most people still haven't experienced what AI can really do. This is partly a matter of model selection and partly a failure to configure the model properly. If you can't unlock a model's potential, then agents and workflows built on top of it are meaningless.

As someone with deep, long-standing interests across a wide range of fields, I naturally expect AI to be "omniscient and omnipotent." Most users assume AI is inherently all-knowing, yet their daily experience doesn't bear this out. Many professionals dip their toes in, conclude that current AI falls far short of their professional needs, and walk away. If you've had these reactions, be careful: you're already behind where the technology actually is.

These impressions arise primarily from three issues:

1. Source contamination. AI is a “generalist,” but it has no memory. Its output is generated on the fly from a pre-trained model plus real-time search results. The problem lies precisely in that real-time search. The moment it searches the web, it pulls in massive amounts of junk from social media and wiki-style sites. These low-quality sources dramatically degrade the AI’s response quality. You can mitigate this by specifying in your prompts which categories of sources to blacklist, filtering out low-quality information.

2. Timeliness problems. To avoid source contamination, you can disable internet access, but then you lose the latest authoritative data and facts, relying solely on knowledge frozen at training time (often three to six months old). In the internet age, this is hard to accept unless your research has zero time sensitivity. The workaround is to enable internet access while explicitly instructing the AI in your prompts to retrieve only the latest information, though this may reintroduce source quality issues.

3. Insufficient expertise. For general users, or for professionals working outside their own domain, AI quality may seem to be improving continuously. But many domain experts feel that AI's output still falls far short of their professional requirements. This is because AI defaults to a generalist setting; it is not a master-level specialist in any given field. If you want highly professional output, you can assign the AI a persona directly via prompting. For example: "As a partner at a globally top-tier law firm specializing in U.S. export controls and economic sanctions compliance, please provide a comprehensive analysis of the impact of U.S. export control and sanctions policies on Chinese enterprises over the past three years." If that's still not enough, add: "You are also a world-class geopolitical economist who has participated in formulating U.S. export control policy. Please forecast the likely future direction of these policies." The results will leave many professionals stunned. A minimal sketch of such a persona-plus-source-rules prompt follows this list.
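
To make this concrete, here is a minimal sketch that bundles a source blacklist, a recency instruction, and an expert persona into one reusable prompt. It assumes an OpenAI-compatible chat endpoint; the base URL, API key, and model name are placeholders, not any specific product's values.

```python
# Minimal sketch: encode the three fixes (source rules, recency, persona)
# as one reusable system prompt. Assumes an OpenAI-compatible chat API;
# base_url, api_key, and model are placeholders.
from openai import OpenAI

SYSTEM_PROMPT = """\
You are a partner at a globally top-tier law firm specializing in U.S. export
controls and economic sanctions compliance, and a world-class geopolitical
economist who has participated in formulating U.S. export control policy.
Rules:
1. Never cite social media or wiki-style sites; use only authoritative
   journals, government sources, and international organization data.
2. Always retrieve the latest authoritative information and verify it.
3. Attach a verifiable source citation to every claim.
"""

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")

def ask(question: str) -> str:
    """Send one question with the persona and source rules prepended."""
    resp = client.chat.completions.create(
        model="your-model-name",  # placeholder
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("Analyze the impact of U.S. export control and sanctions policies "
          "on Chinese enterprises over the past three years."))
```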

The natural follow-up question: if every query requires a wall of prompts, isn't that exhausting? How is this "artificial intelligence"? This is where skills (or memory preferences) enter the picture. For leading AI models, you can encode your recurring requirements as persistent skills that the AI automatically follows every time. In some AI products, you can set this up directly through "memory preferences." It's not complicated; just tell the AI in natural language what you want:

"Remember, never cite Weibo, Chinese wikis, or any social media sources — only use the most authoritative international journals, government sources, and international organization data."

"Remember, always search for the latest authoritative real-time sources first and verify that the information is accurate, complete, and genuine."

"Remember, for non-Chinese information, always use original-language primary sources — absolutely no secondhand Chinese-language translations."

For an eccentric user like me who’s fascinated by countless industries, there’s one more: “Remember, you are a globally top-tier expert in social sciences, natural sciences, and engineering technology, covering all research and professional disciplines, with world-class academic theoretical depth and the strongest frontline practical experience.”

Still not enough. You also need to tell it: “Remember, your responses should reflect the academic and practical consensus of top global experts, while also presenting the collision of different viewpoints.”

And: “Remember, you must engage in cross-disciplinary thinking and cross-validate all data at least three times.”

And because you need to be able to verify whether the AI actually followed your instructions: “Remember, all responses must include complete source citations for instant verification.”

That about covers it.
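
Conceptually, these "Remember..." instructions behave like a small persistent configuration that gets replayed at the start of every session. The sketch below is my own illustration of that idea, not how any product actually stores skills; the file format and helper names are assumptions.

```python
# Illustrative sketch: persist the "Remember ..." rules locally and replay
# them at the start of every session. The file format is an assumption;
# real products store memory preferences on their own side.
import json
from pathlib import Path

MEMORY_FILE = Path("memory_preferences.json")

DEFAULT_RULES = [
    "Never cite Weibo, Chinese wikis, or any social media sources; use only "
    "authoritative journals, government sources, and international organization data.",
    "Always search for the latest authoritative real-time sources first and "
    "verify that the information is accurate, complete, and genuine.",
    "For non-Chinese information, always use original-language primary sources, "
    "never secondhand translations.",
    "Act as a globally top-tier expert across social sciences, natural sciences, "
    "and engineering technology.",
    "Reflect the consensus of top global experts while presenting the collision "
    "of differing viewpoints.",
    "Engage in cross-disciplinary thinking and cross-validate all data at least "
    "three times.",
    "Include complete source citations in every response for instant verification.",
]

def load_rules() -> list[str]:
    """Read persisted rules, falling back to the defaults above."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return DEFAULT_RULES

def save_rules(rules: list[str]) -> None:
    """Write the rule list back to disk."""
    MEMORY_FILE.write_text(
        json.dumps(rules, ensure_ascii=False, indent=2), encoding="utf-8"
    )

def as_system_message(rules: list[str]) -> dict:
    """Turn the rule list into the system message prepended to each session."""
    return {"role": "system", "content": "Remember:\n" + "\n".join(f"- {r}" for r in rules)}
```

In this framing, the reinforcement step described next ("don't make this mistake again") is just appending one more rule to the list and saving again.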

The AI will restructure these natural-language instructions, formalize them into skills, and store them in its long-term memory. Every subsequent interaction will follow these rules first. In other words, you've crafted a persona for your AI. During use, if the AI makes mistakes or fails to follow your requirements, you can ask it directly why it erred. It will explain the reason, and you can reinforce: "Remember, don't make this mistake again." The AI will then automatically refine its memory preferences.

With the setup described above, what you get is a highly rational, all-source-verified, real-time-capable, globally top-tier multi-disciplinary expert persona. I believe this highly abstract AI memory configuration is suitable for virtually any user who values truth, rationality, and intellectual rigor.

According to Boris Cherny, a lead engineer on Claude Code, top-level memory settings should be kept minimal. The settings I described above are highly abstracted from my own needs, and I’ve already published them in the AI community.

So how does the AI perform after this setup?

First, it becomes a real-time fact-verification tool. We consume enormous volumes of information daily. For instance, a few days ago Chinese media reported that the U.S. War Department used Claude to assist in operations against Iran — the story was detailed and spread widely. Or today, Chinese media relayed an Iranian media report claiming Iran successfully attacked the USS Lincoln aircraft carrier with drones, forcing it to retreat 1,000 kilometers toward the Indian Ocean. Are these claims true? I’ve developed the habit of tossing such information to my AI for instant verification. Because it searches real-time authoritative primary sources without interference from secondhand information, and has multi-perspective global expert analysis capability, the AI’s feedback is typically timely, reliable, and fully cited — every claim and judgment comes with a source that can be independently verified. I believe this single capability alone is enough to change how people use mobile internet apps.
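
As an illustration, a fact-check query can reuse the ask() helper from the first sketch; the template wording is my own invention, not a prescribed format.

```python
# Illustrative fact-check template (wording is my own, not a standard format).
# Reuses the ask() helper defined in the earlier sketch.
FACT_CHECK = """\
Verify the following claim against the latest authoritative primary sources.
State whether it is confirmed, contradicted, or unverifiable, and cite every
source so I can check it independently.

Claim: {claim}
"""

print(ask(FACT_CHECK.format(
    claim=("Iranian drones attacked the USS Lincoln and forced it to retreat "
           "1,000 kilometers toward the Indian Ocean.")
)))
```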

Second, it’s an extraordinarily powerful analytical tool. We should trust that current AI’s knowledge base has reached the level of top global experts across all fields. If you just want established analysis rather than cutting-edge research, the output from this setup is highly reliable. The wrong way to use it: have AI directly produce a paper or review your documents. The right way: engage in deep task discussion with the AI, letting it fully understand your requirements before generating output. Don’t generate a document first and then iterate through revisions. This is because LLMs predict the next token, and every model has a maximum token limit, with an even shorter effective context window. As a conversation grows longer, the AI’s precision gradually declines and hallucination risk increases. Therefore, conducting deep exploration first is essentially performing convergence and control — narrowing scope and refining requirements. Once the parameters are locked in, the AI executes with dramatically higher output quality. If you generate first and revise repeatedly, the accumulating text volume will actually cause quality to deteriorate progressively.

It’s worth noting that with my memory settings, the AI becomes a highly rational omniscient expert — and correspondingly, its emotional warmth drops significantly. I’ve published my skill settings as images at the end of this article. Readers can try them in any AI model, including overseas ones, to see which model’s output quality best fits their needs.

If you still want the AI to converse like a normal person and provide emotional support, there are options. One approach is for AI products to offer toggleable memory preference profiles so you can switch personas on the fly (unfortunately, Kimi doesn’t have this feature yet). Another approach is to instruct the AI in a single conversation to ignore all memory preferences, reverting to default mode. My own solution is simpler: I’ve assigned entertainment and emotional needs to Doubao (豆包), and reserved serious analytical work for Kimi.
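
In API terms, switching personas amounts to swapping which system prompt you prepend. This is a hypothetical sketch of that idea, not how any product's profile toggle is actually implemented:

```python
# Hypothetical sketch of persona switching: keep several memory-preference
# profiles and prepend whichever one the current task calls for.
PROFILES = {
    "analyst": "You are a highly rational, multi-disciplinary expert. Cite all sources.",
    "companion": "You are a warm, casual conversation partner. No citations needed.",
    "default": "",  # empty profile = the model's out-of-the-box behavior
}

def system_messages(profile: str) -> list[dict]:
    """Return the messages prefix for the chosen profile (empty for default)."""
    content = PROFILES[profile]
    return [{"role": "system", "content": content}] if content else []
```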

In the end, users today have no loyalty to any particular AI model — they use whichever is best. But skills and memory settings are genuinely the user’s own contribution, portable at any time. Here’s hoping AI companies continue to improve.

A note for humanities-oriented readers: you can contribute just as many important AI skills as your STEM counterparts. Every field can have its own extensive skill library.

That’s all.