Published on 06 March 2026

Artificial intelligence (AI) has rapidly become an everyday companion for professionals across all sectors. Whether drafting reports, analyzing data, preparing presentations, or sparking new ideas, AI now supports a wide range of tasks. Because these systems respond directly to the prompts we give them, the way we formulate our questions, structure our context, and guide the model increasingly determines the quality, fairness, and reliability of the outputs we obtain. Prompting has, in effect, become a new and essential professional skill that shapes the effectiveness and quality of the work we deliver.

With AI so deeply embedded in daily workflows, these issues become organizational risks, not just individual mistakes. Inconsistent prompting practices can lead to operational inefficiencies, loss of trust, reputational damage, and even compliance challenges. These risks are not inevitable. They can be reduced through simple, structured prompting techniques that every professional can learn.

While AI usage has grown dramatically, understanding of the technology has not kept pace. What stands out is that people genuinely care about using AI responsibly and want to learn how. According to KPMG's Trust, attitudes and use of artificial intelligence: A global study 2025, the majority of people have not received formal AI training. Only 39% report having any training at all; nearly 48% say they have limited knowledge of AI, and just 21% feel highly knowledgeable. At the same time, interest is high: in many emerging economies, over 90% of people want to learn more about AI.

This gap between heavy usage and low literacy contributes to widespread worries about AI’s reliability. The same study shows that 68% of people are concerned about bias, 69% worry about the environmental impact of AI, 82% fear misinformation, and 77% are concerned about inaccurate outcomes. These concerns show up in daily work: there are now numerous cases of fabricated citations appearing in corporate documents, AI-generated statements being mistaken for fact, and public figures sharing hallucinated content.

This is the purpose of the Responsible Prompting framework: a practical, accessible approach that helps people produce AI-generated outputs that are accurate, fair, and resource-efficient. At KPMG, we care about building the skills and confidence people need to use AI responsibly.

Our tips to reduce the limitations of AI systems through prompting

To appreciate why responsible prompting matters, it is important to understand why (generative) AI behaves the way it does. AI does not “think” like humans. It does not store and retrieve facts like a database, nor does it understand meaning or have emotions. Instead, it generates predictions based on patterns. Large Language Models (LLMs), a subset of AI, have been trained on huge amounts of data and use these patterns to predict the words most likely to follow. This creates three recurring risks:

  • Hallucinations (confident but false answers)
  • Bias (reproduction of unfair patterns)
  • Ecological impact (high computational and energy usage)

These risks, and the good practices that address them, are visualized in the figure below, which introduces the three pillars of our Responsible Prompting framework: Design for Accuracy, Design for Fairness, and Design for Efficiency.

[Figure: The three pillars of the Responsible Prompting framework: Design for Accuracy, Design for Fairness, and Design for Efficiency]

Hallucinations: When AI sounds confident but is completely wrong

Hallucinations occur when AI produces information that looks polished and credible but is factually incorrect or entirely invented. Because AI predicts the likely next word rather than retrieving facts, it can generate convincing explanations, invented statistics, or fabricated citations.
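One common accuracy-oriented prompting tactic is to ground the model in supplied context and give it explicit permission to say it does not know. The sketch below illustrates this; the template wording is an illustrative assumption, not official KPMG guidance, and the sample context is invented for the example.

```python
# Illustrative "Design for Accuracy" prompt template: restrict the model
# to the provided context, allow "I don't know", and ask for support.
ACCURACY_TEMPLATE = (
    "Answer the question using ONLY the context below.\n"
    "If the context does not contain the answer, reply 'I don't know'.\n"
    "Quote the part of the context that supports each claim.\n\n"
    "Context:\n{context}\n\n"
    "Question:\n{question}"
)

# Hypothetical example values, for illustration only.
prompt = ACCURACY_TEMPLATE.format(
    context="Q3 revenue was EUR 1.2m, up 8% year on year.",
    question="What was Q3 revenue?",
)
print(prompt)
```

Constraining the model to supplied context will not eliminate hallucinations, but it reduces the room the model has to invent facts and makes wrong answers easier to spot.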

[Figure: Design for Accuracy prompting tips]

Note: This section has focused solely on improvements through prompting techniques; reducing hallucinations more drastically requires more complex set-ups, such as fine-tuned models, retrieval-augmented generation (RAG) systems, or data agents.

Bias: Patterns in data that reinforce unfair outcomes

AI systems learn from large datasets filled with human language, culture, and history, and therefore inherit the biases present in those sources. This means that, without guidance, AI may default to skewed perspectives, reinforce stereotypes, or overlook underrepresented groups.
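One simple fairness-oriented tactic is to add an explicit instruction that asks the model to consider multiple perspectives and flag its own assumptions. A minimal sketch, where the wording and the example prompt are illustrative assumptions rather than official guidance:

```python
# Illustrative "Design for Fairness" prompt addition: nudge the model away
# from a single default perspective and ask it to surface possible bias.
FAIRNESS_SUFFIX = (
    "\n\nBefore answering, consider perspectives from different regions, "
    "genders, and age groups. Flag any assumption in your answer that may "
    "reflect bias in your training data."
)

# Hypothetical base prompt, for illustration only.
base_prompt = "List typical characteristics of a successful leader."
print(base_prompt + FAIRNESS_SUFFIX)
```

A reusable suffix like this can be shared across a team so that fairness prompting becomes a consistent habit rather than an individual choice.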

[Figure: Design for Fairness prompting tips]

Ecological impact: AI is powerful but energy-intensive

Every AI interaction requires computational power, and advanced models consume significant energy. Although invisible to the user, prompting behavior directly affects how much energy the system uses.

Energy inefficient behaviors include:

  • starting a new chat for every request, or staying too long in a single chat
  • requesting extremely long outputs
  • using the most powerful model for simple tasks

The solution is the Design for Efficiency pillar:

  • Choose when to start a new chat to use threads effectively
  • Specify output constraints such as length or format
  • Choose the right model for the job

These simple actions support more sustainable AI usage without compromising output quality.

[Figure: Design for Efficiency prompting tips]

While small in isolation, these behaviors quickly add up across larger organizations. Aside from the sustainability impact, this also makes a big difference financially. Consider the single use case of email generation. With GPT-5 nano being 25x cheaper than the default GPT-5, an organization of 1,000 employees can save between US$2,500 and US$10,000 every year by choosing the nano model and adding a 150-word limit (assuming 10 generated emails per employee every workday, each typically 400 words long). Figures are based on OpenAI’s publicly listed API prices.
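The estimate above can be reproduced with a back-of-the-envelope cost model. The per-token prices and the words-per-token heuristic below are illustrative assumptions (only the 25x price ratio comes from the text); check the current API price list before relying on the result.

```python
# Back-of-the-envelope cost model for the email use case.
WORDS_PER_TOKEN = 0.75     # rough heuristic: ~0.75 words per token
EMPLOYEES = 1_000
EMAILS_PER_DAY = 10
WORKDAYS_PER_YEAR = 250

def annual_output_tokens(words_per_email: int) -> int:
    """Total output tokens generated per year across the organization."""
    tokens_per_email = words_per_email / WORDS_PER_TOKEN
    return int(EMPLOYEES * EMAILS_PER_DAY * WORKDAYS_PER_YEAR * tokens_per_email)

def annual_cost(words_per_email: int, usd_per_million_tokens: float) -> float:
    """Annual output-token cost in US$ at the given per-million-token price."""
    return annual_output_tokens(words_per_email) / 1e6 * usd_per_million_tokens

# Assumed (hypothetical) output prices, keeping the 25x ratio from the text.
DEFAULT_PRICE = 5.0               # US$ per million output tokens, assumed
NANO_PRICE = DEFAULT_PRICE / 25

baseline = annual_cost(400, DEFAULT_PRICE)  # default model, 400-word emails
efficient = annual_cost(150, NANO_PRICE)    # nano model, 150-word limit
print(f"Estimated annual savings: US$ {baseline - efficient:,.0f}")
```

With these assumed prices the savings land in the same order of magnitude as the range quoted above; the exact figure depends entirely on the real per-token rates at the time of use.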

Let’s start your AI literacy journey

Responsible prompting is just one part of a broader shift toward AI literacy. At KPMG, we believe that equipping professionals with the right knowledge and habits is essential to thrive in an AI-driven environment. Our AI Literacy program empowers individuals to use AI more confidently, more effectively, and more responsibly.

By integrating responsible prompting into your daily work, you not only enhance your own productivity, but you also help build a culture of ethical, consistent, and trusted AI use across the organization.

Ready to continue your journey?

If you're interested in strengthening your AI literacy or want support applying Responsible Prompting within your team, feel free to reach out. We’re here to help you take the next step toward making AI work responsibly and confidently across your organization.