Module 3 of 7
"What your engineering team is actually building (explained without code)"
The Scenario
You have decided to build an MCP integration. Your engineering lead drops a technical design document into Slack. It is full of terms like "servers," "tools," "resources," "prompts," "transports," and "capabilities." You need to understand what this means in product terms, because you are about to write user stories, define acceptance criteria, and explain the work to stakeholders who have never heard of MCP.
The Three Building Blocks (No Code Required)
MCP has three primitives. Everything your engineering team builds is made of these three things. Once you understand them in product language, you can have a meaningful conversation about scope, priorities, and trade-offs.
1. Tools: "Actions your product can perform when an AI asks."
Tools are the verbs. They are things your product can do. When an AI assistant asks your MCP server to do something, it is calling a tool.
Think of tools like buttons in your product's UI, but accessible to AI instead of humans. If your product has a "Create Project" button, the MCP equivalent is a "create_project" tool. If it has a "Send Invoice" action, that becomes a "send_invoice" tool.
The key difference from a traditional API endpoint: MCP tools are self-describing. Each tool includes its own description, its expected inputs, and what it returns. The AI assistant reads this description and decides when and how to use the tool, without needing custom integration code.
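For the technically curious, that self-description is just structured data. Here is a sketch of what a "create_project" tool might look like when a server lists its tools (the field names follow the shape of an MCP tools/list entry; the specific values are hypothetical):

```python
# A hypothetical MCP tool definition, as a server might advertise it.
# The AI reads "description" and "inputSchema" to decide when and how to call it.
create_project_tool = {
    "name": "create_project",
    "description": "Create a new project for the current user.",
    "inputSchema": {  # JSON Schema describing the expected inputs
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Project name"},
            "owner_email": {"type": "string", "description": "Project owner"},
        },
        "required": ["name"],
    },
}
```

Notice that everything an AI needs to use the tool correctly travels with the tool itself; no custom integration code is written per assistant.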
Product language: "We are giving AI assistants the ability to take actions in our product on behalf of users."
2. Resources: "Data your product can share when an AI asks."
Resources are the nouns. They are things your product knows. When an AI assistant needs information from your product, it reads a resource.
Think of resources like read-only API endpoints. If your product has a dashboard that shows sales metrics, the MCP equivalent is a "sales_metrics" resource. If it has a contact database, that becomes a "contacts" resource.
Resources are pull-based. The AI asks for them when it needs them. Your product does not push data to AI assistants unsolicited. This is important for security: you control exactly what is accessible and when.
Product language: "We are giving AI assistants read access to specific data in our product."
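In the same spirit, a resource is advertised as an addressable, read-only piece of data. A sketch of a hypothetical "sales_metrics" entry (the URI scheme and values are illustrative, not from any real product):

```python
# A hypothetical MCP resource entry: read-only data the AI can pull on demand.
sales_metrics_resource = {
    "uri": "myproduct://reports/sales_metrics",  # illustrative URI scheme
    "name": "sales_metrics",
    "description": "Current-quarter sales metrics for the signed-in user.",
    "mimeType": "application/json",
}
```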
3. Prompts: "Pre-built workflows you package for AI users."
Prompts are the recipes. They are pre-defined sequences of actions that your product packages for AI assistants to offer to users.
Think of prompts like templates or wizards. Instead of the user figuring out which tools and resources to combine, you bundle them into a named workflow. "Generate monthly report" might combine reading sales_metrics, reading customer_list, and calling create_report. The user just asks for the report, and the prompt handles the orchestration.
Not every MCP integration needs prompts. Tools and resources are the foundation. Prompts are the polish.
Product language: "We are packaging common workflows so AI assistants can offer them as one-click actions."
The MCP Surface Area Concept
Here is the strategic question: which of your product's features should become MCP tools, resources, or prompts?
The answer is not "everything." Exposing too much creates security risk, maintenance burden, and a confusing experience for AI assistants. Exposing too little makes your MCP integration useless. Exposing the wrong things means you built for engineers instead of users.
We call this your product's MCP Surface Area: the footprint of features, data, and workflows that are accessible to AI assistants.
Your Framework: The MCP Surface Area Mapper
This is a prioritisation exercise. It takes 30 minutes and gives you a clear build order.
Step 1: List your product's top 10 user actions.
Pull this from your analytics. What do users actually do most often? Not what you think they should do. What they actually do. Log in and look at your top 10 features by usage frequency.
Step 2: Score each action on three dimensions.
For each action, assign a score from 1 to 5:
AI Automation Potential (1-5): How much value does an AI get from being able to do this? Use this calibration: 5 means an AI assistant will be asked to do this multiple times per user session (looking up data, searching, retrieving status). 4 means it will be asked daily but not repeatedly (creating a record, sending a notification). 3 means it is useful but occasional (generating a report, bulk operations). 2 means it is rarely requested through AI (configuration changes, preference settings). 1 means it makes no sense for AI to do this (visual customisation, drag-and-drop layout).
User Frequency (1-5): How often do users perform this action? 5 means multiple times per day. 4 means daily. 3 means weekly. 2 means monthly. 1 means quarterly or less. Pull these numbers directly from your product analytics. Do not guess.
Implementation Ease (1-5): How straightforward is it to expose this action through MCP? 5 means it maps directly to an existing API endpoint with no additional logic. 4 means minor adaptation is needed (parameter translation, response formatting). 3 means moderate work is required (combining 2 to 3 API calls, adding validation). 2 means significant effort is needed (new backend logic, complex permissions). 1 means it would require a ground-up build with no existing infrastructure.
A note on precision: do not agonise over whether something is a 3 or a 4. This exercise works at 80 percent accuracy. The goal is to separate the obvious high-value features from the obvious low-value ones, not to create a perfect ranking.
Step 3: Calculate priority scores.
Multiply the three scores together. A perfect score is 125 (5 x 5 x 5). Sort by score, descending.
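The arithmetic in Steps 2 and 3 is simple enough to sanity-check in a few lines. A sketch with made-up actions and scores (all names and numbers here are illustrative):

```python
# Priority score = AI potential x user frequency x implementation ease.
actions = [
    {"name": "view_status",   "ai": 5, "freq": 5, "ease": 5},
    {"name": "create_record", "ai": 4, "freq": 5, "ease": 3},
    {"name": "configure",     "ai": 1, "freq": 1, "ease": 2},
]
for a in actions:
    a["score"] = a["ai"] * a["freq"] * a["ease"]

# Sort descending; the top slice becomes the first release.
ranked = sorted(actions, key=lambda a: a["score"], reverse=True)
print([(a["name"], a["score"]) for a in ranked])
```

In this toy data, the read-only "view_status" hits the maximum score of 125 while "configure" lands at 2, which mirrors the common patterns described below.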
Step 4: Draw the line.
Your first MCP release should include the top 3 to 5 items. These become your launch tools and resources. Everything below the line goes into a backlog for future phases.
Common patterns we see:
"Get [data]" actions almost always score highest. Read-only data retrieval is high frequency, high AI value, and easy to implement.
"Create [thing]" actions score high but need more careful implementation (validation, permissions, side effects).
"Configure [setting]" actions almost always score lowest. Skip them.
Common Mistakes to Avoid
Mistake 1: Engineering-driven surface area. Your engineering team will naturally gravitate toward exposing features that are technically interesting or architecturally clean. Push back. The surface area should be driven by user value, not implementation elegance.
Mistake 2: All-or-nothing thinking. You do not need to expose your entire product in version 1. Start with 2 to 3 read-only resources and 1 to 2 action tools. Ship, measure, iterate. The same principle that applies to MVP product development applies to MCP.
Mistake 3: Ignoring permissions. Every tool and resource needs to respect your existing permission model. If a user cannot see certain data in your UI, they should not be able to access it through MCP either. This sounds obvious, but it is the number one security concern CISOs raise about MCP. Get it right from day one.
Your Artefact: MCP Surface Area Map
Create a spreadsheet with these columns:
Feature / Action name
Type (Tool, Resource, or Prompt)
AI Automation Potential (1-5)
User Frequency (1-5)
Implementation Ease (1-5)
Priority Score (product of the three)
Phase (1, 2, 3, or Backlog)
Notes (dependencies, risks, permissions considerations)
Sort by Priority Score. Draw the phase lines. Share with your engineering lead and ask: "Does this build order make sense from a technical perspective?" Adjust if needed, but the user-value ranking should drive the conversation.
Worked Example: TaskFlow (Project Management SaaS)
To make this concrete, here is how a fictional project management tool called TaskFlow completed the Surface Area Mapper. TaskFlow is a mid-market product used by 15,000 teams.
Step 1: Top 10 user actions (from analytics)
| # | Feature / Action | Daily Active Users |
|---|---|---|
| 1 | View project status | 8,200 |
| 2 | Create new task | 6,100 |
| 3 | Update task status | 5,800 |
| 4 | View team workload | 4,300 |
| 5 | Add comment to task | 3,900 |
| 6 | Search across projects | 3,200 |
| 7 | Generate weekly report | 2,800 |
| 8 | Assign team member | 2,400 |
| 9 | Create new project | 1,100 |
| 10 | Configure notification settings | 900 |
Step 2: Score each action (1-5)
| # | Action | AI Potential | Frequency | Ease | Score |
|---|---|---|---|---|---|
| 1 | View project status | 5 | 5 | 5 | 125 |
| 6 | Search across projects | 5 | 4 | 5 | 100 |
| 4 | View team workload | 4 | 4 | 4 | 64 |
| 7 | Generate weekly report | 5 | 3 | 4 | 60 |
| 2 | Create new task | 4 | 5 | 3 | 60 |
| 3 | Update task status | 4 | 5 | 3 | 60 |
| 5 | Add comment to task | 3 | 4 | 4 | 48 |
| 8 | Assign team member | 3 | 3 | 4 | 36 |
| 9 | Create new project | 3 | 1 | 3 | 9 |
| 10 | Configure notifications | 1 | 1 | 2 | 2 |
Step 3: Draw the line
Phase 1 (Weeks 1-2): Top 3, read-focused (two Resources and one search Tool)
get_project_status (Resource) — Score: 125
search_projects (Tool) — Score: 100
get_team_workload (Resource) — Score: 64
Phase 2 (Weeks 3-4): Next 3, action-focused (two Tools and one Prompt)
create_task (Tool) — Score: 60
update_task_status (Tool) — Score: 60
generate_report (Prompt) — Score: 60
Backlog: add_comment, assign_member, create_project, configure_notifications
What TaskFlow learned: Their highest-value MCP features were all data retrieval, not actions. "Show me the project status" and "search across projects" scored highest because AI assistants are constantly asked to look things up. The action tools (create task, update status) are valuable but came in Phase 2 because they require more careful permission handling.
Beyond Tools: MCP Skills
MCP tools give AI assistants access to your product. Skills teach AI assistants how to use that access effectively.
Think of it this way: a tool is a screwdriver. A skill is knowing which screw to turn and in what order.
Skills are specialised instruction sets, typically written as markdown documents, that teach an AI assistant domain-specific knowledge and workflows. They run locally (no network calls), load instantly, and can reference multiple MCP tools in sequence.
A practical example: your product exposes 15 MCP tools for project management. A skill called "Sprint Planning Assistant" teaches the AI to use those tools in a specific order: first pull the backlog (Resource), then check team capacity (Resource), then suggest task assignments (Tool), then create sprint tasks (Tool). The user just says "help me plan next sprint" and the skill orchestrates the workflow.
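As a sketch, a skill like the Sprint Planning Assistant is often just a markdown file with a short metadata header. The format below follows the SKILL.md convention used for Anthropic's Agent Skills; the skill name and the tool and resource names are the hypothetical ones from the example above:

```markdown
---
name: sprint-planning-assistant
description: Plans the next sprint using the product's MCP tools.
---

# Sprint Planning Assistant

When the user asks to plan a sprint:

1. Read the `backlog` resource to get unscheduled tasks.
2. Read the `team_capacity` resource to see who has bandwidth.
3. Suggest task assignments and confirm them with the user.
4. Call the `create_task` tool once per approved assignment.
```

The instructions are plain prose for the AI to follow, which is why skills load instantly and need no network calls of their own.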
Why this matters for product teams: If you are building MCP tools, you should also consider publishing skills that demonstrate how to use them effectively. A well-crafted skill turns a collection of MCP tools into a coherent workflow. It is the difference between giving someone a toolkit and giving them a toolkit with instructions.
The emerging Skills marketplace: Similar to how MCP server directories (Smithery, mcp.so) let users discover tools, a skills marketplace is forming where developers publish, share, and potentially sell domain-specific skills. This is still early, but it signals where the ecosystem is heading: not just "can your product connect to AI?" but "does your product come with pre-built AI workflows?"