Getting Started with SpecKit and Azure AI Foundry: Build Your First AI Agent

Building an AI agent from scratch might sound daunting, but with the right tools, it becomes a manageable (even enjoyable) project. SpecKit and Azure AI Foundry are two powerful tools that, together, can take you from zero to a working AI agent quickly. In this guide, we’ll walk through creating your first agent step-by-step — assuming you have nothing set up on your machine.

  • What is SpecKit? SpecKit is an open-source toolkit (backed by GitHub and Microsoft) for specification-driven development. It lets you write out what you want your software to do (the specification) and uses AI to turn those specs into working code. In other words, SpecKit makes your specifications executable by leveraging AI coding assistants to generate the implementation automatically. This shifts your focus to what you want to build rather than worrying about how to code it, resulting in faster development and less boilerplate.
  • What is Azure AI Foundry? Azure AI Foundry is Microsoft’s cloud platform for designing, managing, and deploying AI applications and agents. It provides a unified environment (web portal, SDKs, and REST APIs) where you can access large language models, define agent behaviors, integrate tools (like web search or code execution), and maintain conversation state, all with enterprise-grade security and scalability. In short, Foundry handles the heavy lifting of hosting and running AI models/agents so you can focus on your agent’s logic.
  • Why use them together? SpecKit and Azure AI Foundry complement each other perfectly. SpecKit speeds up development by auto-generating the agent’s code from high-level specs, while Foundry provides a ready-to-use cloud infrastructure to run and scale that agent (with features like built-in tools, secure deployment, and monitoring). Microsoft even provides best-practice guidelines integrating SpecKit with Foundry’s agent framework as part of an internal “Azure Development Constitution,” underscoring how these tools are meant to work hand-in-hand.

By the end of this tutorial, you will have a simple AI agent running locally (but powered by cloud-based AI) that you built from scratch. We’ll cover everything from setting up your development environment to writing your first specification, generating code with SpecKit, testing the agent, and finally tips for debugging and deploying it. Let’s get started!

Setting Up Your Development Environment 🛠️

Before we dive into coding or specs, we need to set up a few tools. Since we’re starting with nothing installed, we’ll go step-by-step. Here’s a quick summary of what you need to install and prepare:

In table form, the essential setup steps and tools are:

| Step/Tool | Purpose |
| --- | --- |
| 💻 Python 3.11+ | Core runtime for SpecKit and the agent’s code. |
| ☁️ Azure Subscription & Foundry | Access to Azure AI Foundry (you’ll create a project in the Foundry portal). |
| 🔑 Azure CLI | Command-line tool to authenticate with Azure (we’ll use it to log in and give our code access to Foundry). |
| 🤖 VS Code + GitHub Copilot (optional) | IDE and AI pair programmer to use with SpecKit’s commands. |
| 🧰 Astral “uv” tool | Package manager for installing SpecKit easily. |
| 🔧 SpecKit CLI (specify) | Command-line tool for SpecKit (to initialize projects and manage spec-driven development). |
| 📦 Azure Foundry SDK | Python libraries (azure-ai-agents and azure-identity) to interact with the Foundry Agent service from code. |

Now let’s walk through these steps in detail:

  1. Install Python: If you don’t have Python installed, download and install the latest Python 3 version (3.11 or above is recommended). Ensure you can run python --version in your terminal and get a 3.x result.
  2. Sign up for Azure and enable AI Foundry: Create a free Azure account if you don’t have one. Azure AI Foundry might require specific preview access depending on when you’re reading this, but by now (2026) it’s generally available. In the Azure portal (or the Foundry web portal), create a Foundry project (your sandbox for agents). For example, in the Foundry portal, click Create an agent and follow the steps to provision a Foundry project. This will set up the necessary cloud resources (and likely deploy a default model like GPT-4 for you). Take note of your Project Endpoint URL; you’ll find it on the project’s details or overview page once it’s created. It looks like https://<your-project-name>.services.ai.azure.com/….
  3. Install Azure CLI and log in: Download and install the Azure CLI for your operating system. Once installed, open a terminal and run:

    az login

This will open a browser for you to authenticate with Azure. Use the account that has your Azure subscription. After logging in, verify you have access to the subscription where the Foundry project lives. If you have multiple subscriptions, ensure the correct one is set as default using:

    az account set -s "<Your Subscription ID or Name>"

This Azure CLI login is important: our code will use it to obtain credentials for Foundry.

  4. Prepare an AI coding assistant: SpecKit doesn’t magically generate code on its own; it uses an AI “assistant” under the hood. The most straightforward choice is GitHub Copilot in VS Code, which can respond to SpecKit’s special commands. Ensure you have VS Code installed and the Copilot extension enabled (with an active subscription or trial). Alternatively, you can use Claude Code (Anthropic’s CLI tool) or an open-source assistant like CodeBuddy CLI – but for this guide, we’ll assume Copilot. (If you use another supported AI tool, adjust SpecKit’s --ai setting accordingly.)
  5. Install the SpecKit CLI (specify): Now install the SpecKit command-line tool (specify). The recommended method is the Astral “uv” package manager (you can install uv via pip install uv). Using uv ensures all dependencies are handled smoothly. Run the following in your terminal:

    pip install uv               # if uv is not already installed
    uv tool install specify-cli --from git+https://github.com/github/spec-kit.git

This installs the latest SpecKit CLI from GitHub. After installation, verify it works by running specify --version. You should see a version number output (for example, specify 1.x.x).

  6. Set up environment variables for Foundry: Remember that Project Endpoint URL from step 2? Let’s configure it so our code can find it. In your terminal, set an environment variable for the endpoint. On Windows PowerShell, you can do:

    $env:PROJECT_ENDPOINT="https://<your-project-endpoint-url>"

On Mac/Linux (bash):

    export PROJECT_ENDPOINT="https://<your-project-endpoint-url>"

Also decide which model deployment you want to use in Foundry. If you just created a project with a default model, find its deployment name in the Foundry portal (under Models or in the project’s details). Set that as well, for example:

    export MODEL_DEPLOYMENT_NAME="gpt-4"

Foundry needs to know which model the agent should use. By default, a new project often auto-deploys a model for you (the portal usually shows which one), so you can use that name.

  7. Install the Azure Foundry SDK packages: Finally, install the Python libraries we’ll use in the project to interact with Foundry. In your Python environment (it’s a good idea to use a virtual environment for your project), run:

    pip install azure-ai-agents azure-identity

The azure-ai-agents package is the Azure AI Foundry Agents SDK, and azure-identity provides the DefaultAzureCredential we’ll use for authentication. These libraries let our Python code create and communicate with the agent in Foundry.
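Because a missing environment variable otherwise surfaces later as a cryptic KeyError, it can help to fail fast at startup. Here is a small convenience helper (my own sketch, not something SpecKit generates) that checks the variables we just set:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or fail with a readable hint."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Environment variable {name} is not set. "
            "Set it in this shell session before running the agent."
        )
    return value

# Usage at the top of the agent script:
#   endpoint = require_env("PROJECT_ENDPOINT")
#   model = require_env("MODEL_DEPLOYMENT_NAME")
```

Calling this once at startup turns a mid-run crash into an immediate, self-explanatory error message.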

Initializing a New SpecKit Project

SpecKit will bootstrap our project structure and get the spec-driven workflow going. We’ll use the specify CLI to do this.

  1. Create a project folder: Choose a directory where you want your project to live. You can navigate there in a terminal or open it in VS Code.
  2. Run specify init: Initialize a new SpecKit project by running:
    specify init my-first-agent --ai copilot

Here, my-first-agent is the name of your project folder (feel free to name it as you like). The --ai copilot flag tells SpecKit that you’ll be using GitHub Copilot as the AI assistant for this project (so it can tailor some settings accordingly). If you were using a different AI assistant, you’d pass --ai claude or --ai codebuddy, etc., but we’ll assume Copilot.

This command creates a new folder (named my-first-agent) with some starter files and configuration. Notably, it sets up a hidden folder .specify/ with SpecKit scripts and possibly a baseline “constitution” (project guidelines), along with a README or example spec. It also integrates with your AI assistant: for example, if you open this folder in VS Code with Copilot, SpecKit’s special commands will be recognized.

Tip: During init, SpecKit might ask to confirm which shell to use for its scripts (Bash or PowerShell). On Windows, choose PowerShell (the default); on Mac/Linux, it will use Bash. Also, if you don’t have Git set up, you might see it initialize a git repository — that’s fine, it’s just preparing version control for you.

After initialization, you should see a success message. Now, let’s move on to defining what our agent should do!

Writing the Specification (Defining What to Build)

In SpecKit’s world, everything starts with a specification. This is where you describe the requirements and behavior of your application (in our case, the agent) in plain language. Since we’re building an agent in Azure Foundry, our specification will outline what the agent’s purpose is and what features it should have.

  1. Open your AI assistant chat: If you’re using VS Code with Copilot, open the Copilot Chat panel. Make sure you have a file from your project open (even the README or a blank file) so that Copilot is “aware” of your project context.
  2. Use the /specify command: In the chat, start a message with the SpecKit command /specify followed by your project requirements. For example:
    /specify Build a conversational math helper agent that can answer basic math questions and geography queries.
    - The agent should greet the user and then await questions.
    - It should handle simple arithmetic problems (like solving equations or calculations).
    - It can also answer geography questions (e.g., the capital of a country, population stats).
    - If asked something outside its knowledge, it should respond that it doesn’t have that information.
    - Ensure the agent uses the Azure AI Foundry model for its responses.

Feel free to write this in a natural, informal way – SpecKit (with Copilot’s help) will turn it into a structured spec document. The key is to explain what the agent should do, not how to code it. You can include bullet points or numbered requirements as shown. When you send this message, Copilot will respond by creating or updating a spec file (likely something like specs/001-spec.md in your project) with the content you described, formatted as a set of requirements or user stories.

SpecKit’s /specify step might also implicitly run a /constitution step if this is the first spec in the project (to establish any project-wide standards). The constitution in SpecKit is an optional high-level set of guidelines (for instance, you could have a constitution that says “all code must use Azure best practices”). For our simple project, we won’t worry about this; the default is fine.

At this point, you’ve defined what your agent should do. The next steps are about figuring out how it will do it (planning the implementation) and then actually building it. SpecKit will help with those as well, using more slash commands.

SpecKit’s Key Commands Recap:

SpecKit uses special slash commands in the AI chat to guide development. The main ones are:

  • /specify – Define what you want to build (your requirements and user stories).
  • /clarify – Resolve ambiguities by letting the AI ask you questions about the spec (use this if your spec is unclear or incomplete).
  • /plan – Propose a technical plan (architecture and approach) for the implementation.
  • /tasks – Break down the plan into a list of tasks or steps to code.
  • /implement – Generate the actual code for each task according to the plan.

Planning the Implementation

Now that the specification is in place, it’s time to decide how to implement the agent. SpecKit’s planning phase uses the /plan command to let the AI suggest an architecture and approach. This is where we ensure our project uses the Azure AI Foundry SDK in the solution.

  1. Run /plan: In the Copilot chat (or whatever AI assistant interface you use), type something like:
    /plan We’ll use Python for the implementation. Leverage the Azure AI Foundry Python SDK (azure-ai-agents) to create the agent.
    Design the agent as a simple console application that:
    - Authenticates using DefaultAzureCredential (so it picks up our Azure CLI login).
    - Connects to the Foundry project via an AgentsClient with the endpoint and model name.
    - Creates an agent instance with the instructions defined (from our spec).
    - Continuously reads user input from the console and uses the agent to get answers, until the user types “exit”.

This instructs SpecKit to come up with a technical plan given our requirements, focusing on using Python and the Foundry SDK. The AI (Copilot) will respond with a proposed plan, likely enumerating things like “Use the azure-ai-agents library to create an agent client; set up a loop to interact with the agent; etc.” It may also mention preparing environment variables (which we’ve already done) or suggest a file structure.

Review the plan that comes back. It should align with what we described. If something is off or missing (for example, if it didn’t include using the Foundry SDK or skipped a feature), you can adjust your prompt or send a follow-up message to refine it. The planning step is collaborative – you and the AI work out the blueprint for the code.

  2. Run /tasks: Once the plan looks good, the next step is breaking the plan into concrete tasks. Simply send:

    /tasks

SpecKit will list out tasks based on the plan. For example, the tasks might be:

    1. Set up project dependencies (ensure `azure-ai-agents` and `azure-identity` are installed).
    2. Write a Python script to authenticate and connect to Foundry.
    3. Initialize the agent with the specified instructions.
    4. Implement a loop to handle user input and get responses from the agent.
    5. Handle the exit condition and any error cases (e.g., if the agent fails to respond).

These tasks should be detailed and actionable. Review them to make sure nothing is overlooked before generating code. If a task looks incorrect or incomplete, you can adjust it. Often, SpecKit gets it right, especially for straightforward projects.

Implementation: Generating the Code

Finally, the exciting part: having the AI generate our code! SpecKit’s /implement command will take the tasks and produce the actual code for each one.

  1. Run /implement: In the chat, enter:
    /implement

This triggers the AI to go through each task and create the corresponding code. Since we’re using Copilot, you’ll see it start making new files or updating files in your project. Watch your file explorer or editor tabs: likely a new file like main.py (or something similar) will appear, and maybe a requirements.txt will be created or updated.

SpecKit usually sets up a directory structure under a src/ folder or directly in the project root for your code. It might also update the README with usage instructions. Let’s break down what to expect in the generated code (focusing on the critical parts):

  • **Authentication & Client Setup:** The code will import the Azure Foundry SDK and set up authentication. For example:

    import os
    from azure.ai.agents import AgentsClient
    from azure.identity import DefaultAzureCredential

    # Initialize the Foundry Agents client
    client = AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"], credential=DefaultAzureCredential())

This uses the environment variable for the endpoint and your Azure CLI login token for authentication.

  • **Agent Creation:** The code will create the agent in Foundry, using the model and an instruction prompt. For example:

    # Create the AI agent with the desired model and instructions
    agent = client.create_agent(
        model=os.environ["MODEL_DEPLOYMENT_NAME"],  # e.g., "gpt-4"
        name="my-first-agent",
        instructions="You are a helpful agent that can answer math and geography questions."
    )

This calls Foundry to spin up an agent instance in your project using the specified model and instructions. The instructions string here should capture the behavior we described in the spec (SpecKit might have combined our spec into a prompt, or you might tweak it yourself).

  • **Main Loop for Interaction:** Since we want a console app that interacts with the user, the code will likely have a loop like:

    print("Hello! I am your math and geography helper agent. Ask me a question (or type 'exit' to quit).")
    while True:
        user_input = input("> ")
        if user_input.lower() in ["exit", "quit"]:
            print("Goodbye!")
            break

        # Send the question to the agent and get a response
        response = client.run(agent, user_input)  # (Pseudo-code; actual SDK usage might differ)
        print(response.content)  # Print the agent's answer

The exact implementation may vary. The Azure AI Agents SDK may not have a direct run() method as shown above; it might require creating a chat thread and sending a message to the agent, then receiving a response. However, for simplicity, SpecKit might use a helper function or a simplified approach. The main idea is: for each user input, your code sends it to the agent and then prints out the agent’s reply.

  • **Cleanup or Extra Details:** The code might include some extras, like error handling (e.g., if agent creation fails or if no response is received) and ensuring the program exits gracefully.
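To make the thread-and-message flow mentioned above more concrete, here is a hedged sketch of one question/answer round trip. The helper is written against a duck-typed client so the logic runs without Azure; the threads/messages/runs operation names follow the azure-ai-agents samples at the time of writing, so double-check them against the SDK docs for your installed version:

```python
def ask_agent(client, agent_id: str, thread_id: str, question: str) -> str:
    """Post one user message to a thread, run the agent, and return its reply.

    `client` is duck-typed: it only needs the operation groups used in the
    azure-ai-agents samples (messages.create, runs.create_and_process,
    messages.list). Names may differ slightly between SDK versions.
    """
    client.messages.create(thread_id=thread_id, role="user", content=question)
    client.runs.create_and_process(thread_id=thread_id, agent_id=agent_id)
    # After the run completes, the newest assistant message is the answer.
    replies = [m for m in client.messages.list(thread_id=thread_id)
               if m.role == "assistant"]
    return replies[-1].content if replies else "(no reply)"

# Real usage against Foundry would look roughly like:
#   import os
#   from azure.ai.agents import AgentsClient
#   from azure.identity import DefaultAzureCredential
#   client = AgentsClient(endpoint=os.environ["PROJECT_ENDPOINT"],
#                         credential=DefaultAzureCredential())
#   agent = client.create_agent(model=os.environ["MODEL_DEPLOYMENT_NAME"],
#                               name="my-first-agent",
#                               instructions="You answer math and geography questions.")
#   thread = client.threads.create()
#   print(ask_agent(client, agent.id, thread.id, "What is the capital of France?"))
```

Keeping the round trip in a small function like this also makes it easy to unit-test with a stubbed client before touching the cloud.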

Once /implement finishes, save all the files (if your IDE hasn’t already). At this stage, you have the code for your agent application!

Now, let’s run it and see how it works.

Testing the Agent 🏃‍♂️

Time to take your new agent for a spin. Open a terminal, make sure you’re in your project directory (the one containing your main.py or equivalent), and run the application. If SpecKit created a main script, it might be named after your project or just main.py. Let’s assume it’s main.py for now:

    python main.py

When you run this, the program should authenticate (using your Azure CLI credentials), connect to Foundry, create the agent, and print the greeting from your code. For example:

    Hello! I am your math and geography helper agent. Ask me a question (or type 'exit' to quit).

Now you can type a question at the prompt. Let’s test both of the agent’s capabilities:

  • Math question: Type something like solve 3x + 11 = 14 and press Enter. The agent should consult the language model and respond with the solution, for example: “To solve 3x + 11 = 14, subtract 11 from both sides to get 3x = 3, then divide by 3, so x = 1.”
  • Geography question: Next, ask What is the capital of France? The agent should respond: “The capital of France is Paris.”
  • Unknown question: Try a question outside the agent’s knowledge, such as Who won the World Series in 2025?. Since we didn’t design it for sports trivia, it might either make a guess or (ideally, if it follows our instructions) respond that it doesn’t have that information.

Continue interacting as you like. When you’re done, type exit and the program will quit with a polite goodbye.

If everything works as expected – congratulations! 🎉 You’ve built and run your first AI agent using SpecKit and Azure AI Foundry. The agent’s intelligence comes from the Azure-hosted model (via Foundry), and the structure and code were largely generated by SpecKit following your high-level instructions.

Debugging and Troubleshooting Tips

Every new project runs into a few bumps. Here are some common issues and how to resolve them:

  • Azure authentication or permission errors: If the program fails with an authentication error (e.g. a 401 Unauthorized) or says you lack permission to create the agent, it’s likely an Azure role issue. Make sure your account has the necessary role for Azure AI Foundry (for example, the Azure OpenAI User or similar role on the resource). You can set this in the Azure Portal under your project’s Access Control (IAM). Also ensure you ran az login with the correct account. If you have multiple Azure accounts or subscriptions, verify that the subscription with Foundry is the one your CLI is using.
  • Environment variables not set: If you see an error like KeyError: ‘PROJECT_ENDPOINT’ (or ‘MODEL_DEPLOYMENT_NAME’ not found), it means the code can’t find your environment variables. Double-check that you set PROJECT_ENDPOINT (and the model name) in the terminal session where you’re running the app. Remember that setting an env var is session-specific—if you open a new terminal window, you may need to set it again unless it’s in your shell profile. Alternatively, you could hard-code the values in the script for quick testing (not recommended for production), or use a .env file and load it in your Python code for convenience.
  • SpecKit commands not recognized or AI not responding: If typing /specify (or other commands) in the chat doesn’t do anything, make sure:
    • You have the project open and the AI assistant is operating in that context (e.g., Copilot is open in the project folder).
    • SpecKit CLI was initialized with the --ai copilot (or appropriate) option.
    • You’re using the correct syntax (/command at the start of a message).
    • Your AI assistant is actually running and has access to the context (for Copilot, ensure the extension is logged in and working).

If all else fails, SpecKit also installs helper scripts under the .specify/ folder that you can run directly from the terminal to drive the same workflow. It’s less interactive but achieves the same result.

  • Code generation issues: Sometimes the AI might produce code that isn’t exactly what you envisioned. Think of the AI’s output as a first draft. You can and should tweak the code to fit your needs. SpecKit aims to get you 80-90% there; the rest might require a bit of manual polish. For instance, if it didn’t use the PROJECT_ENDPOINT variable correctly or the loop logic isn’t quite right, feel free to edit the code. It’s important to understand what the generated code is doing so you can fix any small issues. Checking Azure Foundry’s SDK documentation can help if you need to adjust how you call the agent or handle responses.
  • Agent responses not as expected: If the agent’s answers are off (e.g., too verbose, or not sticking to math/geography), you can refine the instruction prompt. In the code where the agent is created, adjust the instructions parameter to be more explicit about the agent’s behavior. For example, you might add something like, “If the user asks a question outside of math or geography, respond with a brief apology that you don’t have that info.” Then rerun the agent. Small changes in the prompt can influence the responses significantly.
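For instance, a tightened-up instructions string (illustrative wording only; tune it to taste) might look like this:

```python
# A more explicit prompt keeps the agent on-topic and handles out-of-scope
# questions the way the spec asked for. (Illustrative wording, not the one
# SpecKit necessarily generated.)
INSTRUCTIONS = (
    "You are a helpful agent that answers math and geography questions. "
    "Keep answers concise: one or two sentences unless the user asks for steps. "
    "If the user asks about anything outside math or geography, reply briefly "
    "that you don't have that information instead of guessing."
)

# Pass it where the agent is created, e.g.:
#   agent = client.create_agent(model=..., name="my-first-agent",
#                               instructions=INSTRUCTIONS)
```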

Most issues can be solved by reading the error messages and maybe doing a quick web search or checking the official docs. Don’t hesitate to consult SpecKit’s documentation or Azure AI Foundry’s guides if you get stuck on something specific — but the above tips should cover the common hurdles for this project.
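One of the tips above mentioned loading variables from a .env file. The python-dotenv package is the usual choice, but if you’d rather avoid the extra dependency, a minimal standard-library loader like this (my own sketch, handling only simple KEY=value lines) is enough for local testing:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Load simple KEY=value lines from a file into os.environ.

    Skips blank lines and '#' comments; already-set variables win.
    (A minimal stand-in for python-dotenv's load_dotenv().)
    """
    try:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip().strip('"'))
    except FileNotFoundError:
        pass  # No .env file is fine; fall back to the shell environment.
```

Call load_env_file() at the very top of your script, before any os.environ lookups.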

Deploying the Agent (Next Steps)

Our agent is currently running locally, which is perfect for a first project. But what if you want to deploy it so others can use it, or integrate it into a larger application?

Since the core “brain” of the agent is in Azure (via Foundry), deploying the client app is relatively straightforward:

  • Containerize or Package the app: You could containerize this Python application using Docker or turn it into a small web app. For instance, you might wrap the agent in a simple Flask API, so that users can interact with it via HTTP requests (e.g., a webpage or messaging interface calls your API, which in turn sends queries to the agent).
  • Azure App Service or Functions: You can deploy the container or code to Azure App Service for an easy-to-manage web app, or use Azure Functions (with an HTTP trigger) for a serverless approach. In both cases, you’d configure the environment with the PROJECT_ENDPOINT and credentials. In a cloud deployment, you might use a Managed Identity or a service principal for authentication instead of relying on the developer’s Azure CLI login.
  • Leverage Foundry’s interfaces: Azure AI Foundry itself provides ways to expose your agent. In the Foundry portal, you can test the agent in a built-in chat interface. For production integration, Foundry offers REST APIs to directly query your agent or even embed it in a Teams bot or web chat widget. This means you could have clients (web apps, chatbots, etc.) call the Foundry service directly, without needing your own middle-tier running continuously.
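To give the “wrap it in an API” idea some shape, here is a minimal HTTP front end using only the Python standard library (Flask or FastAPI would be the more typical choice in practice). The answer_question function is a hypothetical stand-in for the real Foundry call:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def answer_question(question: str) -> str:
    """Placeholder for the real Foundry call (post message, run agent, read reply)."""
    return f"(the agent would answer: {question})"

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON body: expects {"question": "..."}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        reply = answer_question(payload.get("question", ""))
        body = json.dumps({"answer": reply}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console output quiet

# To serve locally: HTTPServer(("", 8080), AgentHandler).serve_forever()
```

POSTing {"question": ...} returns {"answer": ...}; swap the placeholder for your agent call and you have something deployable to App Service or a container.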

For learning purposes, you’ve already accomplished a lot: you specified, built, and ran an AI agent end-to-end on your machine. As next steps, you might explore adding more capabilities or making improvements:

  • Add tools to the agent: Foundry supports augmenting your agent with tools (for example, a web search tool, calculators, or custom APIs). Adding a tool can enable your agent to answer a wider range of questions by fetching external information or performing actions.
  • Iterate on the spec: Expand your agent’s knowledge or rules by updating the SpecKit specification and regenerating the code. You could, for instance, add a new domain of questions it can handle (like history trivia) and let SpecKit help incorporate that into your project.
  • Explore more SpecKit features: Try the /clarify command if you have complex specs to see how the AI can help refine requirements. If you start a larger project, consider writing multiple spec files for different components, and use SpecKit’s ability to maintain a project “constitution” to enforce coding standards or architectural patterns.

And of course, as you delve deeper, keep an eye on the official documentation and community resources for both SpecKit and Azure AI Foundry. Both tools are actively developed, and new features or best practices might emerge over time, which can help you build even more sophisticated AI projects.

Conclusion

In this tutorial, we introduced SpecKit and Azure AI Foundry and used them together to build a simple AI agent from scratch. Here’s a quick recap of what we did:

  • Set up the development environment with Python, Azure access, SpecKit, and the necessary libraries.
  • Initialized a SpecKit project and wrote a clear specification for our agent’s behavior.
  • Went through the planning phase to design the solution, then implemented it automatically to generate a functional codebase for our agent.
  • Tested the agent locally to make sure it works as expected.
  • Covered debugging tips and how to handle common issues (authentication, environment setup, etc.).
  • Discussed potential next steps for deploying the agent and adding enhancements.

By leveraging SpecKit’s AI-powered development workflow, we dramatically reduced the amount of manual coding needed – and by plugging into Azure AI Foundry, we gained immediate access to powerful language models and a robust agent framework without setting up any AI infrastructure ourselves. This combination allows developers (even those new to AI) to create sophisticated applications quickly and securely.

You’ve just completed your first SpecKit + Foundry project! From here, you can explore more complex ideas: perhaps a multi-turn customer support agent, or an agent that connects to your company’s internal knowledge base. As you grow more comfortable, SpecKit’s methodology will scale with you – you can maintain specs for larger projects, enforce coding standards via constitutions, and integrate various AI services as needed. And Azure AI Foundry will be there to host and manage your creations, whether for personal demos or production use.

Happy coding, and welcome to the future of spec-driven development! 🚀
