September 29, 2025
I’m starting a new role in a few weeks, and in the interim I wanted to structure some learning around product and AI, alongside making time for gardening and learning the piano. I asked Claude to generate a plan I can follow, and this is what it provided. I expect to follow the approach loosely and document it here on the blog.
## The Initial Setup
For this project, I tried using Warp. I already had it installed on my machine, but up until today I’d been using Cursor for development, so I wanted to try something new. Warp is geared towards being an AI-assisted terminal replacement, but it has more recently extended into a fully featured AI development workflow. I described my intention to Warp’s Agent interface: build a chat agent with Python. The project began with a clean, minimal structure:
```txt path=null start=null
claude-chatbot/
├── README.md
├── requirements.txt
├── chatbot.py
├── .env
└── src/          # (empty for now)
```
### Dependencies
First, I set up the required dependencies in `requirements.txt`:

```txt path=null start=null
anthropic>=0.3.0
python-dotenv>=1.0.0
```
It added requirements for just the Anthropic SDK and environment-variable management, and suggested I use a .env file for my API key.
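For completeness, the .env file is just a single line holding the key. A minimal sketch (the variable name matches what chatbot.py reads; the value is a placeholder, not a real key):

```txt path=null start=null
# Placeholder — replace with your actual Anthropic API key
CLAUDE_API_KEY=your-api-key-here
```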
### The Core Implementation
I then created chatbot.py. This code creates a simple command-line chatbot that lets you have a conversation with Claude in your terminal. It keeps track of your entire conversation history, sends your messages to Claude's API, displays the responses, and continues until you type "exit".
```python path=/Users/Mark/Desktop/Code/claude-chatbot/chatbot.py start=1
import os
import anthropic
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Initialize the Anthropic client
client = anthropic.Anthropic(
    api_key=os.getenv('CLAUDE_API_KEY')
)

conversation = []

while True:
    user_message = input("You: ")
    if user_message.lower() == 'exit':
        print("Chat ended. Goodbye!")
        break

    # Add user message to the conversation
    conversation.append({'role': 'user', 'content': user_message})

    response = client.messages.create(
        model='claude-3-5-sonnet-20241022',
        max_tokens=1024,
        messages=conversation
    )

    bot_message = response.content[0].text
    print(f"Claude: {bot_message}")

    # Add chatbot response to the conversation
    conversation.append({'role': 'assistant', 'content': bot_message})
```
I then ran the code with `python chatbot.py`.
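With a working API key, a session would look roughly like this (the reply text is illustrative, not an actual response):

```txt path=null start=null
$ python chatbot.py
You: Hello, Claude!
Claude: Hi! How can I help you today?
You: exit
Chat ended. Goodbye!
```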
Here’s where things got interesting. I have an active Claude subscription that I use regularly for coding assistance, document analysis, and general AI tasks. Naturally, I assumed this would grant me API access to build applications.
Plot twist: It doesn’t.
## The Subscription vs API Confusion
It turns out there are two completely separate Anthropic services:
- Claude.ai Pro subscription (~£20/month)
  - Access to the claude.ai web interface
  - Access to the claude-code development agent
- Anthropic API (pay-per-use)
  - Programmatic access for developers
  - Separate pricing structure
  - Required for building applications
  - Different billing from the subscription
This separation means that even with a Claude subscription, you still need to pay separately for API usage when building applications. Effectively, the claude.ai web app is similar to what we are building here, an application sitting on top of Anthropic’s Claude models, which kind of makes sense. But at this stage I can’t afford to pay for more AI services, so I stopped here.
## Day 2’s Approach
Tomorrow I intend to look at free-tier cloud approaches to doing this, for example via Google Colab.