Building an AI-Powered Chatbot with Django, React, and OpenAI's GPT API

AI is changing how we interact with technology, and chatbots are at the forefront of this transformation. Today, I’m excited to share a hands-on guide for building an AI-powered chatbot using Django for the backend, React for the front end, and OpenAI’s GPT API for generating intelligent responses. We’ll also cover best practices, such as managing sensitive API keys with a .env file. Let’s dive in!
Project Overview and Motivation
Chatbots are more than just a buzzword—they’re essential tools for enhancing customer service, support, and user engagement. By integrating AI, we can create chatbots that understand context and generate human-like responses. In this project, we’ll combine the robustness of Django, the dynamic interactivity of React, and the natural language processing power of OpenAI’s GPT API. Additionally, we’ll secure our API keys using a .env file to ensure the safety of our sensitive data.
Setting Up the Django Backend
Creating the Project and App
First, make sure you have Django, Django REST Framework, the OpenAI Python client, and python-decouple installed:
pip install django djangorestframework openai python-decouple
Now, create a new Django project and an app for our chatbot:
django-admin startproject ai_chatbot
cd ai_chatbot
python manage.py startapp chat
Next, open ai_chatbot/settings.py and add 'rest_framework' and 'chat' to your INSTALLED_APPS:
INSTALLED_APPS = [
    # ... other installed apps
    'rest_framework',
    'chat',
]

Note that the openai package is a plain Python library, not a Django app, so it does not belong in INSTALLED_APPS.
Managing Sensitive Data with .env
Storing sensitive data like your OpenAI API key in your code is a big no-no. Instead, we’ll use a .env file and the python-decouple package to manage these secrets securely.
Create a .env file in the root of your Django project:

OPENAI_API_KEY=your_openai_api_key_here

Next, update your Django settings to load the API key. At the top of settings.py, add:

from decouple import config

OPENAI_API_KEY = config('OPENAI_API_KEY')
Now your API key is securely managed and will not be hardcoded into your project. Create a .gitignore file in the root directory and add .env.
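As a side note, python-decouple also supports defaults and type casting, which is handy for optional settings. A small illustrative fragment (the DEBUG line is just an example, not something this project requires):

```python
# settings.py -- illustrative python-decouple usage
from decouple import config

OPENAI_API_KEY = config('OPENAI_API_KEY')          # required: raises if the key is missing
DEBUG = config('DEBUG', default=False, cast=bool)  # optional, cast from string to bool
```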
Creating a Chat API Endpoint
Let’s build a simple API endpoint in our chat app that will accept a user’s message, forward it to the GPT API, and return the generated response.
Create the View:
Open chat/views.py and add the following. (Note: this uses the current OpenAI Python client interface; the older openai.ChatCompletion API was removed in version 1.0 of the library.)

# chat/views.py
from django.conf import settings
from openai import OpenAI, OpenAIError
from rest_framework import permissions, status
from rest_framework.decorators import api_view, permission_classes
from rest_framework.response import Response

# Retrieve the OpenAI API key from settings
API_KEY = getattr(settings, "OPENAI_API_KEY", None)

@api_view(["POST"])
@permission_classes([permissions.AllowAny])  # Adjust permissions as needed
def chat_view(request):
    # Extract the user's message from the request body
    user_message = request.data.get("message")
    if not user_message:
        return Response(
            {"error": "Message field is required."},
            status=status.HTTP_400_BAD_REQUEST,
        )
    if not API_KEY:
        return Response(
            {"error": "OpenAI API key is missing in settings."},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR,
        )
    try:
        client = OpenAI(api_key=API_KEY)
        # Call the Chat Completions API
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful AI assistant. Answer concisely and clearly."},
                {"role": "user", "content": user_message},
            ],
            max_tokens=150,
            temperature=0.7,
        )
        # Extract the chatbot's reply
        chatbot_reply = response.choices[0].message.content.strip()
        return Response({"reply": chatbot_reply}, status=status.HTTP_200_OK)
    except OpenAIError as e:
        return Response(
            {"error": f"OpenAI API error: {e}"},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR,
        )
    except Exception as e:
        return Response(
            {"error": f"Unexpected error: {e}"},
            status=status.HTTP_500_INTERNAL_SERVER_ERROR,
        )

Wire Up the URL:
Create a urls.py file in the chat directory with the following content:

from django.urls import path
from .views import chat_view

urlpatterns = [
    path('chat/', chat_view, name='chat'),
]

Then, include this URL in your main project’s urls.py:

# ai_chatbot/urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include('chat.urls')),
]
Integrating OpenAI's GPT API
Our Django view is already set up to call the GPT API. When a POST request is made to /api/chat/, the view extracts the user’s message, constructs a prompt, and sends it to the GPT API. By securely managing our API key with a .env file, we ensure that our sensitive credentials remain protected.
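The view above sends only the latest user message, so each request is a fresh, single-turn conversation. To give the bot context, you would pass earlier turns along as well. A minimal sketch of how such a message list is typically assembled (build_messages is a hypothetical helper, not part of the view above; trimming to the last few turns keeps the prompt within token limits):

```python
def build_messages(user_message, history=None, max_history=6,
                   system_prompt="You are a helpful AI assistant."):
    """Assemble the `messages` payload for a Chat Completions request.

    `history` is a list of {"role": ..., "content": ...} dicts from earlier
    turns; only the most recent `max_history` of them are kept.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for turn in (history or [])[-max_history:]:
        messages.append({"role": turn["role"], "content": turn["content"]})
    messages.append({"role": "user", "content": user_message})
    return messages
```

The result drops straight into the `messages=` argument of the API call in the view.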
Creating the React Frontend
Now, let’s set up a simple React frontend that lets users chat with our AI.
1. Setting Up the React App
If you haven’t created a React app yet, use Create React App:
npx create-react-app ai-chatbot-frontend
cd ai-chatbot-frontend
2. Building the Chat Interface
Replace the contents of src/App.js with the following code:
// src/App.js
import React, { useState } from 'react';
import './App.css';

function App() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');

  const sendMessage = async () => {
    if (!input.trim()) return;

    // Add the user's message to the chat log
    const userMessage = { sender: 'user', text: input };
    setMessages(prev => [...prev, userMessage]);

    // Send the message to the backend
    try {
      const response = await fetch('http://localhost:8000/api/chat/', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: input }),
      });
      const data = await response.json();
      const botMessage = { sender: 'bot', text: data.reply };
      setMessages(prev => [...prev, botMessage]);
    } catch (error) {
      console.error('Error sending message:', error);
    }
    setInput('');
  };

  return (
    <div className="App">
      <h1>AI Chatbot</h1>
      <div className="chat-window">
        {messages.map((msg, index) => (
          <div key={index} className={`message ${msg.sender}`}>
            <p>{msg.text}</p>
          </div>
        ))}
      </div>
      <div className="input-area">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type your message..."
          onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}

export default App;
3. Styling the Chat Interface
Create or update src/App.css with some basic styles:
/* src/App.css */
.App {
  font-family: Arial, sans-serif;
  text-align: center;
  margin: 20px;
}

.chat-window {
  border: 1px solid #ccc;
  border-radius: 8px;
  height: 400px;
  width: 80%;
  margin: 20px auto;
  overflow-y: scroll;
  padding: 10px;
  background-color: #f9f9f9;
  display: flex;           /* flex column so align-self on messages works */
  flex-direction: column;
}

.message {
  margin: 10px 0;
  padding: 8px 12px;
  border-radius: 8px;
  max-width: 70%;
  text-align: left;
}

.message.user {
  background-color: #daf1da;
  align-self: flex-end;
}

.message.bot {
  background-color: #e0e0e0;
  align-self: flex-start;
}

.input-area {
  width: 80%;
  margin: 0 auto;
  display: flex;
  justify-content: center;
  gap: 10px;
}

input[type="text"] {
  flex: 1;
  padding: 10px;
  border-radius: 8px;
  border: 1px solid #ccc;
}
4. Running the React App
Before starting, ensure your Django backend is running on port 8000. Then, start your React app:
npm start
Your browser should open at http://localhost:3000, and you’ll see your chat interface ready for some conversation with your AI-powered chatbot!
Enhancing User Experience
To make your chatbot even better, consider these enhancements:
Loading States: Add a spinner or loading indicator while waiting for the backend’s response.
Error Handling: Display friendly error messages if something goes wrong.
Session Management: Use cookies or local storage to maintain conversation history.
Responsive Design: Optimize the interface for mobile users with responsive CSS media queries.
These improvements can help deliver a smoother, more engaging user experience.
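On the error-handling point, transient OpenAI API failures can often be absorbed on the server side with a simple retry-and-backoff wrapper, so fewer errors ever reach the user. A stdlib-only sketch (with_retries is a hypothetical helper; call_api stands in for the actual OpenAI request):

```python
import time

def with_retries(call_api, attempts=3, base_delay=1.0):
    """Retry a flaky callable with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(attempts):
        try:
            return call_api()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the view
            time.sleep(base_delay * (2 ** attempt))
```

In the Django view, the OpenAI call would be wrapped as `with_retries(lambda: client.chat.completions.create(...))`; in production you would narrow the `except` to the API errors worth retrying, such as rate limits.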
Deployment and Scaling Considerations
When it’s time to take your chatbot to production, keep these points in mind:
Backend Deployment: Services like Heroku, AWS, or DigitalOcean work great for deploying your Django application. Remember to set your environment variables on the hosting platform.
Frontend Deployment: Deploy your React app using platforms like Netlify or Vercel.
CORS Management: If your backend and frontend are on different domains, configure Django’s CORS settings appropriately.
Scaling: As your chatbot grows, consider using background task queues (e.g., Celery) to manage API requests and prevent blocking the main thread.
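For the CORS point above, django-cors-headers is the package commonly used with Django. A minimal configuration might look like this (assumes `pip install django-cors-headers`; the allowed origin matches the React dev server used earlier):

```python
# settings.py -- illustrative django-cors-headers setup
INSTALLED_APPS = [
    # ... other installed apps
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # place near the top, before CommonMiddleware
    # ... the rest of the default middleware
]

CORS_ALLOWED_ORIGINS = [
    'http://localhost:3000',  # the React dev server
]
```

In production, replace the localhost origin with your deployed frontend’s domain.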
Conclusion and Next Steps
Congratulations on building your own AI-powered chatbot! In this post, we combined Django, React, and OpenAI’s GPT API to create a responsive and intelligent chat interface, while also ensuring our sensitive API keys remain secure with a .env file. This project not only deepens your understanding of full-stack development but also opens the door to exploring more advanced AI features.
Next steps could include:
Integrating user authentication for personalised conversations.
Adding more advanced AI features like sentiment analysis.
Deploying your application and monitoring its performance in production.
I hope you enjoyed this tutorial and feel inspired to expand on it. Happy coding, and keep pushing the boundaries of what’s possible with modern web technologies!





