
Object Detection with YOLO11 and Chatbot Integration

A cutting-edge project that combines state-of-the-art YOLO11-based object detection with an intelligent chatbot backend to provide real-time, interactive feedback and automation.

Table of Contents

- Project Overview
- Key Features
- Technology Stack
- Architecture
- Installation
- Usage
- How It Works
- Customization
- Use Cases
- Contributing
- License
- Contact


Project Overview

This project leverages the powerful YOLO11 deep learning model to perform real-time object detection on video streams or images. The detection results are then integrated with a chatbot system, enabling intelligent interaction based on the detected objects.

This unique fusion allows users not only to detect and track objects but also to query and receive contextual responses or automate workflows through the chatbot interface.
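The glue between detector and chatbot isn't shown in this README, but the idea can be sketched with a small helper that turns detector output into text the chatbot can reply with. The `describe_scene` name and the 0.5 confidence cutoff below are illustrative, not taken from the project:

```python
from collections import Counter

def describe_scene(detections, min_conf=0.5):
    """Turn (label, confidence) pairs from the detector into a sentence for the chatbot."""
    counts = Counter(label for label, conf in detections if conf >= min_conf)
    if not counts:
        return "No objects detected."
    parts = [f"{n} {label}{'s' if n > 1 else ''}" for label, n in counts.items()]
    return "Detected: " + ", ".join(parts) + "."

# In the real pipeline these pairs would come from YOLO11, not a hard-coded list.
print(describe_scene([("person", 0.91), ("person", 0.84), ("dog", 0.67)]))
# Detected: 2 persons, 1 dog.
```

A summary string like this is one simple way the chatbot can ground its answers in the current frame.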


Key Features


Technology Stack


Architecture

```mermaid
flowchart LR
    Camera[Camera/Video] -->|Frames| Detector[YOLO11 Detector]
    Detector -->|Detection Results| Processor[Data Processor]
    Processor -->|Formatted Output| Backend[Chatbot Backend]
    Backend -->|User Queries| Interface[Chatbot Interface]
    Interface -->|Response| User[User]
```
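As a sketch of the Data Processor stage, assuming (hypothetically) that detections travel to the chatbot backend as JSON; the `Detection` dataclass and `to_chatbot_payload` helper are illustrative, not the project's actual interface:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x1, y1, x2, y2) in pixel coordinates

def to_chatbot_payload(frame_id, detections):
    """Format raw detector output into the 'Formatted Output' sent to the chatbot backend."""
    return json.dumps({"frame": frame_id, "objects": [asdict(d) for d in detections]})

payload = to_chatbot_payload(1, [Detection("person", 0.91, (10, 20, 110, 220))])
```

A structured payload like this keeps the detector and the chatbot decoupled, so either side can be swapped out independently.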


Installation

Clone the repository:

```bash
git clone https://github.com/nabeelalikhan0/Object-Detection-with-Yolo11.git
cd Object-Detection-with-Yolo11
```

Create and activate a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Download YOLO11 weights:

You can download pretrained weights from the official YOLO11 releases, or let the scripts included in the repository fetch them automatically.
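If the project uses the `ultralytics` package (an assumption; the repository's requirements aren't listed here), the weights can also be fetched lazily, since `YOLO("yolo11n.pt")` downloads the checkpoint automatically when it isn't cached. The `load_yolo11` helper below is a sketch under that assumption:

```python
def load_yolo11(model_name="yolo11n.pt"):
    """Return a YOLO11 model; ultralytics fetches the weights on first use if missing."""
    try:
        from ultralytics import YOLO  # pip install ultralytics
    except ImportError:
        return None  # package not installed; download the .pt file manually instead
    return YOLO(model_name)

# model = load_yolo11()  # first call downloads yolo11n.pt if it isn't cached locally
```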


Usage

Run the main script for object detection and chatbot integration:

```bash
python main.py
```

Options

Use command-line flags or config files to specify:
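The option list isn't spelled out in this README; a typical `argparse` setup might look like the following, where every flag name is hypothetical rather than taken from `main.py`:

```python
import argparse

def build_parser():
    """Hypothetical CLI; the real main.py may name its flags differently."""
    p = argparse.ArgumentParser(description="YOLO11 object detection with chatbot integration")
    p.add_argument("--source", default="0", help="camera index, video file, or image path")
    p.add_argument("--weights", default="yolo11n.pt", help="YOLO11 checkpoint to load")
    p.add_argument("--conf", type=float, default=0.5, help="minimum confidence to report")
    p.add_argument("--no-chatbot", action="store_true", help="run detection only")
    return p

args = build_parser().parse_args(["--source", "video.mp4", "--conf", "0.4"])
```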


How It Works

  1. The system captures video frames in real time.
  2. YOLO11 processes each frame to detect objects, outputting bounding boxes, labels, and confidence scores.
  3. Detected object data is sent to the chatbot backend.
  4. The chatbot can:
     - Answer user queries related to detected objects.
     - Provide automated actions or information.
     - Store or export detection logs.
  5. The user interacts with the chatbot via a graphical interface, allowing queries based on the visual scene.
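Step 4 above (answering queries about detected objects) can be sketched as a simple lookup over the latest detections. The `answer_query` helper is illustrative, not the project's actual chatbot logic:

```python
from collections import Counter

def answer_query(question, detections):
    """Answer 'how many <label>?' style questions from the latest detection results."""
    counts = Counter(label for label, _conf in detections)
    for label, n in counts.items():
        if label in question.lower():
            return f"I can see {n} {label}(s) in the current frame."
    return "I don't see that object right now."

latest = [("car", 0.88), ("car", 0.79), ("person", 0.95)]
print(answer_query("How many cars are there?", latest))
# I can see 2 car(s) in the current frame.
```

A real chatbot backend would replace the substring match with proper intent parsing, but the data flow is the same: detection results in, contextual answer out.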

Customization


Use Cases


Contributing

Contributions are welcome! Please fork the repo and submit pull requests for bug fixes, feature additions, or improvements.


License

This project is licensed under the MIT License. See the LICENSE file for details.


Contact

Created by Nabeel Ali Khan (GitHub: nabeelalikhan0)
Feel free to reach out for questions, feedback, or collaboration opportunities.