Projects
StackPilot - Self-Hosted DevOps Control Plane
RAG-Powered Email Personalization System
Content extraction using web scraping for recipe dataset
IoT-Based Inventory Management System
Facial Expression Detection using CNN
Automated Video Generator with Workflow Orchestration
MarketIQ - AI Marketing Tool
La Creation - Business Portfolio Website
Recipe Recommendation System
Personal Portfolio Website
DashTro - Headless CMS
Recipe Finder
Game 2048 - Desktop Tile Merging Puzzle
Android Application Development using Kivy Python
AI-Powered GroupMe Chatbot for Residential Management
StackPilot - Self-Hosted DevOps Control Plane

System Overview
A full-stack platform for managing application deployments, secrets, and build configuration—built as a self-hosted alternative to Render or Railway. StackPilot provides encrypted secrets management, role-based access control, and Git webhook integration, enabling teams to deploy and manage services without expensive managed platforms.
Architecture
Three-Level Deployment Hierarchy
Organized structure modeled after modern deployment platforms: Project → Environment → Service
- Projects: Top-level organizational units (e.g., "E-commerce Platform")
- Environments: Deployment targets (development, staging, production)
- Services: Individual components (web servers, databases, static sites) with independent secrets and build configurations
Secrets Management
AES-256-GCM Encrypted Storage
Production-grade encryption protecting sensitive credentials:
- Secrets encrypted before database storage using AES-256-GCM
- Encryption key auto-generated at first startup, stored separately from database
- List endpoints return metadata only (names, timestamps)—no values exposed
- Individual fetch decrypts on demand—values returned only on explicit request
- Git integration for sourcing secrets from encrypted repository files
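The encrypt-before-store flow above can be sketched as follows. This is an illustrative Python sketch using the `cryptography` package's AESGCM primitive (the actual backend is Go); the function names are hypothetical:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_secret(key: bytes, plaintext: str) -> bytes:
    # AES-256-GCM: 32-byte key, fresh 12-byte nonce for every encryption.
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ciphertext  # store the nonce alongside the ciphertext

def decrypt_secret(key: bytes, blob: bytes) -> str:
    # Split the stored blob back into nonce and ciphertext, then decrypt.
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

# Key generated once at first startup and stored separately from the database.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_secret(key, "DATABASE_URL=postgres://...")
```

Because GCM is authenticated encryption, any tampering with the stored blob causes decryption to fail loudly rather than return corrupted plaintext.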
Build & Deploy Configuration
Per-Service Build Management
Each service maintains independent configuration:
- GitHub repository URL with encrypted personal access token
- Branch specification and root directory (monorepo support)
- Build and start commands
- Auto-generated UUID webhook URL for GitHub integration
GitHub Webhook Integration
- Unique webhook secret per service (UUID auto-generated)
- Receive push events triggering automated deployments
- Secure trigger mechanism preventing unauthorized deploys
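A common way to implement the secure trigger mechanism is GitHub's HMAC-SHA256 payload signature, which is computed with the per-service webhook secret. A minimal stdlib sketch (illustrative; whether StackPilot verifies the signature header in addition to the UUID URL is an assumption):

```python
import hashlib
import hmac

def verify_github_signature(webhook_secret: str, payload: bytes,
                            signature_header: str) -> bool:
    # GitHub sends "X-Hub-Signature-256: sha256=<hex>" computed over the
    # raw request body with the webhook secret configured for the repo.
    expected = "sha256=" + hmac.new(
        webhook_secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(expected, signature_header)
```

A push event whose signature does not verify is rejected before any deployment logic runs.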
Authentication & Authorization
Multi-Layer Security
JWT Authentication:
- 24-hour token expiry with HTTP-only secure cookies
- HS256 signing preventing token tampering
Role-Based Access Control:
- Owner: Full platform control
- Admin: Project-level management
- User: Read access to assigned projects
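HS256 signing is just HMAC-SHA256 over the encoded header and payload. A stdlib-only sketch of issue and verify (illustrative; the real backend uses a Go JWT library, and production code should use a vetted library rather than hand-rolled JWT handling):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("invalid signature")  # tampered header or payload
    seg = signing_input.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
    if payload.get("exp", 0) < time.time():
        raise ValueError("token expired")  # enforces the 24-hour expiry
    return payload
```

Any change to the payload (for example, escalating `role` from User to Admin) invalidates the signature, which is what "preventing token tampering" means in practice.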
Project-Scoped API Keys:
- 32-byte hexadecimal keys for CI/CD integrations
- 7-day expiration with project-level scoping
- Revocable tokens for instant access termination
Password Reset:
- Secure token delivery via the Resend email API
- bcrypt-hashed tokens with 1-hour expiry
REST API
25+ Endpoints Across Permission Tiers
- Public: Authentication, registration, webhooks
- User-level: View projects, environments, services, secrets (metadata)
- Admin: Create/modify/delete resources with cascading cleanup
Cascading Deletion: Delete Project → removes environments → services → secrets → configs. API-layer enforcement ensuring data consistency.
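The transaction-wrapped, child-first ordering can be sketched like this. Illustrative Python over SQLite (the real backend uses Go with GORM, and the table and column names here are assumptions):

```python
import sqlite3

def delete_project(conn: sqlite3.Connection, project_id: int) -> None:
    # Explicit child-first ordering inside one transaction: a failure at any
    # step rolls back the whole delete, never leaving orphaned rows.
    with conn:  # commits on success, rolls back on any exception
        conn.execute(
            "DELETE FROM secrets WHERE service_id IN ("
            " SELECT id FROM services WHERE environment_id IN ("
            "  SELECT id FROM environments WHERE project_id = ?))",
            (project_id,))
        conn.execute(
            "DELETE FROM build_configs WHERE service_id IN ("
            " SELECT id FROM services WHERE environment_id IN ("
            "  SELECT id FROM environments WHERE project_id = ?))",
            (project_id,))
        conn.execute(
            "DELETE FROM services WHERE environment_id IN ("
            " SELECT id FROM environments WHERE project_id = ?)",
            (project_id,))
        conn.execute("DELETE FROM environments WHERE project_id = ?",
                     (project_id,))
        conn.execute("DELETE FROM projects WHERE id = ?", (project_id,))
```

Keeping the ordering in the API layer (rather than relying on database-level ON DELETE CASCADE) makes each deletion step visible, which is what enables per-step audit logging.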
Frontend Dashboard
React Management Interface
Full CRUD operations for the entire hierarchy:
- Create/view/edit/delete projects with environment overview
- Add environments with service listings
- Deploy web servers, databases, static sites
- Manage secrets with show/hide toggle for sensitive values
- Edit build configuration with real-time validation
State Management:
- Zustand stores scoped by feature domain
- Context-keyed stores (projectId/envName) supporting multiple concurrent contexts
- Optimistic updates with devtools integration
Technical Stack
- Backend: Go, Gin, GORM, SQLite, AES-256-GCM, JWT (HS256), bcrypt
- Frontend: React 18, TypeScript, Vite, Zustand, Tailwind CSS, shadcn/ui, Radix UI, react-router-dom v7
- Component System: Custom UI library (Button, Input, Card, Modal, Toast) with Storybook documentation and Vitest tests
Technical Highlights
- Security-First API Design: List endpoints return metadata only—secret values never exposed in bulk operations. Individual GET requests decrypt specific secrets on demand. GORM Find + RowsAffected pattern suppresses noisy "record not found" error logs.
- Cascade Delete Logic: API-layer enforcement with explicit ordering (services → secrets → configs). Transaction-wrapped preventing partial deletions. Audit logging of each deletion step.
- UUID Webhook Secrets: Auto-generated on config save (never user-supplied). 128-bit entropy preventing prediction attacks. Encrypted storage with separate encryption key.
Self-Hosting Benefits
Cost Efficiency:
- Render/Railway: $20–50/month per user
- StackPilot self-hosted: $6–12/month total
- Annual savings: $200–600 for small teams
Control & Compliance:
- Complete control over deployment credentials
- No third-party access to source code
- Compliance-friendly for regulated industries
Key Challenges & Solutions
- Preventing Secret Exposure in Logs: Metadata-only list endpoints, explicit decrypt-on-demand pattern, sanitized error messages never including secret values.
- Cascade Deletion Complexity: API-layer enforcement with explicit ordering, transaction wrapping enabling rollback on failures, clear error messages about deletion impacts.
- Webhook Security: UUID-based auto-generated webhook URLs (128-bit entropy), encrypted storage, never user-supplied preventing weak values.
- Frontend State Across Nested Hierarchy: Zustand stores keyed by context enabling simultaneous multi-project management, optimistic updates with rollback on API failure.
Key Takeaways
StackPilot demonstrates building production-grade DevOps tooling with modern security practices (AES-256-GCM, JWT, RBAC) while maintaining developer-friendly workflows. The platform proves self-hosted alternatives to expensive managed services are viable—strategic architecture delivers enterprise features at a fraction of SaaS costs.
Security: AES-256-GCM encryption-at-rest, JWT (HS256), RBAC, bcrypt
API: 25+ endpoints, 3-tier hierarchy, cascading deletes, metadata-only list endpoints
Economics: $6–12/month self-hosted vs. $20–50/month/user managed platforms
Status: Active development, core features complete
Automated Video Generator with Workflow Orchestration

System Overview
A fully automated content creation and streaming platform that transforms job listings from major employment portals into professionally narrated video streams. Operating 24/7 with zero manual intervention, the system processes 120 job listings daily, generates 12 half-hour videos (6 hours of fresh content), and maintains continuous YouTube streaming—all on a $6/month infrastructure budget achieving 98% cost savings vs. commercial alternatives.
Content Pipeline Architecture
Multi-Stage Automation Flow
Stage 1: Content Aggregation
Automated extraction from six major platforms (LinkedIn, Indeed, Naukri, Internshala, Foundit, and Monster) via the Google Custom Search API (100 free requests/day). Strategic batching and platform rotation consistently deliver 120 jobs daily within free-tier limits.
Daily Volume: 120 jobs → 12 videos (10 jobs each) → 6 hours new content + 6 hours replay = 12-hour streaming cycle
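The platform-rotation idea can be sketched as a request planner that spreads the 100 free queries evenly across the six sources. This is an illustrative sketch: the placeholders `<API_KEY>` and `<ENGINE_ID>` and the exact domain filters are assumptions, not the project's actual configuration:

```python
from itertools import cycle

PLATFORMS = ["linkedin.com", "indeed.com", "naukri.com",
             "internshala.com", "foundit.in", "monster.com"]

def build_search_requests(query: str, daily_quota: int = 100,
                          per_request: int = 10) -> list[dict]:
    # Rotate site: filters so the daily free quota is spread evenly across
    # platforms; each request returns up to 10 results (the API maximum),
    # so ~100 requests yield well over 120 usable listings after dedup.
    rotation = cycle(PLATFORMS)
    plan = []
    for i in range(daily_quota):
        site = next(rotation)
        plan.append({
            "url": "https://www.googleapis.com/customsearch/v1",
            "params": {
                "key": "<API_KEY>",   # hypothetical placeholder
                "cx": "<ENGINE_ID>",  # hypothetical placeholder
                "q": f"{query} site:{site}",
                "num": per_request,
                # Page deeper into each platform's results on later passes.
                "start": 1 + (i // len(PLATFORMS)) * per_request,
            },
        })
    return plan
```

Each planned entry is then issued as an ordinary HTTP GET; results are deduplicated before entering the script-generation stage.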
Stage 2: AI Script Generation
GPT-4o Mini transforms raw job data into engaging 30-minute narratives with engineered prompts ensuring consistent tone, structure, and information density. Automated validation checks length, coherence, and quality before proceeding.
Stage 3: Audio Synthesis
Kokoro TTS (Hugging Face open-source) converts scripts to natural narration. Self-hosting eliminates the $300-500/month cost of commercial TTS services while providing unlimited generation capacity.
Stage 4: Visual Pipeline
Three-step conversion ensuring compatibility and quality:
- PowerPoint automation creates slides from job data with dynamic layouts
- LibreOffice headless renders PPTX to high-fidelity PDF
- WebP export produces optimized video frames (30-40% smaller than PNG)
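The three-step conversion can be sketched as a pair of command builders. The LibreOffice headless invocation matches the pipeline described above; the use of `pdftoppm` for rasterization and `cwebp` for WebP encoding is an assumption about the tooling:

```python
def pptx_to_pdf_cmd(pptx_path: str, out_dir: str) -> list[str]:
    # LibreOffice headless renders the deck to a high-fidelity PDF.
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, pptx_path]

def pdf_to_webp_cmds(pdf_path: str, out_prefix: str) -> list[list[str]]:
    # Rasterize pages to PNG with pdftoppm, then re-encode each page as
    # WebP (typically 30-40% smaller than PNG at comparable quality).
    return [
        ["pdftoppm", "-png", "-r", "150", pdf_path, out_prefix],
        ["sh", "-c",
         f'for f in {out_prefix}-*.png; do '
         f'cwebp -q 85 "$f" -o "${{f%.png}}.webp"; done'],
    ]
```

Each command list would be run with `subprocess.run(cmd, check=True)`, with validation of the output files between steps.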
Stage 5: Video Composition
FFmpeg merges audio with visuals, creating 30-minute segments (1080p, H.264, optimized for streaming). Parallel processing generates 3-4 videos concurrently, reducing total cycle time.
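A plausible FFmpeg invocation for this composition step, expressed as a command builder (the slide pacing and exact flags are assumptions, not the project's actual encoder settings):

```python
def compose_video_cmd(frames_pattern: str, audio_path: str,
                      out_path: str) -> list[str]:
    # Merge slide frames with narration into a streaming-friendly
    # 1080p H.264 segment; -shortest ends video when the audio ends.
    return [
        "ffmpeg", "-y",
        "-framerate", "1/15",     # one slide every 15 seconds (assumed pacing)
        "-i", frames_pattern,     # e.g. slides/frame-%02d.webp
        "-i", audio_path,
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-vf", "scale=1920:1080",
        "-c:a", "aac", "-shortest",
        out_path,
    ]
```

Running 3-4 of these encodes concurrently is what compresses the daily generation window.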
Stage 6: Stream Assembly
Videos combine into daily schedule: 12 videos (6 hours new + 6 hours replay) maintaining continuous viewer engagement and maximizing content utilization.
Live Streaming Infrastructure
OBS WebSocket Control
Python services programmatically manage OBS Studio via WebSocket:
- Generate unique stream IDs and configure scenes dynamically
- Set encoding parameters (1080p, 6000kbps, x264)
- Add overlays, backgrounds, and branded graphics per session
- Establish YouTube RTMP connection with authentication
Priority Queue System
Intelligent playback management:
- High Priority: Breaking opportunities, urgent hiring
- Standard Priority: Daily new content
- Fill Priority: Replay content preventing dead air
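The three-tier playback management maps naturally onto a heap-backed priority queue. A minimal sketch (class and method names are hypothetical):

```python
import heapq
import itertools

HIGH, STANDARD, FILL = 0, 1, 2  # lower number plays first

class PlaybackQueue:
    def __init__(self) -> None:
        self._heap: list = []
        self._counter = itertools.count()  # preserves FIFO order within a tier

    def add(self, priority: int, video_path: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._counter), video_path))

    def next_video(self, fallback: str = "replay.mp4") -> str:
        # The fill-tier fallback guarantees something is always streaming,
        # preventing dead air even when the queue runs dry.
        if not self._heap:
            return fallback
        return heapq.heappop(self._heap)[2]
```

Urgent hiring content jumps ahead of the daily batch, while replay content only plays when nothing fresher is queued.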
YouTube Integration
Simultaneous live streaming and video archival with automated metadata generation (titles, descriptions, tags, thumbnails) derived from content data.
Monitoring & Alerting
Failed Node Detection
Real-time monitoring tracks every workflow node with immediate email alerts on failure including:
- Failure context, error diagnostics, impact assessment
- Affected content and recommended remediation actions
- Alert routing to appropriate teams (data, content, infrastructure, operations)
Recovery Workflow
- Automatic retry with exponential backoff (3 attempts)
- Alternative approaches (skip item, use cached content, fallback generation)
- Human escalation with pre-compiled diagnostics if automated recovery fails
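The retry policy above (3 attempts, exponential backoff, then escalation) can be sketched as a small helper; the injectable `sleep` parameter is an assumption added for testability:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 2.0,
                 sleep=time.sleep):
    # Retry a failing workflow node with exponential backoff: 2s, 4s, 8s...
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # escalate to a human with diagnostics after the final attempt
            sleep(base_delay * (2 ** attempt))
```

Fallback strategies (skip item, cached content) would be tried in the exception handler before the final re-raise.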
Reliability Metrics: 99.5% uptime, 85% automated recovery rate, <2 min alert latency, <15 min MTTR
Orchestration Architecture
n8n Workflow Engine (self-hosted on a Contabo VPS, $5-6/month)
Coordinates scheduled triggers, data transformations, conditional logic, error handling, state management, and webhook integrations—all through visual workflows with comprehensive monitoring.
Custom FastAPI Microservices
Extend n8n capabilities:
- TTS Service: Kokoro inference, voice profiles, audio normalization
- FFmpeg Service: Video encoding, concatenation, format conversion
- Visual Pipeline Service: PPTX → PDF → WebP orchestration with validation
- OBS Control Service: WebSocket communication, stream configuration, health monitoring
Cost Efficiency
Total Monthly Cost: $10-15 (98% reduction vs. commercial)
- Contabo VPS: $5-6/month (8 vCPU, 16GB RAM, 400GB NVMe)
- GPT-4o Mini API: ~$3-5/month (3,600 scripts monthly)
- Kokoro TTS: $0 (self-hosted vs. $300-500/month commercial)
- Google Search API: $0 (100 free requests/day, optimized)
- YouTube Streaming: $0 (free platform)
Performance Characteristics
Daily Processing:
- 120 job listings extracted and processed
- 12 videos generated (30 minutes each)
- 6 hours new content produced
- 98% success rate extraction → streaming
- 4-6 hour end-to-end latency
System Efficiency:
- Parallel processing: 3-4 concurrent video generations
- 99.5% uptime with automated failover
- <15 minute recovery time for failures
- 360 videos/month (3,600 jobs presented)
Technical Innovations
- API Optimization: Batch requests, platform rotation, smart caching, and deduplication extract 120 jobs from 100 free API requests daily
- Quality Assurance: Multi-stage validation (script, audio, visual, video) maintains 98% success rate with automatic retry and fallback strategies
- Resource Optimization: Batch processing, intelligent caching (30-40% speedup for repeated content), off-peak scheduling (overnight processing)
- Streaming Reliability: Health monitoring every 30 seconds, automatic restart on degradation (2-3 min recovery), queue persistence preventing data loss
Key Challenges & Solutions
- API Rate Limits → Batch requests (10-15 jobs per call), platform rotation, supplemental RSS feeds
- Script Quality → Engineered prompts, multi-stage validation, automatic regeneration (3 attempts)
- Visual Pipeline → Three-step conversion with validation, PDF intermediary ensuring consistency
- Stream Continuity → Priority queue with intelligent replay preventing dead air during failures
- Cost Management → Self-hosted Kokoro TTS, Contabo VPS, free-tier APIs (98% savings)
- Monitoring Visibility → Comprehensive error handlers, email alerts with diagnostics, 85% automated recovery
Technical Stack
- Orchestration: n8n (self-hosted), Contabo VPS ($6/month)
- Backend: FastAPI, Python
- AI: GPT-4o Mini, Kokoro TTS (Hugging Face)
- Processing: FFmpeg, LibreOffice (headless), PowerPoint automation, WebP
- Streaming: OBS Studio, OBS WebSocket, YouTube APIs
- Data Sources: Google Custom Search API, LinkedIn, Indeed, Naukri, Internshala, Foundit, Monster
- Monitoring: Email alerts, health checks, automated recovery
- Storage: PostgreSQL, Docker containerization
Business Impact
Production Scale:
- 3,600 jobs presented monthly (120/day × 30)
- 360 videos produced monthly (12/day × 30)
- 180 hours new content generated monthly
- Fully autonomous 24/7 operation
Operational Excellence:
- 98% automation success rate
- $540-685 monthly savings (98% cost reduction)
- 99.5% uptime with <15 min MTTR
- Zero manual intervention required
Future Enhancements
- Content Intelligence: Trend analysis, personalized streams by job category, A/B testing, real-time adjustment based on viewer metrics
- Advanced Features: Multi-language support, dynamic thumbnails, automated chapters, real-time subtitles, voice cloning
- Infrastructure: GPU acceleration (50-70% faster processing), distributed queue system, CDN integration, multi-stream support
- Monitoring: Predictive failure detection, automated remediation, Slack/Discord integration, performance degradation alerts
Key Takeaways
This project demonstrates production-grade automation orchestrating AI content generation, open-source TTS, media processing, and live streaming into a fully autonomous platform operating at 1/50th the cost of commercial solutions.
Achievement: Built a system handling production workloads (360 videos/month, 99.5% uptime) on a $6/month VPS—proving sophisticated automation doesn't require enterprise budgets when properly architected.
Value: Transforms hours of manual work into automated processing delivering 3,600 job opportunities monthly with 98% success rate and comprehensive monitoring—serving as a blueprint for cost-effective, large-scale content automation.
Scale: 120 jobs/day, 12 videos/day, 6 hours content/day, 360 videos/month
Economics: $10-15/month (98% savings vs. $550-700/month commercial)
Reliability: 99.5% uptime, 98% success rate, <2 min alerts, <15 min MTTR
Status: Production deployment, actively streaming, continuous optimization
DashTro - Headless CMS

System Overview
A headless CMS that lets users define custom data schemas, create named collections based on those schemas, and manage documents within each collection. DashTro separates content structure from content delivery—schemas describe the shape of data, collections namespace instances of that shape, and documents hold the actual content, all served through a clean REST API.
Architecture
Monorepo — Frontend + Backend
The project is split into two independent packages:
- cms-frontend: React 18 + TypeScript SPA (Vite), communicating with the backend over Axios
- cms_backend: Django 5 + Django REST Framework API with PostgreSQL JSONB storage
Schema Builder
User-Defined Data Structures
Users define schemas with typed fields before creating any content:
- Supported types: String, Number, Boolean, NestedDoc, ReferenceDoc
- Schema names are validated as PascalCase; field names are enforced as snake_case — constraints applied at the serializer level so invalid shapes are rejected before reaching the database
- Schemas are stored in the cms_schema JSONB table, making the field definitions themselves queryable data
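The serializer-level naming constraints can be sketched as plain validators (illustrative Python; in DashTro these checks live in DRF serializers, and the function names here are hypothetical):

```python
import re

PASCAL_CASE = re.compile(r"^[A-Z][a-zA-Z0-9]*$")
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def validate_schema(name: str, fields: dict) -> None:
    # Reject bad shapes at the serializer layer, before anything hits JSONB.
    if not PASCAL_CASE.match(name):
        raise ValueError(f"schema name {name!r} must be PascalCase")
    for field in fields:
        if not SNAKE_CASE.match(field):
            raise ValueError(f"field name {field!r} must be snake_case")

# A well-formed schema passes silently.
validate_schema("BlogPost", {"title": "String", "view_count": "Number"})
```

Enforcing naming at the boundary keeps the JSONB contents uniform, which matters when the field definitions themselves are queried as data.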
Collections & Documents
Namespaced Content Management
Collections link a named workspace to a schema, providing isolation between content types:
- Each collection is tied to a schema, constraining which fields its documents can contain
- Documents are created with auto-generated IDs; their forms are dynamically generated at runtime from the linked schema definition—no hardcoded form fields anywhere in the frontend
- All document content is stored in the cms_workspace_data JSONB table, allowing arbitrary schema evolution without database migrations for individual field changes
API Design
RESTful Endpoints Across Three Resource Tiers
- Auth: /api/cms/auth/ — JWT login and token management
- Schema: /api/cms/schema/ (list/create), /api/cms/schema/<id>/ (retrieve/update/delete)
- Collections: /api/cms/collections/ (list/create), /api/cms/collections/<id>/ (update/delete)
- Documents: /api/cms/workspace/<ws>/collection/<col>/ (list/create), /api/cms/workspace/<ws>/collection/<col>/document/<id>/ (retrieve/update/delete)
Data Layer
PostgreSQL JSONB for Schema-Flexible Storage
All four core tables use JSONB columns:
- cms_schema — field definitions per schema
- cms_schema_collections — collection configs linking workspaces to schemas
- cms_workspace_data — document content
- cms_realtime — real-time data channel
postgres_client.py utility module handles all JSONB read/write operations, keeping raw SQL out of view logic and making storage behaviour easy to test in isolation.
Frontend
React 18 + Redux Toolkit + Material-UI 7
The SPA is structured around page-level components and a shared component library:
- Pages: Login, Schema builder, Collection content, Document content, Settings
- 13 reusable components including SchemaComponent, DocumentList, PageForm, and LinkDrawer
- 5 Redux slices (schema, collection, document, schemaPreset, rootPath) providing predictable global state
- Custom hooks — useSchema, useCollection, useDocument, useSchemaMetaData — encapsulate data-fetching logic and keep pages thin
Technical Stack
- Frontend: React 18, TypeScript, Vite, Redux Toolkit, Material-UI 7, React Router 7, Axios, SASS
- Backend: Django 5, Django REST Framework, PostgreSQL (JSONB), JWT auth
Key Takeaways
DashTro demonstrates the power of JSONB-backed dynamic schemas—content structure is data, not code, so new content types require zero backend changes. The pattern of generating serializers and forms at runtime from schema definitions keeps the system genuinely headless: the API contract is driven by user configuration, not hardcoded models.
Schema: User-defined typed fields (String, Number, Boolean, NestedDoc, ReferenceDoc), PascalCase/snake_case validation
Storage: PostgreSQL JSONB — schema-flexible, no migrations for field changes
API: Dynamic DRF serializers generated from schema definitions at request time
Status: Active development
RAG-Powered Email Personalization System

System Overview
An intelligent email automation platform that generates personalized weekly newsletters by extracting relevant information from company documents and tailoring messages to individual recipients at scale. The system eliminates manual email writing, reduces campaign preparation from hours to minutes, and enables a small team to manage mass outreach with comprehensive approval workflows and delivery tracking.
The Problem
Managing weekly customer outreach newsletters presented significant bottlenecks:
- Manual email writing consuming hours per campaign researching company updates from scattered documents
- Personalization at scale requiring individual customization for different recipient segments
- Small team capacity limiting outreach volume and consistency
- Data scattered across PDFs, Word docs, Excel sheets making relevant information hard to extract
- Quality control needed before sending to customer base
Solution Architecture
RAG-Based Intelligence Pipeline
The platform implements Retrieval-Augmented Generation ensuring emails contain accurate, relevant company information rather than AI hallucinations.
Document Knowledge Base
Ingests and processes company documents creating a searchable semantic database:
- Supported formats: PDF, DOCX, Excel (product updates, case studies, announcements, internal reports)
- Vector storage: Supabase pgvector for efficient semantic search
- Embedding generation: Google Gemini text-embedding-004 model
- Chunking strategy: Intelligent document splitting preserving context and relationships
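A sliding-window chunker is one common way to "preserve context and relationships" during splitting. This is a sketch of the general technique, not necessarily the project's exact strategy; the word counts are assumed parameters:

```python
def chunk_document(text: str, max_words: int = 200,
                   overlap: int = 40) -> list[str]:
    # Overlapping windows preserve context that a hard split at chunk
    # boundaries would lose, improving retrieval quality downstream.
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
        start += max_words - overlap
    return chunks
```

Each chunk is then embedded (Gemini text-embedding-004) and stored in pgvector alongside its source-document metadata.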
Personalization Engine
For each recipient, the system:
- Retrieves context from knowledge base based on email topic and recipient profile (industry, past interactions, interests)
- Generates content using Google Gemini with retrieved documents as grounding context
- Personalizes based on recipient data from Excel sheets (name, company, role, previous engagement)
- Validates output for tone consistency, length constraints, and factual accuracy
Workflow & Approval System
n8n Orchestration
The entire process runs through custom n8n workflows providing visual oversight and control.
Email Generation Workflow:
- Recipient import from Excel sheets (names, emails, companies, segments)
- Topic definition specifying email purpose and key messages
- RAG retrieval pulling relevant company information from vector database
- Gemini generation creating personalized drafts for each recipient
- Preview compilation assembling all emails for review
Telegram Approval Integration
Rather than email-based review, the system uses Telegram for mobile-friendly approval.
Approval Flow:
- Draft notification sent to approver via Telegram with campaign summary
- Gmail draft automatically created in approver's inbox for detailed review
- n8n preview panel displays all generated emails with recipient details
- Inline approval buttons in Telegram (Approve All, Review Individual, Reject)
- Single approver reviews and authorizes before mass sending
Average approval time: <30 minutes from generation to authorized send
Gmail Integration & Delivery
Mass Email Distribution
Once approved, the system orchestrates personalized sending via Gmail API.
Gmail Node Features:
- Personalized sending: Individual emails to each recipient (not BCC mass mail)
- Read tracking: Gmail read receipts monitoring email opens
- Follow-up triggers: Automated reminders for unopened emails after 3-5 days
- Rate limiting: Respects Gmail sending limits (500 emails/day) with queue management
- Error handling: Failed sends automatically retry, log issues, alert team
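The quota-aware queue management reduces to splitting a campaign into day-sized batches. A minimal sketch (the function name is hypothetical; the real workflow also interleaves retries and status updates):

```python
from collections import deque

GMAIL_DAILY_LIMIT = 500

def schedule_sends(recipients: list[str],
                   daily_limit: int = GMAIL_DAILY_LIMIT) -> list[list[str]]:
    # Split a campaign into day-sized batches so Gmail's ~500 sends/day
    # limit is never exceeded; large campaigns roll over to following days.
    queue = deque(recipients)
    days = []
    while queue:
        days.append([queue.popleft()
                     for _ in range(min(daily_limit, len(queue)))])
    return days
```

A 1,200-recipient campaign thus becomes three daily batches of 500, 500, and 200, with send status written back to the recipient sheet after each batch.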
Recipient Management: Excel sheets track contact details, send history, open status, and follow-up needs. n8n nodes update sheets post-send with delivery status and engagement metrics. Follow-up workflow automatically identifies unopened emails triggering gentle reminder campaigns.
Engagement Visibility: Real-time dashboard in n8n showing open rates, pending follow-ups, and campaign performance.
Technical Architecture
n8n Workflow Orchestration
Custom nodes coordinate the entire pipeline:
- Excel integration importing recipient data, updating delivery status
- Supabase vector queries retrieving relevant document sections
- Gemini API calls generating personalized content with context
- Gmail operations creating drafts, sending emails, tracking reads
- Telegram webhook handling approval interactions
- Conditional logic routing based on approval status, send quotas, error conditions
Supabase pgvector Implementation
PostgreSQL with pgvector extension provides scalable semantic search:
- Document chunks stored with embeddings (768 dimensions, Gemini text-embedding-004 model)
- Similarity search finding top-k relevant sections for each email topic
- Metadata filtering by document type, date, department, relevance
- RLS policies ensuring secure document access control
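The top-k similarity search with metadata filtering maps onto a single pgvector query. A sketch of the SQL a query builder might produce (table and column names are assumptions; `<=>` is pgvector's cosine-distance operator):

```python
def build_similarity_query(top_k: int = 5, doc_type=None) -> str:
    # pgvector's <=> operator is cosine distance; ordering ascending
    # returns the closest chunks first. Parameters are bound by the
    # database driver, never interpolated into the SQL string.
    where = "WHERE doc_type = %(doc_type)s" if doc_type else ""
    return f"""
        SELECT chunk_text, embedding <=> %(query_vec)s AS distance
        FROM document_chunks
        {where}
        ORDER BY embedding <=> %(query_vec)s
        LIMIT %(top_k)s
    """
```

The query embedding for `%(query_vec)s` comes from the same text-embedding-004 model used at ingestion time, so query and document vectors live in the same space.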
Google Gemini Integration
Dual usage for efficiency:
- Embeddings (text-embedding-004): Convert document chunks and queries to vectors
- Generation (gemini-pro): Create email content with retrieved context as grounding
Key Features
Intelligent Content Creation
- Context-aware emails: System retrieves relevant company updates, product launches, case studies matching recipient industry and interests
- Factual grounding: RAG architecture prevents hallucinations by anchoring generation in actual company documents
- Tone consistency: Maintains professional brand voice across all personalized variations
- Length optimization: Targets ideal newsletter length balancing information density and readability
Approval & Quality Control
- Visual preview: n8n interface displays all generated emails before sending
- Gmail draft review: Approver sees exact email format and content in familiar Gmail interface
- Telegram mobile workflow: Quick approval from anywhere without desktop access
- Edit capability: Approver can modify drafts in Gmail before final authorization
Scalability
- Mass personalization: Generate unique emails for 100+ recipients in <10 minutes
- Queue management: Respects Gmail limits while maximizing throughput
- Error resilience: Failed sends retry automatically without losing data
- Weekly cadence: Supports consistent newsletter schedule with minimal manual effort
Business Impact
Time Efficiency:
- Before: 3-4 hours per weekly newsletter (research updates, write emails, personalize, send)
- After: 30-45 minutes per campaign (define topic, review drafts, approve, automated send)
- Time savings: ~80% reduction in campaign preparation effort
Team Productivity:
- Small team enablement: 1-2 people manage weekly outreach to 100+ customers
- Consistent cadence: Reliable weekly newsletters previously impossible with manual process
- Reduced bottlenecks: Automation eliminates research and writing delays
Communication Quality:
- Improved personalization: Each email tailored to recipient context (vs. one-size-fits-all template)
- Factual accuracy: RAG grounding ensures company information correctness
- Professional consistency: Maintained brand voice across all communications
Technical Challenges & Solutions
- Document Knowledge Accuracy → RAG architecture retrieves actual document sections rather than relying on LLM memory. Gemini generates emails grounded in retrieved text preventing factual errors or outdated information.
- Personalization at Scale → Batch processing generates 100+ unique emails in parallel while maintaining individual context. Excel integration provides recipient data (industry, past interactions) informing personalization strategy.
- Approval Workflow Speed → Telegram integration enables mobile-first approval without desktop email access. Interactive buttons provide instant authorization while Gmail drafts offer detailed review when needed. Average approval time reduced from hours to <30 minutes.
- Gmail Sending Limits → Intelligent queue management respects 500 emails/day limit, distributes large campaigns across multiple days, implements exponential backoff for rate limit errors, and provides clear progress visibility.
- Follow-up Management → Automated read tracking updates Excel sheets with open status. n8n workflow identifies unopened emails after 3-5 days, generates gentle follow-up content, and queues reminder campaigns—eliminating manual tracking burden.
Technical Stack
- Workflow Orchestration: n8n (self-hosted, custom nodes)
- Vector Database: Supabase (PostgreSQL + pgvector extension)
- AI Models: Google Gemini (text-embedding-004, gemini-pro)
- Email Platform: Gmail API (sending, drafts, read tracking)
- Approval Interface: Telegram Bot API (webhooks, interactive buttons)
- Recipient Management: Excel integration (n8n nodes, automated updates)
- Document Processing: PDF parsing, DOCX extraction, Excel reading
- Infrastructure: Contabo VPS (self-hosted n8n instance)
- Deployment: Docker containerization, automated workflows
Operational Metrics
Weekly Newsletter Cadence:
- 100+ personalized emails generated per campaign
- 30-45 minute total campaign time (generation + approval + send)
- <30 minute average approval turnaround
- 80% time reduction vs. manual process
System Performance:
- Email generation: ~5-10 minutes for 100 recipients
- RAG retrieval: <2 seconds per query (vector search + embedding)
- Approval workflow: Mobile-accessible, real-time status updates
- Delivery tracking: Automated read monitoring, follow-up identification
Future Enhancements
- Content Intelligence: A/B testing different subject lines, content structures, CTAs; engagement analysis identifying high-performing topics; sentiment analysis on recipient replies
- Advanced Automation: Multi-language support, dynamic content blocks per recipient, automated scheduling based on recipient timezone, smart follow-up sequences with varying content based on engagement level
- Enhanced Personalization: CRM integration pulling richer recipient context, behavioral triggers (product usage, renewal dates) initiating targeted emails, industry-specific content recommendations
- Analytics Dashboard: Real-time campaign performance metrics, historical trend analysis, recipient segmentation insights, ROI tracking connecting outreach to business outcomes
Key Takeaways
This project demonstrates intelligent automation combining RAG architecture, workflow orchestration, and thoughtful approval processes to solve a real business problem: enabling small teams to maintain personalized, consistent customer communication at scale.
Technical Achievement: RAG implementation ensures emails contain accurate company information grounded in actual documents rather than generic AI-generated content—critical for maintaining professional credibility and brand trust.
Operational Value: 80% time reduction transforming weekly newsletter preparation from 3-4 hour manual effort to 30-45 minute supervised automation—enabling consistent outreach previously impossible with team capacity.
User-Centric Design: Telegram approval workflow recognizes that mobile-accessible, instant authorization matters more than elaborate review interfaces—pragmatic engineering serving actual team workflows rather than technical complexity for its own sake.
The system proves that thoughtfully designed automation doesn't replace human judgment—it amplifies it, handling tedious research and writing while preserving quality control and strategic oversight.
MarketIQ - AI Marketing Tool

Overview
My first hackathon — HackUTA 6 at The University of Texas at Arlington, built alongside teammates Akshay Daundkar, Nachiket Gawali, and Nikhita Kalburgikar. I led the frontend for this AI-powered marketing assistant designed for small businesses that lack a dedicated marketing team. MarketIQ automates industry research, generates personalized marketing flows, schedules content, and maintains consistent brand messaging—all through a conversational AI interface backed by GPT-4.
The Problem
Small businesses face a compounding set of marketing barriers:
- Professional SEO tools and campaign platforms are cost-prohibitive on limited budgets
- Owners juggling multiple roles rarely have time for in-depth strategy or market research
- Personalization at scale—analysing customer data and crafting tailored messages—requires expertise and tooling most small teams don't have
- Maintaining a consistent brand voice across social channels without dedicated staff leads to inconsistent, low-impact campaigns
What It Does
- Industry & competitor research: Automatically gathers insights into customer preferences, competitor strategies, and market trends
- Personalised marketing flows: Generates customised campaign sequences tailored to individual consumer segments
- Post scheduling & campaign automation: Reduces repetitive manual work and ensures consistent content cadence
- Customer behaviour prediction: Identifies optimal times for campaign launches and email sends to maximise visibility
- Brand-consistent content generation: AI-generated copy that maintains a unified voice across all channels with minimal human input
- User persona creation: Builds detailed audience profiles from market data to sharpen targeting
Prompt Engineering
The core technical investment was prompt engineering. Ten domain-specific prompts were designed, iteratively tested, and refined to cover distinct marketing use cases—research, personas, content generation, scheduling, campaign analysis, and more. Careful prompt construction ensured GPT-4 produced reliable, specialised outputs rather than generic responses, which was critical for giving small businesses actionable, trustworthy recommendations.
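The prompt-dispatch idea can be sketched in a few lines. This is a minimal illustration, not MarketIQ's actual code: the template texts, use-case keys, and `build_prompt` helper are all hypothetical, and in the real system the filled prompt would be sent to GPT-4 via the OpenAI API.

```python
# Hypothetical sketch of domain-specific prompt dispatch.
# Each marketing use case gets its own carefully worded template.
PROMPTS = {
    "persona": (
        "You are a senior marketing strategist. Build a detailed customer "
        "persona for a {industry} business named {business}. Cover "
        "demographics, goals, pain points, and preferred channels."
    ),
    "content": (
        "Write three social media posts for {business}, a {industry} "
        "business. Keep a consistent, friendly brand voice."
    ),
}

def build_prompt(use_case: str, business: str, industry: str) -> str:
    """Select the template for a use case and fill in business context."""
    return PROMPTS[use_case].format(business=business, industry=industry)
```

Keeping the business context in the template, rather than relying on conversation history, is one way to make each of the specialised prompts reliable in isolation.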
Technical Stack
- Backend: Python
- AI: GPT-4 (OpenAI API)
- Frontend: Streamlit
- Database: MongoDB (prompt storage, conversation history)
Key Takeaways
MarketIQ demonstrated how prompt engineering is the central lever of quality in LLM-powered products—not just model selection. Iterating on 10 specialised prompts until each one reliably produced expert-level marketing output showed that thoughtful AI design, not raw compute, is what makes a tool genuinely useful. The project also proved that a small hackathon team can build a production-viable AI product in a single weekend when the architecture is kept lean (Python + Streamlit + MongoDB) and the AI integration is purpose-built.
Event: HackUTA 6 · The University of Texas at Arlington (first hackathon)
Role: Frontend
Team: Akshay Daundkar, Nachiket Gawali, Nikhita Kalburgikar
AI: GPT-4, 10 iteratively refined domain-specific prompts
Stack: Python, Streamlit, MongoDB
Recipe Finder

System Overview
A full-stack recipe platform with personalised recommendations, JWT-based authentication, and a research-backed multi-factor scoring engine. Recipe Finder lets users save recipes, track cooking history, and receive recommendations tailored to their habits—while an admin panel provides full content management through env-var-secured credentials.
Recommendation Engine
Multi-Factor Scoring
Recommendations are computed using a composite score built from four independent signals:
- Day-of-week affinity: Identifies which recipes the user historically cooks on the current day of the week, surfacing contextually relevant suggestions
- Frequency weighting: Recipes cooked more often receive a higher base score, reflecting genuine preference
- Recency penalty: Recently cooked recipes are down-ranked to promote variety and prevent repetitive suggestions
- Preference bonuses: Explicit dietary preferences stored in the user's profile boost matching recipes
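The composite score from these four signals can be sketched as follows. The weights and data shapes here are illustrative assumptions (the actual service is Spring Boot/Java and tunes its own coefficients); `score_recipe` and its field names are hypothetical.

```python
from datetime import datetime, timedelta

def score_recipe(recipe, history, prefs, today):
    """Composite score from the four signals: day-of-week affinity,
    frequency, recency penalty, and preference bonus (weights are made up)."""
    cooks = [h for h in history if h["recipe_id"] == recipe["id"]]
    # Day-of-week affinity: past cooks falling on today's weekday
    dow = sum(1 for h in cooks if h["cooked_at"].weekday() == today.weekday())
    # Frequency weighting: more cooks -> higher base score
    freq = len(cooks)
    # Recency penalty: down-rank anything cooked in the last few days
    recent = any(today - h["cooked_at"] < timedelta(days=3) for h in cooks)
    # Preference bonus: dietary tags matching the user's profile
    bonus = len(set(recipe.get("tags", [])) & set(prefs))
    return 2.0 * dow + 1.0 * freq - (3.0 if recent else 0.0) + 1.5 * bonus
```

Ranking the user's recipes by this score, highest first, yields the recommendation list.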
Authentication & Authorisation
Spring Security 6 + JWT
Stateless authentication implemented with Spring Security 6 and signed JWTs:
- Tokens carry a ROLE_USER or ROLE_ADMIN claim, enforced at the method level via @PreAuthorize
- All protected endpoints reject requests missing a valid Bearer token
- Passwords stored as bcrypt hashes—plain-text credentials never persisted
Admin Identity via Environment Variables
Admin credentials are intentionally not stored in the database. The adminLogin() method compares the submitted username and password against values injected at startup via Spring's @Value annotation, sourced from environment variables. On a successful match, it returns a JWT bearing ROLE_ADMIN—no admin row is ever written to or read from the database. This eliminates an entire class of admin-credential exposure through database breaches or accidental ORM queries.
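The same env-var comparison pattern, sketched here in Python rather than the project's Spring/@Value implementation (names are hypothetical; a constant-time comparison is used so the check does not leak information through timing):

```python
import hmac
import os

def admin_login(username: str, password: str) -> bool:
    """Compare submitted credentials against values injected via the
    environment; no admin row ever touches the database."""
    expected_user = os.environ.get("ADMIN_USERNAME", "")
    expected_pass = os.environ.get("ADMIN_PASSWORD", "")
    # hmac.compare_digest runs in constant time for same-length inputs
    return (hmac.compare_digest(username, expected_user)
            and hmac.compare_digest(password, expected_pass))
```

On a successful match, the real service mints a JWT carrying ROLE_ADMIN.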
Core Features
Recipe & User Management
- Ownership-aware search: Recipe queries filter by the authenticated user's ID, ensuring users only interact with their own data unless browsing public recipes
- Cooking history: Each cook event is logged with a timestamp, providing the raw data driving the recommendation engine
- Dietary preferences: Users set and update preferences (vegetarian, vegan, gluten-free, etc.) stored on their profile and applied as scoring bonuses
- Admin panel: Full CRUD for recipes, ingredients, and categories, accessible only with a valid ROLE_ADMIN token
Infrastructure & Deployment
Fully Containerised with Docker Compose
The entire stack—Spring Boot API, React frontend, PostgreSQL database, and nginx reverse proxy—runs as a single Docker Compose project:
- nginx serves the compiled React build and proxies /api requests to the Spring Boot container
- PostgreSQL data is persisted via a named Docker volume
- Environment variables (DB credentials, JWT secret, admin credentials) are supplied at runtime—no secrets baked into images
- JPA/Hibernate handles schema migrations on startup (ddl-auto: update), keeping the database schema in sync with entity classes
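A compose file for this layout might look roughly like the sketch below. Service names, build paths, and variable names are assumptions, not the project's actual file; the point is that every secret arrives via `${...}` substitution at runtime.

```yaml
services:
  api:
    build: ./backend                 # Spring Boot API
    environment:
      - SPRING_DATASOURCE_PASSWORD=${DB_PASSWORD}   # supplied at runtime
      - JWT_SECRET=${JWT_SECRET}
      - ADMIN_USERNAME=${ADMIN_USERNAME}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD}
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume persists data
  nginx:
    build: ./frontend                # compiled React build behind nginx
    ports:
      - "80:80"
volumes:
  pgdata:
```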
Frontend
React 18 + TypeScript + Vite
The client is built with React 18, TypeScript, and Vite for fast iteration and type safety:
- SCSS modules for component-scoped styling with shared design tokens
- Axios with request interceptors attaches the JWT Bearer token to every protected call
- Role-aware routing hides admin routes from regular users client-side (server enforces the same via @PreAuthorize)
Technical Stack
- Backend: Spring Boot 3, Spring Security 6, JWT, JPA/Hibernate, PostgreSQL, bcrypt
- Frontend: React 18, TypeScript, Vite, SCSS, Axios
- Infrastructure: Docker, Docker Compose, nginx
Key Takeaways
Recipe Finder demonstrates building a data-driven personalisation system on a secure, production-ready full-stack foundation. The admin-via-env-vars pattern shows deliberate security design—removing an attack surface rather than hardening it. The multi-factor scoring engine proves that meaningful recommendations don't require ML infrastructure; thoughtful signal composition over relational data is often sufficient.
Auth: Spring Security 6, JWT (ROLE_USER / ROLE_ADMIN), bcrypt, env-var admin credentials
Recommendations: Day-of-week affinity, frequency weighting, recency penalty, preference bonuses
Infra: Docker Compose — Spring Boot, React, PostgreSQL, nginx
Status: Complete
Content extraction using web scraping for recipe dataset

At the forefront of the 'IoT-based Inventory Management System and Recipe Recommendation System' project lies a pivotal sub-component dedicated to data extraction, cleansing, and structuring. This phase is instrumental in creating a robust dataset tailored for recipe recommendation.
Using BeautifulSoup (BS4), a Python HTML-parsing library, together with Selenium, we extracted data from a variety of recipe websites. The resulting dataset, comprising approximately 12,000 recipes spanning diverse cuisines, was stored in our database for seamless access and management.
BS4 played a crucial role in extracting intricate recipe details, including name, preparation and cooking times, servings, ingredient lists, and step-by-step instructions. Additionally, Selenium facilitated the calculation of nutritional values by sending ingredient lists to a designated website, extracting the detailed nutritional information, and storing it in our database.
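The extraction step can be sketched with BeautifulSoup. The CSS selectors and field names below are hypothetical (each target site needs its own selectors), but the shape of the parser matches the fields described above:

```python
from bs4 import BeautifulSoup

def parse_recipe(html: str) -> dict:
    """Pull name, prep time, ingredients, and steps out of a recipe page.
    Selectors are illustrative placeholders, not any real site's markup."""
    soup = BeautifulSoup(html, "html.parser")
    return {
        "name": soup.select_one("h1.recipe-title").get_text(strip=True),
        "prep_time": soup.select_one(".prep-time").get_text(strip=True),
        "ingredients": [li.get_text(strip=True)
                        for li in soup.select("ul.ingredients li")],
        "steps": [li.get_text(strip=True)
                  for li in soup.select("ol.steps li")],
    }
```

Selenium enters where pages render content with JavaScript or require form submission, as with the nutrition lookup described above.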
Subsequently, the collected data underwent rigorous cleaning processes to retain only pertinent information, aligning with project requirements. This refined dataset served as the foundation for our recommendation algorithm, enabling tailored recipe suggestions based on user preferences and dietary needs and available ingredients.
Technologies used:
Web Scraping, Algorithms, Python, Selenium
La Creation - Business Portfolio Website

La Creation stands as a premier destination for modular kitchen and furniture solutions, specializing in the design and installation of modular kitchens to elevate living spaces. Their sleek and innovative solutions are meticulously crafted to enhance both functionality and aesthetics in every home.
This website, a single-page application developed using TypeScript, Angular framework, and seamlessly hosted on Google Firebase, serves as an introduction to La Creation's exceptional services and innovative offerings.
With a curated portfolio, La Creation showcases its unwavering commitment to elevating living spaces through sleek design and expert craftsmanship, inviting visitors to explore and envision the possibilities for their homes.
Technologies used:
TypeScript, Angular, Google Firebase, HTML, SCSS.
Game 2048 - Desktop Tile Merging Puzzle

Overview
A cross-platform desktop implementation of the 2048 tile merging puzzle, built with Electron and React. Tiles with identical values merge when slid in any direction; the objective is to combine tiles until reaching 2048. The game is packaged as a native desktop application with auto-update support, and also served via the portfolio through an nginx reverse proxy.
Architecture
Electron 3-Process Model
The app follows Electron's standard process separation:
- Main process (src/main/) — handles the application lifecycle, window management, and native OS integration
- Preload script (src/preload/) — acts as the bridge between the main process and the renderer, exposing a safe, typed API surface via contextBridge
- Renderer (src/renderer/) — the React + TypeScript UI, running as a standard web app inside the Electron window
Game Logic
OOP Class — Game2048
Core game mechanics are encapsulated in a single TypeScript class that extends GlobalFunctions:
- Manages grid state, score, and best score
- Handles tile spawning in random vacant cells after each move
- Processes input from both keyboard arrow keys and touch swipe gestures
- Communicates with an external API (API_BASE_URL) for persistent best score storage
The GameBoard React component renders the grid as a dynamic boardSize × boardSize layout, with tile values driving CSS class-based styling for each numbered tile.
Build & Distribution
- Built with electron-vite for fast HMR in development and optimised production bundles
- Packaged via electron-builder with targets for --win, --mac, and --linux
- electron-updater provides automatic OTA updates on all platforms
Technical Stack
- Desktop: Electron v35, electron-vite, electron-updater, electron-builder
- UI: React 18, TypeScript, SCSS, react-router-dom
IOT-Based Inventory Management System

The integration of Internet of Things (IoT) technology has revolutionized various industries, and one such application is the development of a sophisticated inventory management system. In the context described, IoT technology was leveraged to create a system for tracking and monitoring available ingredients in real-time, thus significantly enhancing the efficiency of the recipe recommendation process.
Central to this system is the utilization of weight sensors to precisely track the weights of different materials in the inventory. These weight sensors play a pivotal role in ensuring accurate and reliable data collection regarding the quantity of ingredients available. However, to effectively interface these weight sensors with digital systems, we used devices such as the Raspberry Pi and the HX711 module.
The HX711 module serves as a crucial component in this setup, functioning as a 24-bit analog-to-digital converter. Its primary purpose is to convert the analog signals from the weight sensors into digital data that can be further processed by the Raspberry Pi. By doing so, it facilitates the seamless transmission of weight measurements to the Raspberry Pi, enabling subsequent calculations and analysis.
Once the weight sensor is connected to the HX711 module, and subsequently to the Raspberry Pi, a calibration process is initiated. Calibration is imperative to ensure the accuracy and reliability of the weight measurements obtained from the system. Typically, this involves collecting samples of known weights, such as 1 kg of a specific item, to establish a reference point for subsequent measurements.
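The arithmetic behind single-point calibration is simple and can be sketched directly; the function names and numbers below are illustrative, not the project's code:

```python
def calibrate(raw_tare: float, raw_known: float, known_grams: float):
    """Derive an offset and scale factor from a tare reading and one
    reading of a known reference weight (e.g. 1 kg)."""
    scale = (raw_known - raw_tare) / known_grams   # ADC counts per gram
    return raw_tare, scale

def to_grams(raw: float, offset: float, scale: float) -> float:
    """Convert a raw 24-bit HX711 reading into grams."""
    return (raw - offset) / scale
```

With the offset and scale fixed, every subsequent reading maps straight to a weight in grams.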
To determine the weight of any item placed on the sensor accurately, statistical techniques are employed: a kernel density estimator is fitted over the bivariate distribution of candidate calibration parameters (offset and scale value), and the system identifies the region of maximum density, thereby refining the calibration. Density plots illustrate how the data is distributed and where it is most concentrated, while contour lines further elucidate the density of data points, providing valuable insight into the accuracy and reliability of the system's weight measurements.
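Picking the maximum-density candidate can be sketched with SciPy's `gaussian_kde`. This is an assumed reconstruction of the idea, not the project's actual calibration code:

```python
import numpy as np
from scipy.stats import gaussian_kde

def densest_pair(offsets, scales):
    """Fit a bivariate KDE over candidate (offset, scale) pairs and return
    the pair lying in the region of maximum estimated density."""
    data = np.vstack([offsets, scales])   # shape (2, n) as gaussian_kde expects
    kde = gaussian_kde(data)
    densities = kde(data)                 # density evaluated at each candidate
    best = int(np.argmax(densities))
    return offsets[best], scales[best]
```

Outlier calibration samples (a bumped sensor, a misread) land in low-density regions and are effectively ignored.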
With the calibration process completed, the system becomes adept at accurately calculating the weight of any item placed on the sensor. This precision is crucial for the effective management of inventory, as it ensures that the system can provide real-time updates on the availability of ingredients. Consequently, this IoT-enabled inventory management system enhances operational efficiency, facilitates informed decision-making, and ultimately contributes to the seamless execution of the recipe recommendation process.
Recipe Recommendation System

In a landscape saturated with recommender systems, such as those employed by platforms like YouTube and Netflix, our team was inspired to create a similar prototype tailored specifically for suggesting food recipes to users. This system takes into account a variety of factors, including user preferences, past selections, available ingredients, and nutritional considerations, to generate personalized recipe recommendations. Each recipe is meticulously scored based on these parameters, culminating in a curated list of suggestions for the user's culinary exploration.
Our system continuously refines its recommendations through a self-learning algorithm, enabling it to adapt and improve over time. This iterative process ensures that the recommendations provided become increasingly efficient and precise as the system gains more insights into user preferences and behaviors.
Furthermore, our system operates in real-time, seamlessly processing data from sensors to manage the user's inventory. By leveraging sensor technology, the system accurately tracks the availability of ingredients, allowing for dynamic adjustments to the recommended recipes based on the user's current stock.
The recommendation process begins by presenting the user with a comprehensive list of recipes that align with their preferences. Additionally, the system intelligently filters out recipes containing ingredients that the user dislikes or is allergic to, ensuring a personalized and safe browsing experience. Subsequently, the system analyzes the user's current inventory, refining the list to only include recipes that can be prepared with the available ingredients.
In addition to personalized recommendations, our system also provides nutritional information for each recommended recipe. Users can easily access details such as calorie count, macronutrient breakdown, and other relevant nutritional data, empowering them to make informed decisions about their dietary choices.
To further enhance user satisfaction, our system maintains a record of recently prepared meals to prevent repetition. By prioritizing diversity in recommended options and minimizing the recurrence of recently used recipes, we aim to provide users with a rich and varied culinary experience.
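The filtering pipeline described above (exclude dislikes and allergens, restrict to the current inventory, de-prioritise recent meals) can be sketched as follows. Data shapes and the `recommend` helper are hypothetical:

```python
def recommend(recipes, profile, inventory, recent):
    """Filter and order recipes per the pipeline described above."""
    suggestions = []
    for r in recipes:
        ings = set(r["ingredients"])
        # Drop anything containing disliked or allergenic ingredients
        if ings & (profile["dislikes"] | profile["allergies"]):
            continue
        # Keep only recipes the current inventory can cover
        if not ings <= inventory:
            continue
        suggestions.append(r)
    # Push recently prepared meals to the back to encourage variety
    suggestions.sort(key=lambda r: r["name"] in recent)
    return suggestions
```

Scoring within each tier (preferences, nutrition) would then order the survivors, as described above.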
In summary, our recommender system harnesses the power of machine learning and real-time data processing to deliver personalized recipe recommendations that cater to the unique preferences, nutritional needs, and constraints of each user. Through continuous self-improvement, adaptability, and nutritional insights, we strive to elevate the user's culinary journey and foster a deeper appreciation for exploration and discovery in the kitchen.
Technologies used:
Python, Machine Learning Algorithms
Android Application Development using Kivy Python

The Android application served as the central hub for our project, offering users an intuitive interface to access the recipe database, manage their inventory, and receive personalized recommendations. With a focus on seamless integration, our goal was to provide a user-friendly platform that encapsulated all project components effortlessly. Leveraging the Kivy library in Python, we meticulously designed and developed the application to meet the specific requirements of our project, ensuring both functionality and aesthetic appeal.
Utilizing the robust capabilities of Kivy, we crafted a versatile framework for building cross-platform applications, enabling smooth navigation and intuitive interaction. The application, once developed, was packaged into an APK file using Buildozer, streamlining the deployment process and facilitating easy installation on Android devices. Through this streamlined approach, we aimed to enhance convenience and accessibility, empowering users to engage with our project's features seamlessly.
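For packaging, Buildozer is driven by a `buildozer.spec` file; a minimal sketch of the relevant sections might look like the fragment below (the title, package names, and domain are placeholders, not the project's actual values):

```ini
[app]
title = Recipe Recommender
package.name = reciperecommender
package.domain = org.example        ; placeholder domain
source.dir = .
requirements = python3,kivy         ; Kivy is bundled into the APK
orientation = portrait

[buildozer]
log_level = 2
```

Running `buildozer -v android debug` then compiles the Python sources and Kivy runtime into an installable APK.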
Through the Android application, users could effortlessly explore the extensive recipe database, manage their inventory, and receive personalized recommendations tailored to their preferences and dietary requirements. By providing a cohesive and user-friendly interface accessible via Android devices, we endeavored to enrich the user's culinary journey, fostering a deeper appreciation for exploration and discovery in the kitchen.
Technologies used:
Python, Kivy, Linux, Buildozer, Android device
Personal Portfolio Website

Designed and developed a comprehensive portfolio website over the course of a year to showcase professional projects and technical expertise. The project emphasizes seamless data management, performance optimization, and engaging user experience through modern web technologies and cloud integration.
The website architecture was carefully planned using Figma for design mockups and user experience flow, ensuring every visual element reflects professional identity and technical capabilities. Following a mobile-first approach, the design prioritizes responsive layouts and touch-friendly interactions across all device sizes. The project evolved from an initial Flask implementation to a robust Django framework, providing better scalability and maintainable code structure for long-term growth.
The technical implementation leverages Django as the primary backend framework, hosted on Render for reliable cloud deployment. Firebase Firestore serves as the database solution, enabling real-time data synchronization for project updates and contact information management. This cloud-native approach ensures the portfolio remains dynamic and consistently up-to-date across all user interactions.
Development workflow optimization was achieved through Webpack integration, which streamlines both development and production processes by minifying assets and compiling SCSS to CSS and TypeScript to JavaScript. This setup significantly improves website performance through code splitting, lazy loading, and optimized bundle sizes. SEO management is implemented through Django's meta tag system, structured data markup, and URL optimization to enhance search engine visibility and professional discoverability.
A standout feature is the custom Django ORM admin application, developed as a separate project for comprehensive data manipulation and management. This administrative interface provides powerful content management capabilities, allowing for efficient project updates and portfolio maintenance without direct database access.
The website also serves as a hosting platform for various Python AI projects, demonstrating full-stack capabilities and showcasing machine learning implementations in real-world applications. GSAP animations enhance user engagement through smooth, professional transitions and interactive elements that bring the portfolio to life.
Technologies used:
Django, Python, Firebase Firestore, TypeScript, SCSS, HTML, Webpack, GSAP, Figma, Render
AI-Powered GroupMe Chatbot for Residential Management

Built an intelligent messaging system to streamline communication across 11+ residential building GroupMe groups. The system implements approval workflows and AI-powered message enhancement to ensure professional, accurate announcements reach residents consistently while maintaining proper management oversight.
Managing communications for multiple residential buildings was chaotic—manual copy-pasting across groups, grammar mistakes in official announcements, and messages sent without proper approval from leads. The solution features a centralized approval system where all messages go to an admin group first, get enhanced by Perplexity's Sonar AI model for grammar and tone, then broadcast to all building groups simultaneously upon approval.
The technical architecture uses FastAPI with webhook integration for real-time GroupMe message handling, while MongoDB stores message data and manages group relationships. A Next.js frontend provides the admin interface for approval management with real-time updates. Access control ensures only authorized leads can approve and manage broadcasts across building groups.
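The approve-then-broadcast core, stripped of the FastAPI/MongoDB plumbing, can be sketched as a small class. The class name, `enhance`, and `send` callables are hypothetical stand-ins for the Sonar rewrite step and the GroupMe bot-post call:

```python
class ApprovalBroadcaster:
    """Minimal sketch of the approval workflow: drafts are enhanced and
    parked; approval fans the message out to every building group."""

    def __init__(self, building_groups, enhance, send):
        self.building_groups = building_groups  # GroupMe group IDs
        self.enhance = enhance                  # stand-in for Sonar rewrite
        self.send = send                        # stand-in for bot post call
        self.pending = {}
        self._next_id = 0

    def submit(self, text):
        """A lead drafts a message; it is enhanced and parked for approval."""
        self._next_id += 1
        self.pending[self._next_id] = self.enhance(text)
        return self._next_id

    def approve(self, msg_id):
        """On approval, broadcast the enhanced text to all building groups."""
        text = self.pending.pop(msg_id)
        for group_id in self.building_groups:
            self.send(group_id, text)
        return text
```

In the real system, `submit` is triggered by a GroupMe webhook into the admin group and `approve` by the Next.js admin interface.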
The system eliminated hours of manual work, reduced communication errors, and established professional standards for resident announcements. Management now has proper oversight of all communications while residents receive consistent, error-free updates across all building properties.
This project demonstrates full-stack development skills including FastAPI backend development, Next.js/React frontend, MongoDB database design, and real-time webhook integration. It also showcases AI integration through Perplexity Sonar model implementation for natural language processing and text enhancement, along with system architecture capabilities in microservices design, access control, scalable multi-group broadcasting, and production deployment.
Technologies used:
- Python
- FastAPI
- Next.js
- React
- MongoDB
- Webhooks
- GroupMe API
- Perplexity Sonar AI (LLM Integration)
- Natural Language Processing (NLP)
- Real-Time Systems
IOT based Inventory Management System and Recipe Recommendation System

Our team's culminating project during my final year of bachelor's in 2020 focused on revolutionizing the culinary experience for users. The primary objective of this undertaking was to empower users with a comprehensive list of recipes tailored to their preferences, encompassing likes, dislikes, allergies, previous choices, and other relevant parameters. Additionally, the system provided real-time information on the availability of all required ingredients for the selected recipes.
The project was strategically divided into four distinct sub-projects:
- Data Extraction for Recipe Database Creation: This phase involved the extraction of data from various websites to build an extensive and diverse recipe database. The aim was to create a robust foundation for the subsequent stages of the project.
- IOT-Based Inventory Management System: The integration of Internet of Things (IoT) technology was employed to develop a sophisticated inventory management system. This ensured efficient tracking and monitoring of available ingredients in real-time, contributing to the seamless execution of the recipe recommendation process.
- Recipe Recommendation System: A pivotal component of our project, this system utilized advanced algorithms to analyze user preferences, historical data, and other relevant parameters. It generated personalized recipe recommendations, enhancing the overall user experience by providing tailored culinary suggestions.
- Android Application Development: The final sub-project involved the creation of an intuitive Android application. This application served as the user interface, facilitating easy access to the recipe database, inventory information, and personalized recommendations. The goal was to provide a user-friendly platform that seamlessly integrated all components of the project.
Our multifaceted project employed a variety of cutting-edge technologies, both in hardware and software domains, to achieve its objectives.
Hardware Technologies:
- Raspberry Pi: Used as the core component for the IoT-based inventory management system, the Raspberry Pi facilitated seamless integration and communication between the various hardware and software components.
- HX711 and Load Cell: Incorporated into the hardware setup, these components played a crucial role in creating a precise and reliable system for measuring and monitoring ingredient quantities in real-time.
Software Technologies:
- Python: The primary programming language for developing various aspects of the project, Python was employed for data extraction, algorithmic development, and interfacing with the Raspberry Pi.
- Windows and Ubuntu: The project was designed to be versatile, supporting both Windows and Ubuntu operating systems. This ensured accessibility and usability across different platforms.
- Putty: Used for remote access to the Raspberry Pi, Putty facilitated the configuration and monitoring of the IoT-based inventory management system.
- Android: The user interface was developed as an Android application, offering a convenient platform for users to interact with the system, explore recipes, and receive personalized recommendations.
- Buildozer: Utilized in the Android application development phase, Buildozer helped streamline the process of converting Python scripts into Android applications, ensuring a smooth and efficient deployment.
In summary, our comprehensive approach aimed to transform the cooking experience by leveraging technology to provide users with personalized recipe suggestions while ensuring the availability of ingredients. The multi-faceted nature of the project, encompassing data extraction, IoT integration, algorithmic recommendation, and user interface development, reflects our commitment to delivering a holistic solution.
Technical Skills:
- Python
- IoT Systems
- Machine Learning
- Web Scraping (BeautifulSoup, Selenium)
- Kivy Framework
- Raspberry Pi Integration
- Load Cell Sensor Calibration
- Android App Development (Python-based)