AI & Experiments

OpenAI integrations, automation workflows, AI-powered apps, and deliberately overengineered side projects. Because constraints breed creativity, and sometimes chaos is the point.

Discipline
  • ai
  • automation
  • experiment
Tenure
2025

Why side projects matter

Most of our time goes into production work. Optimizing memory, shaving off milliseconds, making things invisible. That is the job and we take it seriously. But production work alone can make you narrow. You get good at solving the same kinds of problems the same kinds of ways.

Side projects fix that. They are where we try tech we have no business using, build things nobody asked for, and learn by doing it wrong on purpose. No client timelines, no sprint reviews, no "is this scalable?" conversations. Just building.

These three projects came out of that mindset. Each one started with a question we wanted to answer, and each one taught us something we brought back to our real work.

New Gen Atari

The question was simple: can we make a Tic-Tac-Toe game where the AI opponent actually plays through an LLM?

We built it with Next.js and React. The game board is straightforward. The interesting part is the opponent. Every move goes through an API Route that asks a language model to pick the next cell.
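The core of that route can be sketched in a few lines. This is an illustrative TypeScript model, not our production code: the board is a flat 9-cell array, `buildMovePrompt` constrains the model to structured output, and `parseMove` guards against the kind of garbage replies described below.

```typescript
// Illustrative sketch: human plays X, the LLM opponent plays O.
type Cell = "X" | "O" | null;

// Render the board into a prompt that constrains the model to reply
// with a single digit naming an empty cell.
function buildMovePrompt(board: Cell[]): string {
  const rendered = board.map((c, i) => c ?? String(i)).join(" | ");
  return [
    "You are playing Tic-Tac-Toe as O.",
    `Board (numbers mark empty cells): ${rendered}`,
    "Reply with ONLY the number of the cell you choose.",
  ].join("\n");
}

// Parse defensively: accept only a bare digit naming an empty cell,
// and fall back to the first free cell otherwise.
function parseMove(reply: string, board: Cell[]): number {
  const match = reply.trim().match(/^[0-8]$/);
  if (match) {
    const n = Number(match[0]);
    if (board[n] === null) return n;
  }
  return board.findIndex((c) => c === null);
}
```

Inside the actual API Route, the prompt feeds the chat completion request and the parser sits between the model's reply and the game state, so an invalid answer never reaches the UI.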

The free model detour

Before reaching for OpenAI, we tried to make it work with free Hugging Face models. The results were memorable for all the wrong reasons.

DistilBERT returned "positive." It was doing sentiment analysis on our board state. Wrong task entirely.

GPT-2 gave us "3_4", which was out of bounds on a 3x3 grid. Close, but not useful.

Mistral produced something we can only describe as creative gibberish. It clearly understood it was being asked to do something, it just had no idea what.

What actually worked

We switched to GPT-4o-mini through OpenAI's API and it worked on the first try. The model understood the board state, picked valid moves, and played competently. Not perfectly (it is Tic-Tac-Toe, after all), but well enough to be fun.

The lesson was clear: for structured output where you need the model to follow specific rules, the quality gap between free and paid models is enormous. You can get creative with prompting, but at some point you need a model that reliably follows instructions.

Play it live

Laubali (Telegram News Bot)

Laubali is a Telegram bot that fetches news, summarizes it with AI, and adds commentary. The twist: there is zero traditional code. The entire thing runs on n8n, a visual workflow automation platform.

How it works

The workflow starts when a user sends a message to the Telegram bot with a news category. From there:

  1. n8n receives the message through a Telegram trigger node
  2. It calls NewsAPI to fetch relevant articles
  3. The articles go through an AI summarization step
  4. The bot adds contextual commentary
  5. Results are cached so repeated requests do not burn API credits
  6. The formatted response goes back to Telegram
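The caching in step 5 boils down to a small TTL cache keyed by news category. Here is an illustrative TypeScript model of the idea, not the actual n8n node configuration:

```typescript
// Illustrative TTL cache for AI summaries, keyed by news category.
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch ms
}

class SummaryCache<T> {
  private store = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number) {}

  // Return a cached summary if it has not expired yet.
  get(category: string, now: number = Date.now()): T | undefined {
    const entry = this.store.get(category);
    if (entry && entry.expiresAt > now) return entry.value;
    this.store.delete(category); // drop stale entries lazily
    return undefined;
  }

  set(category: string, value: T, now: number = Date.now()): void {
    this.store.set(category, { value, expiresAt: now + this.ttlMs });
  }
}
```

In n8n itself the same effect comes from persisting summaries in workflow data and checking freshness before the summarization node ever fires, which is what keeps repeated requests from burning API credits.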

The whole pipeline, from category selection to delivered summary, is a visual graph of connected nodes. No functions, no deployment scripts, no package.json.

What we learned

n8n is surprisingly powerful for this kind of thing. The visual workflow made it easy to debug (you can see exactly where data transforms at each step) and the built-in integrations for Telegram and various APIs saved hours of boilerplate.

The limitation is complexity. Once your logic gets deeply conditional or needs real state management, you start fighting the tool instead of using it. For a focused bot like this, though, it was the right call.

AI Coach App

This one was the most ambitious. A full-stack goal-setting app where users describe what they want to achieve in plain language, and AI structures it into actionable SMART goals.

The prompt engineering challenge

The core feature is the AI goal-setting flow. A user types something vague like "I want to get healthier" and the system needs to turn that into specific, measurable, time-bound goals with concrete action steps.

We designed a multi-stage prompt pipeline. The first pass extracts intent and context. The second structures it into the SMART framework (Specific, Measurable, Achievable, Relevant, Time-bound). The third breaks goals into daily and weekly tasks.
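The three passes chain naturally, each consuming the previous one's output. A hedged sketch of the shape, with `call` standing in for the real API request and the prompt text heavily simplified:

```typescript
// Stand-in for the actual chat completion call.
type Model = (prompt: string) => Promise<string>;

// Stage 1: extract intent and context from free-form input.
const intentPrompt = (input: string) =>
  `Extract the user's underlying goal and relevant context from: "${input}"`;

// Stage 2: restructure the intent into the SMART framework.
const smartPrompt = (intent: string) =>
  `Rewrite as a SMART goal (Specific, Measurable, Achievable, Relevant, Time-bound): ${intent}`;

// Stage 3: break the goal into daily and weekly tasks.
const tasksPrompt = (goal: string) =>
  `Break this goal into daily and weekly tasks: ${goal}`;

async function buildGoal(input: string, call: Model): Promise<string> {
  const intent = await call(intentPrompt(input));
  const goal = await call(smartPrompt(intent));
  return call(tasksPrompt(goal));
}
```

Keeping each stage as its own prompt made the pipeline debuggable: when output went wrong, we could see which pass dropped the ball.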

Getting this right took iteration. Early versions were too generic ("exercise more" is not a SMART goal). We found that adding constraints to the prompts and giving the model examples of good vs. bad output made a huge difference. Clear prompts beat bigger models, every time.
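The constraint-plus-examples framing looks roughly like this. This is an illustrative system prompt, not the production one, which is longer and more specific:

```typescript
// Illustrative system prompt showing the constraint + good/bad example
// pattern described above.
const SMART_SYSTEM_PROMPT = [
  "You convert vague intentions into SMART goals.",
  "Rules:",
  "- Every goal must include a measurable quantity and a deadline.",
  "- Never restate the user's wording as the goal.",
  "",
  'Bad:  "exercise more"',
  'Good: "Run 5 km three times a week for the next 8 weeks"',
].join("\n");
```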

The onboarding problem

The GoalCreationScreen became the most complex piece of the app. It handles a multi-stage flow where each step involves asynchronous AI processing, user input validation, and task breakdown logic, all while keeping the UI responsive. On mobile, where users expect instant feedback, managing loading states during LLM calls was a real UX challenge.
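One pattern that helped was modeling the flow as an explicit state machine, so every LLM call has a visible loading state the UI can render. The real screens are Kotlin and Swift; this TypeScript sketch shows the shape of the idea, with names invented for illustration:

```typescript
// Each step of goal creation is either idle, waiting on the model,
// done, or failed; the union makes impossible states unrepresentable.
type StepState =
  | { status: "idle" }
  | { status: "loading"; step: number }
  | { status: "done"; result: string }
  | { status: "error"; message: string };

type StepEvent =
  | { type: "start"; step: number }
  | { type: "success"; result: string }
  | { type: "failure"; message: string };

function reduce(state: StepState, event: StepEvent): StepState {
  switch (event.type) {
    case "start":
      return { status: "loading", step: event.step };
    case "success":
      // Ignore stray results when nothing is in flight.
      return state.status === "loading"
        ? { status: "done", result: event.result }
        : state;
    case "failure":
      return state.status === "loading"
        ? { status: "error", message: event.message }
        : state;
  }
}
```

With this shape, the screen always knows which stage is in flight and can show a spinner or retry prompt immediately, instead of freezing while the LLM responds.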

What we learned from launch

After launch, we ran a detailed product-market fit analysis. The biggest takeaway: when you build on top of LLMs, your product needs to offer something the user genuinely cannot get by talking to ChatGPT directly. Persistence and a native mobile experience were real differentiators, but not enough on their own.

That analysis shaped how we think about AI products now. We focus on where AI adds structural value, not just a conversational layer. The full-stack build (backend, Firebase, prompt pipeline, native mobile on both platforms) gave us a working template we have reused in client conversations since.

CountdownBoxes

CountdownBoxes started as a simple question: why is every countdown app either ugly, limited, or both? We wanted something where you could track everything from product launches to birthdays, share boards with friends, and get notified without opening the app.

What we built

A full-stack countdown tracking app. Users create countdown timers for any event, organize them into boards, and share those boards with others. The app supports real-time countdowns, push notifications via cron jobs, and works offline as a PWA.

The 33-API problem

The most interesting technical challenge was the holiday and event data layer. We integrated 33 third-party APIs to provide pre-built countdowns for holidays, sports events, movie releases, and more. Each API has its own format, rate limits, and quirks. We built a normalization layer that maps all of them into a single internal format, with per-source caching and fallback logic when an API goes down.
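The layer boils down to one adapter per source mapping into a shared internal type, plus fallback so a failing source contributes nothing instead of taking the whole feed down. A hedged sketch with two invented payload shapes standing in for real APIs:

```typescript
// Shared internal format every source maps into.
interface CountdownEvent {
  title: string;
  startsAt: Date;
  source: string;
}

// Invented example payload shapes, standing in for two of the 33 APIs.
interface HolidayApiPayload { name: string; date: string }          // "2025-12-25"
interface SportsApiPayload { event_title: string; kickoff_ms: number }

const fromHolidayApi = (p: HolidayApiPayload): CountdownEvent => ({
  title: p.name,
  startsAt: new Date(`${p.date}T00:00:00Z`),
  source: "holidays",
});

const fromSportsApi = (p: SportsApiPayload): CountdownEvent => ({
  title: p.event_title,
  startsAt: new Date(p.kickoff_ms),
  source: "sports",
});

// Fallback logic: collect from all sources, dropping any that reject.
async function collect(
  sources: Array<() => Promise<CountdownEvent[]>>,
): Promise<CountdownEvent[]> {
  const results = await Promise.allSettled(sources.map((s) => s()));
  return results.flatMap((r) => (r.status === "fulfilled" ? r.value : []));
}
```

Per-source caching sits in front of each adapter in the real system, so a rate-limited API serves stale-but-valid data rather than an error.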

What we learned

Building a PWA that actually works well is harder than it sounds. Service worker caching strategies, background sync, and install prompts all have platform-specific quirks. On iOS, push notifications for PWAs only landed recently, and the experience is still rough around the edges.

The cron-based notification system taught us a lot about scheduling at scale. When you have thousands of countdowns expiring at different times, you need to batch efficiently and handle clock drift gracefully.
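Batching here means grouping countdowns into the cron tick they expire in, so a tick sends one batch per window instead of one job per countdown. A simplified sketch assuming one-minute ticks (names invented for illustration):

```typescript
interface Countdown {
  id: string;
  expiresAt: number; // epoch ms
}

// Group countdowns by the minute they expire in.
function bucketByMinute(countdowns: Countdown[]): Map<number, Countdown[]> {
  const buckets = new Map<number, Countdown[]>();
  for (const c of countdowns) {
    const minute = Math.floor(c.expiresAt / 60_000);
    const batch = buckets.get(minute) ?? [];
    batch.push(c);
    buckets.set(minute, batch);
  }
  return buckets;
}

// Each tick drains every bucket at or before the current minute, so a
// tick delayed by clock drift still picks up whatever it missed.
function dueBatches(
  buckets: Map<number, Countdown[]>,
  nowMs: number,
): Countdown[] {
  const nowMinute = Math.floor(nowMs / 60_000);
  const due: Countdown[] = [];
  for (const [minute, batch] of buckets) {
    if (minute <= nowMinute) {
      due.push(...batch);
      buckets.delete(minute);
    }
  }
  return due;
}
```

Draining everything up to the current minute, rather than matching an exact timestamp, is what makes the scheme tolerant of drift: a late tick catches up instead of silently dropping notifications.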

PostgreSQL with Prisma turned out to be a great combo for this kind of relational data. Boards, users, countdowns, and sharing permissions map naturally to relational tables, and Prisma's type-safe queries caught bugs before they hit production.

Try it live

What we picked up along the way

Tech stack

Next.js, React, TypeScript, OpenAI API, Hugging Face, n8n, Telegram Bot API, NewsAPI, Node.js, Express, Firebase, Cloud Functions, Kotlin, Swift, PostgreSQL, Prisma, Tailwind CSS, PWA, Vercel, Netlify