Pro Formula: AI-Powered Formula Search and LaTeX Generator
Why did I start this project?
In aerospace engineering, mathematical equations are part of daily work in areas such as fluid dynamics and aerodynamics, space vehicle dynamics, and thermal and propulsion systems. Additionally, as academic publishing becomes increasingly digital, LaTeX has become essential for technical documentation. As a student in this field, I encountered two recurring problems. First, it is difficult to remember every concept and equation learned over long periods. Second, converting handwritten equations from paper into LaTeX can be time consuming.
Then I found the Google Gemini API Developer Competition. I thought I could use this opportunity to build potential solutions for these problems and learn new skills.
Tech Stack
- Frontend: Next.js (App Router), React, TypeScript, TailwindCSS
- Backend & AI Integration: Next.js Server Actions, Google Gemini API
- Data: Curated JSON formulas (data/*.json)
- Caching: In-memory LRU cache (24h TTL)
- Hosting: Previously on Railway (shut down to control costs)
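As a rough illustration of the caching layer listed above, here is a minimal in-memory LRU cache with a 24-hour TTL in TypeScript. The names (FormulaCache, maxEntries, ttlMs) are my own for this sketch and are not taken from the actual codebase.

```typescript
// Minimal sketch of an in-memory LRU cache with a 24h TTL.
// Names (FormulaCache, CacheEntry) are illustrative, not from the real code.
type CacheEntry<T> = { value: T; expiresAt: number };

class FormulaCache<T> {
  private store = new Map<string, CacheEntry<T>>();

  constructor(
    private maxEntries = 500,
    private ttlMs = 24 * 60 * 60 * 1000, // 24 hours
  ) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired
      return undefined;
    }
    // Re-insert to mark the entry as most recently used.
    this.store.delete(key);
    this.store.set(key, entry);
    return entry.value;
  }

  set(key: string, value: T): void {
    if (this.store.size >= this.maxEntries) {
      // Map iteration order is insertion order, so the first key is the least recently used.
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```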
System Architecture (v2)

System architecture for Pro Formula v2
This refactor focuses on a single direct-search pipeline: curated lookup first, AI fallback second, and strict LaTeX validation before rendering. The steps are listed below, followed by a rough code sketch of the flow.
- Client submits query + category.
- Server action checks response cache.
- Curated JSON index lookup (fast path).
- If not found, call Gemini.
- Sanitize LaTeX + validate brace balance.
- Cache result and return to client.
- Client renders with KaTeX; copy adds $$...$$ only at clipboard time.
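To make the pipeline concrete, here is a hedged TypeScript sketch of how the server action might tie these steps together. The helper names (cache, lookupCuratedIndex, askGemini, sanitizeLatex, validateBraces) and the module they are imported from are placeholders I introduce for illustration, not the real names in the project.

```typescript
"use server";

// Hypothetical helpers assumed for this sketch; real names and signatures differ.
import {
  cache,
  lookupCuratedIndex,
  askGemini,
  sanitizeLatex,
  validateBraces,
} from "./search-helpers";

interface FormulaResult {
  name: string;
  description: string;
  latex: string; // delimiter-free; $$...$$ is added only at copy time on the client
}

export async function searchFormula(
  query: string,
  category: string,
): Promise<FormulaResult | { error: string }> {
  const cacheKey = `${category}:${query.trim().toLowerCase()}`;

  // 1. Check the response cache (in-memory LRU with 24h TTL).
  const cached = cache.get(cacheKey);
  if (cached) return cached;

  // 2. Fast path: curated JSON index lookup.
  let result = lookupCuratedIndex(query, category);

  // 3. Fallback: ask Gemini if nothing curated matched.
  if (!result) {
    result = await askGemini(query, category);
  }
  if (!result) return { error: "No formula found." };

  // 4. Sanitize LaTeX and validate brace balance before it can reach the UI.
  const latex = sanitizeLatex(result.latex);
  if (!validateBraces(latex)) {
    return { error: "Generated LaTeX failed validation." };
  }

  // 5. Cache the result and return it to the client.
  const safeResult = { ...result, latex };
  cache.set(cacheKey, safeResult);
  return safeResult;
}
```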
Core Features (v2)

Home page for users to search for formulas (minimalist UI/UX).
When I designed the system architecture, I had three considerations:
- Some users already know the exact formula name and just need clean LaTeX.
- Others only know the category and need a fast way to discover results.
- The AI fallback should be reliable and safe to render.
Smart Search — When you know what you’re looking for

Search page where users choose a search option based on their needs (latest UI/UX).
The most straightforward feature I built was direct search. If you know you need the “Navier-Stokes equation” or the “Fourier transform,” you just type it in and instantly get the formula, a description, and the LaTeX code. No need to dig through hardcover textbooks or remember exact mathematical notation.
Reliability & Rendering
The most error-prone part of the system is LaTeX formatting. Gemini can return valid math that still breaks KaTeX due to delimiter or brace issues. The refactor now:
- Normalizes and sanitizes LaTeX on the server.
- Validates balanced braces before returning results.
- Keeps the render string delimiter-free and only adds $$...$$ at copy time.
This reduces render failures and avoids showing raw LaTeX in the UI.
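Here is a sketch of what the sanitation and brace-check step can look like. The function names (sanitizeLatex, hasBalancedBraces) and the exact rules are my own assumptions; the project's actual implementation may differ.

```typescript
// Illustrative sanitation + validation pass; not the project's exact code.
export function sanitizeLatex(raw: string): string {
  return raw
    .trim()
    // Strip any delimiters the model may have added; the client adds $$...$$ only on copy.
    .replace(/^\$\$?|\$\$?$/g, "")
    .replace(/^\\\[|\\\]$/g, "")
    .trim();
}

export function hasBalancedBraces(latex: string): boolean {
  let depth = 0;
  for (let i = 0; i < latex.length; i++) {
    const ch = latex[i];
    // Skip escaped braces such as \{ and \}.
    if (ch === "\\") {
      i++;
      continue;
    }
    if (ch === "{") depth++;
    if (ch === "}") depth--;
    if (depth < 0) return false; // closing brace appeared before any opening one
  }
  return depth === 0;
}
```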
Archived (v1) Features
These were part of the original build for the Gemini competition and are preserved in the demo video, but are no longer active in the refactor.
Multi-step Search — When you’re not sure exactly what you need
Sometimes you know you need something related to “fluid dynamics” but can’t remember whether it’s Bernoulli’s equation or the continuity equation. Instead of forcing users to type perfect prompts as in ChatGPT, I built a guided search: browse by field (physics, engineering, math), narrow down by category, and finally find exactly what you’re looking for.
The “Convert” Feature
Originally, I had a feature that converted handwritten equations into LaTeX code: users could upload screenshots or photos of their handwritten notes, and it would automatically convert them to full LaTeX for digital notes. This feature was designed specifically for people who aren’t familiar with modern AI chat interfaces or who prefer taking handwritten notes.
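For context, the conversion step can be sketched roughly as below using the @google/generative-ai Node SDK. This is a reconstruction under assumptions; the model name, prompt wording, and function names are illustrative, not the archived code itself.

```typescript
// Rough sketch of handwritten-notes-to-LaTeX conversion via Gemini.
// Model name, prompt, and helper names are assumptions, not the original code.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { readFile } from "node:fs/promises";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

export async function convertImageToLatex(imagePath: string): Promise<string> {
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

  // Read the uploaded screenshot/photo and pass it inline as base64 data.
  const data = (await readFile(imagePath)).toString("base64");

  const result = await model.generateContent([
    "Transcribe the handwritten equations in this image into LaTeX. " +
      "Return only the LaTeX code, without $$ delimiters.",
    { inlineData: { data, mimeType: "image/png" } },
  ]);

  return result.response.text().trim();
}
```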
Check the demo video below to see the original UI/UX from the competition version.
Under-three-minute video demonstration for the competition submission.
Challenges
- LaTeX correctness: AI outputs often look right but fail KaTeX rendering due to delimiter or brace issues. I added server-side sanitation and brace validation to prevent invalid formulas from reaching the UI.
- Infrastructure limits: Gemini responses sometimes exceeded serverless time limits. I migrated to Railway for longer execution windows, then later shut it down to save money.
Things I Learned
Unfortunately, my project wasn’t selected as a finalist, but I would like to share some of what I learned.
- I should have built a more robust pipeline to filter and validate user inputs instead of relying on the AI for every request, which resulted in unnecessary API costs. Implementing rate limiting and input validation would have been more professional and cost-effective. Relying purely on AI outputs also wasn’t ideal, since responses can vary slightly from request to request, making the system less predictable.
- Users don’t care how fancy your AI integration is if the app breaks when something goes wrong. Building robust error handling and retry mechanisms turned out to be just as important as the core search functionality.
- I focused too much on academic users rather than general users. Also, if I had known that the Gemini API competition allowed other AI models’ APIs, as one of the winners’ projects did, I could have improved my results instead of relying only on the Gemini API.