From Spreadsheets to Systems: How I Built My Own AI-Assisted Stock Analyzer
Over the years, my interest in investing evolved from a hobby into a serious passion. I've spent countless hours refining valuation models and analysis frameworks – most of them living in increasingly complex, manual Excel sheets. While they worked, the process was slow, data collection was tedious, and maintaining them felt like a second job.
In the summer of 2025, I decided to stop fighting the spreadsheets and build a system that could do the heavy lifting for me.
The result is the Stock Analyzer Platform – a full-stack financial ecosystem that combines real-time market data, deep fundamental modeling, and an AI-driven chat interface into a single, cohesive workflow.
Why I Built It
The goal was simple but ambitious: automate the "boring" parts of research so I could focus on high-level decision-making. I needed a tool that could centralize my portfolio, automate DCF (Discounted Cash Flow) modeling along with a range of other calculations and assessments, and challenge my investment theses using AI.
At the same time, I wanted to push my boundaries as a developer. I used this project as a playground to explore AI-assisted development – using tools like Cursor and GitHub Copilot. I wanted to see if I could build a professional-grade microservices system using AI as a pair programmer.
How It Works
The platform is designed to mirror a professional research funnel, moving from high-level monitoring to granular, deep-dive valuation.
- Setting the Stage: Every session starts at the dashboard. It’s my "command center" for checking portfolio performance, reviewing notes from previous days, and seeing which stocks on my watchlists are hitting their target prices. Before starting new research, I’ll often head to the portfolio page to refine my current holdings or update position sizes, ensuring the system’s state matches my actual brokerage.
- The Filter: When I have 10–15 new ideas, I drop the tickers into a chat and ask the system to screen them. The system pulls fresh metrics via the FMP API and applies my proprietary screening logic. Instead of a wall of text, I get a clear summary of which stocks passed my criteria, along with a link to a downloadable Excel report if I want to audit the raw numbers. (A simplified sketch of this screening logic follows this list.)
- The Deep Dive: For the stocks that pass, I shift to qualitative research. I'll ask the chat for earnings summaries, analyst sentiment, and "headwinds vs. tailwinds" – like geopolitical risks or regulatory changes. This is a back-and-forth process that challenges my outlook with real-time context.
- The Heavy Lifting: Once I understand the business context, I trigger the full fundamental analysis. The system runs my entire library of valuation models (DCF scenarios, Piotroski F-Scores, Business Quality, and many more). It returns a comprehensive report with a Buy/Hold/Sell recommendation and a multi-tab Excel workbook with all formulas intact, so I can manually stress-test the growth rates or margins – for example, to see how a "worst-case" scenario affects the valuation. (A stripped-down multi-scenario DCF also appears below.)
- Closing the Loop: The final step is a discussion. I’ll ask the AI to tie the fundamental results back to the real-time risks we discussed earlier. We talk through the investment thesis until I’m ready to make a call. I’ll either discard the idea entirely, add it to my watch/buy-list with a specific price target, or pin the entire thread to revisit it later. I might also ask the system to run a technical analysis to understand price action and momentum. This helps me decide on timing: should I DCA (dollar-cost average) now, or is this a "falling knife" I should avoid for now?
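To make the screening step concrete, here is a minimal sketch of what the Screener Service's logic could look like. The real criteria are proprietary, so the metric names and thresholds below are purely illustrative:

```python
# Illustrative sketch of the screening step. The real criteria are
# proprietary; the metrics and thresholds below are made up.
from dataclasses import dataclass

@dataclass
class Metrics:
    ticker: str
    pe_ratio: float
    debt_to_equity: float
    fcf_yield: float        # free-cash-flow yield, as a fraction
    revenue_growth: float   # trailing growth, as a fraction

def passes_screen(m: Metrics) -> tuple[bool, list[str]]:
    """Apply deterministic screening rules and collect the reasons a stock fails."""
    failures = []
    if m.pe_ratio > 25:
        failures.append(f"P/E too high ({m.pe_ratio:.1f} > 25)")
    if m.debt_to_equity > 1.5:
        failures.append(f"leverage too high (D/E {m.debt_to_equity:.2f})")
    if m.fcf_yield < 0.04:
        failures.append(f"FCF yield below 4% ({m.fcf_yield:.1%})")
    if m.revenue_growth < 0.05:
        failures.append(f"revenue growth below 5% ({m.revenue_growth:.1%})")
    return (not failures, failures)

# Usage: screen a batch of tickers fetched from the data layer.
candidates = [Metrics("AAA", 18.0, 0.8, 0.05, 0.12),
              Metrics("BBB", 40.0, 2.1, 0.02, 0.03)]
for m in candidates:
    ok, reasons = passes_screen(m)
    print(m.ticker, "PASS" if ok else f"FAIL: {'; '.join(reasons)}")
```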
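And for the heavy-lifting step, a simplified multi-scenario DCF. The growth, discount, and terminal-growth figures are placeholder assumptions, not the numbers from my actual models:

```python
# Minimal multi-scenario DCF sketch (Bear/Base/Bull) with a
# Gordon-growth terminal value. All inputs are illustrative.

def dcf_value(fcf: float, growth: float, discount: float,
              terminal_growth: float, years: int = 5) -> float:
    """Discount `years` of growing free cash flow plus a terminal value."""
    value = 0.0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        value += fcf / (1 + discount) ** t
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

scenarios = {
    "Bear": dict(growth=0.02, discount=0.11, terminal_growth=0.01),
    "Base": dict(growth=0.06, discount=0.09, terminal_growth=0.02),
    "Bull": dict(growth=0.10, discount=0.08, terminal_growth=0.025),
}

last_fcf = 1_000.0  # trailing free cash flow, e.g. in $M
for name, params in scenarios.items():
    print(f"{name}: intrinsic value ~ {dcf_value(last_fcf, **params):,.0f}")
```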
Under the Hood
While the interface feels like a simple conversation, there is a complex microservices architecture coordinating every move.
- The Orchestrator: A central Chat Service uses an LLM (OpenAI) with function calling to map natural language to specific technical tools. When I ask for a screen, the LLM doesn't "guess" the numbers; it calls the Screener Service, which executes my deterministic Python logic. (See the function-calling sketch after this list.)
- The Data Pipeline: The Data Service fetches, normalizes, and stores over 100 metrics in PostgreSQL, sourced from the FMP (Financial Modeling Prep) and Yahoo Finance APIs. (A fetch-and-upsert sketch follows below.)
- Specialized Analysis Engines: I translated my proprietary Excel valuation models into a dedicated Fundamentals Service. It calculates WACC, Z-Scores, multi-scenario DCFs (Bear, Base, Bull), and much more, and compiles the results into a comprehensive fundamentals report. For technical analysis, the Technical Service uses OHLCV data to output key technical indicators and trade setups. (A minimal WACC example is included below.)
- Context & Memory: A token-aware system uses rolling summaries and pinned references so the AI doesn't lose the thread of a complex, multi-hour analysis. (The last sketch below shows the idea.)
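To illustrate the function-calling pattern the orchestrator relies on, here is a trimmed-down loop using the OpenAI Python SDK. The run_screener tool, its schema, and the stubbed screener result are stand-ins for the real Screener Service, not its actual API:

```python
# Sketch of the function-calling loop in the Chat Service. The tool name,
# schema, and screener stub are illustrative, not the platform's real API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_screener",
        "description": "Screen tickers against the proprietary criteria.",
        "parameters": {
            "type": "object",
            "properties": {
                "tickers": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["tickers"],
        },
    },
}]

def run_screener(tickers: list[str]) -> dict:
    # In the real system this is a call to the Screener Service, which
    # executes deterministic Python logic - the LLM never computes numbers.
    return {"passed": tickers[:1], "failed": tickers[1:]}

messages = [{"role": "user", "content": "Screen AAPL, MSFT and NVDA for me."}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
call = resp.choices[0].message.tool_calls[0]
if call.function.name == "run_screener":
    result = run_screener(**json.loads(call.function.arguments))
    # Feed the tool result back so the LLM can summarize it for the user.
    messages += [resp.choices[0].message,
                 {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
    final = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)
```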
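The data pipeline follows a classic fetch-normalize-upsert shape. In this sketch the FMP endpoint, field names, and table schema are illustrative; the real service handles 100+ metrics plus a second source (Yahoo Finance):

```python
# Sketch of the Data Service's fetch-normalize-store path. Endpoint, field
# names, and table schema are illustrative assumptions.
import os
import requests
import psycopg2

FMP_URL = "https://financialmodelingprep.com/api/v3/key-metrics-ttm/{symbol}"

def fetch_metrics(symbol: str) -> dict:
    resp = requests.get(FMP_URL.format(symbol=symbol),
                        params={"apikey": os.environ["FMP_API_KEY"]}, timeout=10)
    resp.raise_for_status()
    raw = resp.json()[0]
    # Normalize vendor field names into the internal schema.
    return {"symbol": symbol,
            "pe_ratio": raw.get("peRatioTTM"),
            "roe": raw.get("roeTTM")}

def upsert_metrics(conn, row: dict) -> None:
    with conn.cursor() as cur:
        cur.execute("""
            INSERT INTO metrics (symbol, pe_ratio, roe, updated_at)
            VALUES (%(symbol)s, %(pe_ratio)s, %(roe)s, now())
            ON CONFLICT (symbol) DO UPDATE
            SET pe_ratio = EXCLUDED.pe_ratio, roe = EXCLUDED.roe, updated_at = now()
        """, row)
    conn.commit()

conn = psycopg2.connect(os.environ["DATABASE_URL"])
upsert_metrics(conn, fetch_metrics("AAPL"))
```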
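As a small taste of the Fundamentals Service, here is a minimal WACC calculation using CAPM for the cost of equity. Every input is a placeholder, not a value from my models:

```python
# Minimal WACC sketch using CAPM for the cost of equity.
# All inputs below are illustrative placeholders.

def wacc(equity: float, debt: float, beta: float,
         risk_free: float, market_premium: float,
         cost_of_debt: float, tax_rate: float) -> float:
    """Weighted average cost of capital with an after-tax debt term."""
    total = equity + debt
    cost_of_equity = risk_free + beta * market_premium  # CAPM
    return ((equity / total) * cost_of_equity
            + (debt / total) * cost_of_debt * (1 - tax_rate))

# Example: $80B market cap, $20B debt, beta 1.1, 4% risk-free rate,
# 5% equity risk premium, 5.5% pre-tax cost of debt, 21% tax rate.
print(f"WACC ~ {wacc(80e9, 20e9, 1.1, 0.04, 0.05, 0.055, 0.21):.2%}")
```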
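Finally, the context-management idea in miniature. The token estimate and the summarize() stub are simplifications; the real system uses proper token counting and an LLM-generated rolling summary:

```python
# Sketch of token-aware context management: when the conversation exceeds
# a budget, older turns are folded into a rolling summary while pinned
# references always survive. estimate_tokens() and summarize() are stubs.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: ~4 characters per token

def summarize(summary: str, turn: str) -> str:
    # Stub: the real system would ask the LLM for a compressed summary.
    return (summary + " | " + turn[:80]).strip(" |")

def build_context(turns: list[str], pinned: list[str],
                  summary: str, budget: int = 8000) -> tuple[list[str], str]:
    """Return the messages to send and an updated rolling summary."""
    keep: list[str] = []
    used = sum(estimate_tokens(p) for p in pinned) + estimate_tokens(summary)
    for turn in reversed(turns):              # keep the most recent turns
        cost = estimate_tokens(turn)
        if used + cost > budget:
            summary = summarize(summary, turn)  # fold overflow into the summary
        else:
            keep.insert(0, turn)
            used += cost
    return pinned + ([summary] if summary else []) + keep, summary

msgs, new_summary = build_context(
    turns=["user: screen AAPL", "assistant: AAPL passed", "user: run a DCF"],
    pinned=["pinned: thesis notes for AAPL"],
    summary="")
```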
Reflection
This project became my own masterclass in system design and AI collaboration. On the technical side, I learned to manage inter-service communication, coordinate distributed workflows, avoid tight coupling, and translate proprietary Excel logic into precise, testable code.
The biggest takeaway, however, was learning how to use AI as a pair programmer. I discovered that "vibe coding" – giving broad, vague instructions and letting the AI take the wheel on critical design and architectural choices – quickly produces broken, messy, low-quality code after a few iterations on the codebase. What actually worked was acting as an architect first:
- Using Cursor's Plan mode to outline the service boundaries.
- Breaking features into tiny, narrowly defined components.
- Reviewing and iterating on the AI-generated code piece by piece.
What’s Next?
The system is currently stable on my local server. My next goal is building an Autonomous AI Agent. I want to feed it 50–100 tickers and have it run the entire workflow in the background – qualifying stocks against my criteria and only notifying me when it finds a high-conviction investment opportunity.
See the full project description with images and more info here: Stock Analyzer Platform