<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[CRON STUDIO]]></title><description><![CDATA[We are a Venture Studio that brings innovative products and projects to life.]]></description><link>https://substack.cron.studio</link><image><url>https://substackcdn.com/image/fetch/$s_!3T7h!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d9e422-4317-4b0a-b687-ec46a34c839a_1921x1921.jpeg</url><title>CRON STUDIO</title><link>https://substack.cron.studio</link></image><generator>Substack</generator><lastBuildDate>Mon, 20 Apr 2026 02:22:19 GMT</lastBuildDate><atom:link href="https://substack.cron.studio/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[CRON STUDIO]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[cronstudio@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[cronstudio@substack.com]]></itunes:email><itunes:name><![CDATA[CRON STUDIO]]></itunes:name></itunes:owner><itunes:author><![CDATA[CRON STUDIO]]></itunes:author><googleplay:owner><![CDATA[cronstudio@substack.com]]></googleplay:owner><googleplay:email><![CDATA[cronstudio@substack.com]]></googleplay:email><googleplay:author><![CDATA[CRON STUDIO]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Building a Financial Chatbot Prototype (in 4 Weeks)]]></title><description><![CDATA[When one of our partners challenged us to explore the integration of AI in a real-world financial context, the goal was clear: to build, in record time (4 weeks!), a chatbot capable of providing useful answers about credit 
consolidation.]]></description><link>https://substack.cron.studio/p/building-a-financial-chatbot-prototype</link><guid isPermaLink="false">https://substack.cron.studio/p/building-a-financial-chatbot-prototype</guid><dc:creator><![CDATA[CRON STUDIO]]></dc:creator><pubDate>Wed, 02 Jul 2025 12:32:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!geZv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif" length="0" type="image/gif"/><content:encoded><![CDATA[<p>When one of our partners challenged us to explore the integration of AI in a real-world financial context, the goal was clear: <strong>to build, in record time</strong> (4 weeks!)<strong>, a chatbot capable of providing useful answers about credit consolidation</strong>.</p><p>At <a href="https://www.cron.studio/">CRON STUDIO</a>, we approached the challenge with our usual venture-building mindset, starting from a Minimum Viable Product (MVP) philosophy. Rather than aiming for complexity or completeness from day one, we focused on shipping a solution that was simple, useful, and measurable. This allowed us to validate core assumptions early, deliver tangible value, and leave room for smarter iterations in the future.</p><p>What does this mean in practice? With this prototype, a user can have a natural conversation, with precise calculations, where they:</p><ul><li><p>Ask &#8220;Can I combine my mortgage with my car loan?&#8221; and get an instant estimate of the new monthly payment and potential savings.</p></li><li><p>Say &#8220;How much would I pay if I reduce the term from 30 to 20 years?&#8221; and see a clear recalculation on the spot, without the possibility of hallucinations. 
</p></li><li><p>Request &#8220;How can I move forward with this offer?&#8221; and be redirected to a specific credit advisor who can help close the deal.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!geZv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!geZv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 424w, https://substackcdn.com/image/fetch/$s_!geZv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 848w, https://substackcdn.com/image/fetch/$s_!geZv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 1272w, https://substackcdn.com/image/fetch/$s_!geZv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!geZv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif" width="546" height="780.0975" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1143,&quot;width&quot;:800,&quot;resizeWidth&quot;:546,&quot;bytes&quot;:3287497,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/gif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://substack.cron.studio/i/167251113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!geZv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 424w, https://substackcdn.com/image/fetch/$s_!geZv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 848w, https://substackcdn.com/image/fetch/$s_!geZv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 1272w, https://substackcdn.com/image/fetch/$s_!geZv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a7a9c5c-c1a0-43c9-adbb-4131b2bd269b_800x1143.gif 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In this blog post, we want to share how we built this prototype and some of our key takeaways, so you can get a feel for how we work, or maybe have a shot at building your own prototypes. &#128578;</p><p></p><h2><strong>Building a Functional Prototype in One Month</strong></h2><p>The architecture was designed for <strong>clarity</strong> and <strong>speed</strong>. We used Django to structure and expose the business logic through a clean API, keeping server-side responsibilities predictable and easy to evolve. 
On the frontend, React provided a lightweight interface focused purely on the user experience.</p><p>The chatbot logic was designed to be as simple and predictable as possible: the LLM interpreted user intent, while our backend returned results based on real financial calculations and logged each message-response pair for traceability and improvement. This made the system easy to test and iterate, while laying the groundwork for smarter, more stateful behavior in the future.</p><p></p><h2><strong>Designing a Conversational Experience That Feels Natural</strong></h2><p>When designing the chatbot experience, we had to find a <strong>balance between control and flexibility</strong>. While some chatbots are fully scripted and others completely open-ended, we aimed for something in between: a system where user input could trigger precise calculations, but the assistant still sounded natural and helpful.</p><p>To this end, we decided to use OpenAI&#8217;s function calling. It allowed us to inject precision into the conversation without sacrificing a natural tone. For example, when a user asks, &#8220;Can I consolidate all my loans?&#8221;, the LLM understands the intent and, once it has the necessary information, triggers a backend simulation tailored to the user&#8217;s case without breaking the conversational flow.</p><p>Initially, we explored the idea of routing common questions through a lightweight classification layer that matched frequent inputs with predefined responses. We even considered embedding-based similarity searches. But in the interest of <strong>speed</strong> and <strong>clarity</strong>, we opted to let the LLM handle most requests, using OpenAI&#8217;s function calling to cover our core use cases with precision.</p><p>Even so, we laid the <strong>foundation for something more powerful</strong>: every user message is logged and can be tagged with its intent, creating a dataset for smarter routing in the future. 
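</p><p>To make the function-calling pattern concrete, here is a minimal sketch of the kind of deterministic tool the model can call, assuming the standard annuity formula. The names and fields (<code>simulate_consolidation</code>, <code>balance</code>, <code>payment</code>) are illustrative assumptions, not our production code:</p>

```python
# Hypothetical sketch: the LLM only selects the tool and its arguments;
# the payment math is plain Python, so the figures shown to the user are
# always reproducible.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard annuity formula: P * r / (1 - (1 + r)**-n), with monthly rate r."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def simulate_consolidation(loans: list[dict], annual_rate: float, months: int) -> dict:
    """Merge outstanding balances into one loan and compare monthly payments."""
    total_principal = sum(loan["balance"] for loan in loans)
    current_total = sum(loan["payment"] for loan in loans)
    new_payment = monthly_payment(total_principal, annual_rate, months)
    return {
        "new_monthly_payment": round(new_payment, 2),
        "monthly_savings": round(current_total - new_payment, 2),
    }

# Tool description passed to the chat completions API so the model can
# request the calculation instead of guessing numbers itself.
CONSOLIDATION_TOOL = {
    "type": "function",
    "function": {
        "name": "simulate_consolidation",
        "description": "Estimate the monthly payment after merging loans.",
        "parameters": {
            "type": "object",
            "properties": {
                "loans": {"type": "array", "items": {"type": "object"}},
                "annual_rate": {"type": "number"},
                "months": {"type": "integer"},
            },
            "required": ["loans", "annual_rate", "months"],
        },
    },
}
```

<p>The design point is that the model never does arithmetic: it only picks the tool and fills in its arguments, and the backend returns the numbers.</p><p>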
This means the chatbot could one day respond even faster and more cheaply without compromising on clarity or experience.</p><p></p><h2><strong>Saying No Is Part of Building Well</strong></h2><p>When building something in just four weeks, choosing what not to do is just as important as deciding what to include. We knew early on that the temptation to over-engineer (adding real-time features, building advanced user models, or fine-tuning an LLM) would only slow us down.</p><p>We considered several options that, while interesting, didn&#8217;t make sense at this stage. For example:</p><ul><li><p>WebSockets were unnecessary for a request-response interaction model.</p></li><li><p>Embedding-based search added complexity without clear short-term benefit.</p></li><li><p>LLM fine-tuning wasn&#8217;t viable without a strong dataset of curated examples.</p></li></ul><p><strong>What we said &#8220;no&#8221; to was just as deliberate as what we built.</strong> It allowed us to focus on delivering something real, and left the door open to smarter upgrades in the future.</p><p></p><h2><strong>Internal Testing &amp; Feedback Loops</strong></h2><p>Rather than waiting for a finished version to get feedback, we gave early testers the tools to shape the chatbot in real time. By allowing each message to be rated and commented on, we opened a direct line between users and the product&#8217;s core behavior.</p><p>This wasn&#8217;t just useful; it was transformative. With this data, we could generate feedback reports, adjust the system prompt, add or remove behaviors, and catch misalignments early. We discovered that some things the chatbot <em>could</em> do weren&#8217;t things it <em>should</em> do, and we wouldn&#8217;t have known that without this loop.</p><p>What began as a simple evaluation mechanism ended up being one of the most valuable parts of the build. 
It kept us grounded in real use cases and gave us confidence in the direction of the final product.</p><p></p><h2><strong>Real Constraints, Real Challenges</strong></h2><p>Working with financial data meant being precise both technically and in tone. We had to represent loan conditions, simulate consolidation scenarios, and answer nuanced user questions, all without misleading or oversimplifying. That required careful alignment between frontend phrasing, backend logic, and model prompts.</p><p>We also had to deal with technical uncertainty. Even with tools like OpenAI&#8217;s function calling, responses can vary depending on how the model interprets the context. Ensuring repeatability across conversations meant iterating on prompt design, request formatting, and edge-case handling.</p><p>Choosing the right model was part of the challenge. Since this was an MVP, we prioritized a balance between performance, cost, and future-proofing. We selected a variant of <strong>GPT-4o mini</strong>, OpenAI&#8217;s cost-efficient small model, which provided strong results in function calling, reasoning, and multilingual support, all at a fraction of the cost of larger models. The lesson? Model selection is not just about capability: it&#8217;s about timing, cost-effectiveness, and alignment with the product&#8217;s maturity. GPT-4o mini gave us what we needed: reliable intelligence, fast responses, and the flexibility to upgrade later.</p><p></p><h2><strong>Key Takeaways for Future Projects</strong></h2><p><strong>Value doesn&#8217;t come from adding complexity, but from solving the right problems simply.</strong> This project reminded us that the best prototypes don&#8217;t just demo features &#8211; they prove usefulness.</p><ol><li><p><strong>Start with constraints, not features: </strong>instead of asking &#8220;what can we build?&#8221;, we asked &#8220;what must this solve, and under which limits?&#8221;. 
This flipped the mindset from open-ended ideation to focused problem-solving. The boundaries we drew early on saved time, reduced ambiguity, and helped align everyone around what mattered most.</p></li><li><p><strong>Stabilize the core experience before expanding: </strong>it&#8217;s tempting to add features, polish edge cases, or integrate new APIs. But we prioritized getting one thing right: a smooth, helpful, trustworthy chat experience. Only after that foundation was solid would other additions make sense, and that discipline paid off in clarity and coherence.</p></li><li><p><strong>Say no to distractions: </strong>we deliberately avoided premature complexity: no fine-tuned LLMs, no embedding-based search, no real-time infrastructure. These weren&#8217;t rejected forever, just deferred until the need and timing were right. This kept the system lean and maintainable, while still leaving room to evolve.</p></li><li><p><strong>Treat AI as infrastructure, not magic: </strong>we realized early on that function calling was essential for our use cases. We couldn&#8217;t rely on the model to guess everything, so we shaped the experience with structure, guidance, and fallback logic.</p></li><li><p><strong>Build feedback into the system: </strong>user experience feedback wasn&#8217;t just &#8220;nice to have&#8221; &#8211; it was part of the product. Letting testers rate responses and leave comments created a loop we used to actively improve behavior, messaging, and even prompt design.</p></li><li><p><strong>Think modular from day one: </strong>even in a prototype, we aimed for components that could be reused, replaced, or scaled. That applied to backend logic. We didn&#8217;t over-abstract, but we did leave doors open for smarter versions later. 
This is easier said than done &#8211; but, then again, that&#8217;s why we consider ourselves great engineers.</p></li></ol><p></p><h2><strong>Outcome and Next Steps</strong></h2><p>In only four weeks, we delivered a fully functional prototype that met the core requirements: users could ask natural questions about credit consolidation and receive personalized, mathematically accurate, data-backed answers, all within a clean and usable interface.</p><p>We considered the project a success, not because it was flawless, but because it worked where it mattered most. Users received relevant answers, interactions felt natural, and the underlying system proved stable and adaptable. Early feedback suggested the assistant was already helpful and intuitive: not perfect, but promising.</p><p>More importantly, the foundation we built was solid: a modular system, a growing dataset of real interactions, and a clear sense of where to go next. For our partner, this isn&#8217;t just a prototype; it&#8217;s a credible springboard for future iterations, deeper integrations, and smarter automation.</p><p>We proved that an LLM-powered assistant can offer meaningful guidance on a topic as sensitive and complex as personal finance. And we did it without overpromising, overengineering, or losing sight of what users actually need: clear, honest, and helpful answers.</p><p>Next steps? Version 2.0 &#8211; coming soon.</p><p></p><p><em>Article written by <a href="https://www.linkedin.com/in/tiago-caniceiro-a92707264/">Tiago Caniceiro</a></em><br></p><p>Want to learn more about how we create AI solutions that make a difference? 
<a href="https://www.cron.studio/contacts/">Let&#8217;s talk</a>!</p>]]></content:encoded></item><item><title><![CDATA[AI Meets Sports Predictions at Scale]]></title><description><![CDATA[Benfica and Sporting are neck-and-neck and the title will be decided this Saturday. Could you have predicted that? 
We built an AI engine that does just that!]]></description><link>https://substack.cron.studio/p/ai-meets-sports-predictions-at-scale</link><guid isPermaLink="false">https://substack.cron.studio/p/ai-meets-sports-predictions-at-scale</guid><dc:creator><![CDATA[CRON STUDIO]]></dc:creator><pubDate>Thu, 15 May 2025 10:41:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!upug!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!upug!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!upug!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 424w, https://substackcdn.com/image/fetch/$s_!upug!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 848w, https://substackcdn.com/image/fetch/$s_!upug!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!upug!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!upug!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg" width="724" height="481.9125" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:426,&quot;width&quot;:640,&quot;resizeWidth&quot;:724,&quot;bytes&quot;:72934,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://cronstudio.substack.com/i/163465292?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!upug!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 424w, https://substackcdn.com/image/fetch/$s_!upug!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 848w, https://substackcdn.com/image/fetch/$s_!upug!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!upug!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1d3a996-e3a8-4b9a-b8f9-6689801b0c14_640x426.jpeg 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Project Overview</strong></h2><p><a href="https://www.cron.studio/">CRON STUDIO</a> partnered with a forward-thinking client in the sports prediction space to build an AI-driven service that powers their platform with accurate, data-backed predictions across multiple prediction markets &#8212; 1x2, Goals, Corners, Cards, and more. 
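</p><p>As a toy illustration of how 1x2 (home/draw/away) predictions can be benchmarked on the two metrics used in our results, accuracy and logarithmic loss, here is a self-contained sketch; the probabilities and outcomes are invented for the example:</p>

```python
# Illustrative only: scoring made-up 1x2 predictions with accuracy and
# log loss. Lower log loss rewards well-calibrated probabilities.
import math

def log_loss(y_true: list[int], y_prob: list[list[float]]) -> float:
    """Mean negative log-likelihood of the true outcome (lower is better)."""
    eps = 1e-15  # guard against log(0)
    return -sum(math.log(max(p[y], eps)) for y, p in zip(y_true, y_prob)) / len(y_true)

def accuracy(y_true: list[int], y_prob: list[list[float]]) -> float:
    """Share of matches where the highest-probability outcome actually occurred."""
    hits = sum(1 for y, p in zip(y_true, y_prob) if p.index(max(p)) == y)
    return hits / len(y_true)

# 0 = home win, 1 = draw, 2 = away win (invented data)
outcomes = [0, 2, 1, 0]
probs = [
    [0.55, 0.25, 0.20],
    [0.30, 0.30, 0.40],
    [0.40, 0.35, 0.25],
    [0.50, 0.28, 0.22],
]
```

<p>Log loss matters here because it penalizes overconfident wrong probabilities rather than just wrong picks, which is why it is reported alongside accuracy.</p><p>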
The system is <strong>in production, active, and continuously improving,</strong> and already delivering value to end-users.</p><p>Alongside the prediction models, we created a <strong>statistics engine</strong> that computes detailed match metrics, streaks, and tendencies at scale. By reusing computations and optimising data storage, the system delivers rich, near-real-time insights for thousands of games in parallel.</p><p>From system design and model development to close collaboration with the client&#8217;s engineering team for integration and documentation, we ensured every component worked seamlessly and delivered tangible value.</p><p></p><h2><strong>Key Results</strong></h2><ul><li><p><strong>+3% average improvement over direct competitors</strong> in both accuracy and logarithmic loss for the 1x2 market &#8212; particularly in the major football leagues &#8212; when benchmarked against the leading prediction providers in the sector.</p></li><li><p><strong>200+ competitions supported</strong> and continuously updated.</p></li><li><p><strong>16M+ statistics and trends generated each season.</strong></p><p></p></li></ul><h2><strong>What We Built</strong></h2><ol><li><p><strong>AI-Driven Prediction Models</strong></p><ul><li><p>Developed models tailored to multiple prediction markets, ensuring high-quality predictions.</p></li><li><p>Put a strong focus on explainability, so users not only see predictions but also understand the reasoning behind them.</p></li></ul></li><li><p><strong>Robust Data Pipeline</strong></p><ul><li><p><strong>Ingested data</strong> from various sources, such as match stats, player trends, and historical records.</p></li><li><p><strong>Cleaned and curated the data</strong>, addressing missing values and anomalous patterns to ensure high-quality inputs.</p></li><li><p>Created an efficient, scalable system to process data and train models in parallel for thousands of games.</p></li></ul></li><li><p><strong>Feature 
Engineering</strong></p><ul><li><p>Spent significant time analysing and understanding the data.</p></li><li><p>Designed features that capture <strong>temporal perspectives</strong>, like form streaks and historical trends, to enrich the models.</p></li><li><p>Focused on identifying the <strong>most impactful features</strong> for each market, ensuring optimal model performance.</p></li></ul></li><li><p><strong>Insights and Extended Statistics System</strong></p><ul><li><p><strong>Developed a scalable engine</strong> to generate detailed match statistics, including streaks and tendencies.</p></li><li><p><strong>Optimised computations</strong> by reusing previously computed values and strategically storing data close to processing units, ensuring a near-real-time extended view of each match.</p></li><li><p>Delivered a set of insights that empower users with deeper, context-rich information alongside the sports predictions.</p></li></ul></li><li><p><strong>Seamless Integration with the Client Platform</strong></p><ul><li><p>Delivered detailed, easy-to-follow documentation to guide their team in integrating the AI service into their platform.</p></li><li><p>Provided close support during the integration process, ensuring our service fit seamlessly into their workflow and met all expectations.</p><p></p></li></ul></li></ol><h2><strong>Challenges and Solutions</strong></h2><ol><li><p><strong>Diverse Markets:</strong> each market required tailored models and data perspectives, demanding robust feature engineering and modelling strategies.</p></li><li><p><strong>Scalability: </strong>handling large datasets and supporting predictions for thousands of games required efficient parallel processing and a scalable system.</p></li><li><p><strong>Integration and Usability:</strong> ensuring the service worked seamlessly with the existing client platform required strong collaboration, clear communication, and well-crafted documentation.</p></li></ol><p><strong>How did we solve 
it?</strong></p><ul><li><p>Adopted a <strong>modular design</strong> that allowed for the parallel development and integration of both prediction and statistics systems.</p></li><li><p>Leveraged <strong>cloud technologies</strong> and <strong>scalable infrastructure</strong> to ensure efficient data processing.</p></li><li><p>Optimised computation workflows by <strong>caching results</strong> and strategically placing data to reduce latency.</p></li><li><p><strong>Collaborated closely</strong> with the client&#8217;s team to align our technical solutions with their operational needs.</p><p></p></li></ul><h2><strong>Project Timeline and Key Milestones</strong></h2><ol><li><p><strong>Exploratory Research and Market Analysis (Month 1)</strong></p><ul><li><p>Reviewed existing AI-powered prediction models to understand the landscape.</p></li><li><p>Identified key gaps and opportunities for improvement in current solutions.</p></li><li><p>Defined a clear strategy to differentiate our approach and align with the client&#8217;s vision.</p></li></ul></li><li><p><strong>Proof of Concept Development (Month 2)</strong></p><ul><li><p>Built an initial prototype to test both prediction accuracy and the feasibility of real-time statistics generation.</p></li><li><p>Validated the models and statistics computations with historical data, demonstrating significant improvements.</p></li><li><p>Shared early results with the client to gather feedback and set priorities for subsequent development phases.</p></li></ul></li><li><p><strong>AI Model Optimisation and Statistics Engine (Month 3)</strong></p><ul><li><p>Fine-tuned the AI algorithm to handle edge cases and complex scenarios.</p></li><li><p>Integrated additional data sources to make the model even stronger.</p></li><li><p>Initiated the development of the statistics engine to generate detailed match insights, ensuring its computational feasibility.</p></li></ul></li><li><p><strong>Infrastructure and Scalability Development (Months 
4-5)</strong></p><ul><li><p>Built a scalable infrastructure capable of processing thousands of matches every day, covering both predictions and statistics.</p></li><li><p>Optimised the statistics engine to reuse computations and store data efficiently, ensuring minimal latency.</p></li><li><p>Continued refining both systems, ensuring they could run in parallel.</p></li></ul></li><li><p><strong>Full Launch and Post-Launch Optimisation (Month 6)</strong></p><ul><li><p>Rolled out the complete solution, offering AI-powered predictions and insights at scale.</p></li><li><p>Monitored system performance in a live environment, making iterative adjustments based on user feedback and performance metrics.</p></li><li><p>Collaborated with the client on planning future updates and enhancements based on real-world usage data and evolving market needs.</p><p></p></li></ul></li></ol><h2><strong>Tech We Used</strong></h2><ul><li><p><strong>Python</strong> for core development.</p></li><li><p><strong>Django</strong> for building REST APIs.</p></li><li><p><strong>RQ and RQ Scheduler</strong> for asynchronous processing, enabling efficient task management and scheduling in our data pipeline.</p></li><li><p><strong>AWS Infrastructure</strong> with IaC (<strong>Terraform</strong>).</p></li><li><p><strong>SageMaker</strong> for model training and serverless data processing.</p></li><li><p><strong>Machine Learning Models</strong> for classification and regression across prediction markets.</p></li><li><p><strong>Feature engineering</strong> to capture temporal and contextual insights.</p></li><li><p><strong>Parallel processing frameworks</strong> to manage high volumes of data and computation.</p></li><li><p><strong>Caching and Optimised Storage Solutions</strong> to ensure that statistical computations are reused efficiently and data retrieval is fast.</p><p></p></li></ul><h2><strong>What We Learned</strong></h2><p>Our work on this project highlighted the power of 
collaboration in delivering cutting-edge AI solutions. By combining advanced technology with a user-focused approach, we actively cooperated to create a platform that goes beyond predictions to provide actionable insights.</p><p>It may sound clich&#233;d, but it is undoubtedly true: this project wasn&#8217;t just about solving technical challenges &#8212; it was about building a partnership. With detailed documentation, close support and our agile approach, we ensured <a href="https://www.cron.studio/technology/">our service</a> was easy for their team to integrate, use, and improve upon.<br></p><p><em>Article written by <a href="https://www.linkedin.com/in/jbrazsimoes/">Jo&#227;o Braz Sim&#245;es</a></em><br><br></p><p>Want to learn more about how we create AI solutions that make a difference? <a href="https://www.cron.studio/contacts/">Let&#8217;s talk</a>!</p>]]></content:encoded></item></channel></rss>