AI in SaaS: Retrofit or redesign?
10 min read
Digital-native companies are increasingly turning to AI-powered tools and products to stay competitive. Such tools can boost efficiency and productivity, enhance decision-making, and scale easily to meet the demands of a growing business.
At the heart of this conversation is a fundamental question: should businesses retrofit AI into existing systems, or opt for a complete redesign?
The case for redesign
While retrofitting AI into legacy systems may seem like a pragmatic solution, it has its limitations. Legacy systems are not inherently designed to accommodate the complex processing demands of AI algorithms. Bolting AI onto a legacy system inevitably leads to a platform full of compromises—in performance, reliability, cost of ownership, and user experience.
The shift to AI is no different from previous evolutions in technology that took digital businesses by storm, such as the move from desktop to mobile.
In other words, retrofitting AI into existing architectures may result in patchwork solutions that fail to fully leverage the transformative capabilities of AI.
What it means to be AI-native
An AI-first approach advocates for considering AI integration during the foundational stages of software development. This makes SaaS applications inherently intelligent and adaptive.
Over the next few years, business software will need to become AI-native to stay relevant, just like how desktop applications had to be made mobile-native. Such applications will be:
- Zapping fast: AI will work at the speed of human keystrokes, ensuring that users can seamlessly interact with applications and receive real-time assistance.
- Search-first, deflection-first, recommendation-first: By leveraging advanced search algorithms and predictive analytics, AI systems will proactively anticipate user needs.
- Agentic: As AI agents come into play, machines will constantly execute tasks on behalf of humans: background workflows like clustering, deduplication, and routing; constant reasoning, planning, reconciling, augmenting, summarizing, and attributing… and even speculating (see the sketch after this list).
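To make the agentic pattern concrete, here is a minimal sketch of a background agent loop in Python. The task kinds and the `plan`/`run_agent` helpers are hypothetical names for illustration, not any particular product's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str      # e.g. "deduplicate", "route", "summarize"
    payload: dict

def plan(backlog: list[Task]) -> list[Task]:
    # Hypothetical planner: order background work by a simple priority.
    priority = {"route": 0, "deduplicate": 1, "summarize": 2}
    return sorted(backlog, key=lambda t: priority.get(t.kind, 99))

def run_agent(backlog: list[Task], handlers: dict[str, Callable[[dict], None]]) -> None:
    # One pass of the loop: reason over the backlog, then act on each task.
    for task in plan(backlog):
        handler = handlers.get(task.kind)
        if handler is None:
            continue  # unknown work is skipped, not guessed at
        handler(task.payload)

if __name__ == "__main__":
    backlog = [Task("summarize", {"ticket": "TKT-1"}),
               Task("route", {"ticket": "TKT-2"})]
    run_agent(backlog, {
        "route": lambda p: print("routing", p["ticket"]),
        "summarize": lambda p: print("summarizing", p["ticket"]),
    })
```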
Principles of AI-driven SaaS architecture
1. AI applications require an extremely scalable knowledge graph
When business-critical applications like email were slow and not real-time, we quickly shifted to collaboration apps such as Slack and Teams. But Slack and Teams lacked workflows and search capabilities, leading to:
- Disjointed collaboration: While these platforms were great for human collaboration, they could not do the same for collaboration between AI and humans.
- Siloed data: Conversations and context were spread across multiple systems, making it harder to connect the dots.
For AI, data is the most critical input, driving everything that comes next. The more data a model has to reason over, the more effective the outputs, insights, and workflows will be.
Right now, however, a typical enterprise app talks to 17 different data sources. Retrofitting AI onto these systems results in multiple disconnected “AI projects”: integrating with 17 different systems takes a lot of consulting and one-off work, and the resulting integrations are fragile.
So it’s crucial to bring the data together, canonicalize it, and connect it between the customer, the product, the employees, and activities. With a robust knowledge graph, we bring all the data to one place.
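As a rough illustration of what bringing the data together might look like, here is a minimal in-memory knowledge graph sketch in Python. The node kinds (customer, feature, activity) and relation names are assumptions for illustration, not a real schema.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # node_id -> attributes; edges grouped by relation name
    nodes: dict = field(default_factory=dict)
    edges: dict = field(default_factory=lambda: defaultdict(set))

    def add_node(self, node_id: str, kind: str, **attrs) -> None:
        # Canonicalize: one record per entity, regardless of source system.
        self.nodes.setdefault(node_id, {"kind": kind}).update(attrs)

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges[relation].add((src, dst))

    def neighbors(self, node_id: str, relation: str) -> list[str]:
        return [d for s, d in self.edges[relation] if s == node_id]

if __name__ == "__main__":
    g = KnowledgeGraph()
    g.add_node("cust:acme", "customer", name="Acme Corp")
    g.add_node("feat:search", "feature", owner="search-team")
    g.add_node("ticket:42", "activity", status="open")
    g.link("cust:acme", "reported", "ticket:42")
    g.link("ticket:42", "affects", "feat:search")
    print(g.neighbors("cust:acme", "reported"))  # ['ticket:42']
```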
At the end of the day, the chief customer officer, chief technology officer, or chief product officer is no longer at the mercy of the IT department. They no longer depend on data teams to get their information. Using the analytics and semantic search that come out of the box, they have access to the knowledge they need to make business decisions. We are their AI engineers, data engineers, and visualization experts.
2. AI applications require data pipelines and continuous training
Extract, Transform, Load (ETL) is a fundamental aspect of AI implementation, yet ETL tasks have traditionally been left to the discretion of individual teams. They have to navigate pipelines, figure out what’s failing, and troubleshoot issues as they arise. This approach leads to inefficiencies, inconsistencies, and operational headaches.
ETL cannot be treated as an afterthought or left to ad-hoc solutions. Instead, it must be integrated directly into the AI platform itself, helping organizations streamline data ingestion, transformation, and loading, minimizing the burden on individuals and teams.
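Here is a minimal sketch of what a platform-owned pipeline could look like in Python, with extract, transform, and load expressed as plain, composable stages. The stage names and the simple email-normalization transform are illustrative assumptions.

```python
from typing import Callable, Iterable

Record = dict

def extract(source: Iterable[Record]) -> Iterable[Record]:
    # In a real platform this would pull from a connector, not a list.
    yield from source

def transform(records: Iterable[Record]) -> Iterable[Record]:
    # Example normalization step: canonicalize email addresses.
    for r in records:
        yield {**r, "email": r.get("email", "").strip().lower()}

def load(records: Iterable[Record], sink: list) -> None:
    sink.extend(records)

def run_pipeline(source, sink, stages: list[Callable]) -> None:
    # The platform owns sequencing; teams only supply the stages.
    data = source
    for stage in stages[:-1]:
        data = stage(data)
    stages[-1](data, sink)

if __name__ == "__main__":
    raw = [{"email": " Jane@Example.COM "}]
    warehouse: list[Record] = []
    run_pipeline(raw, warehouse, [extract, transform, load])
    print(warehouse)  # [{'email': 'jane@example.com'}]
```

Because the platform sequences the stages, failure handling and retries would live in `run_pipeline` rather than in each team's one-off scripts.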
3. AI applications require intelligent labeling
Labels are a hallmark of data systems, helping identify key attributes or characteristics. In a Customer Relationship Management (CRM) system, customers may be labeled based on demographics, purchasing behavior, engagement level, or any number of other criteria.
As platforms continue to develop, labeling will also extend beyond simple categorization to encompass a deeper hierarchy, situating data in the context of a business’s customers, products, and people.
One such example is DevRev’s approach to work management. The DevRev platform allows businesses to identify and categorize their products, then break these down into underlying parts including capabilities, features, and APIs. This makes it easy to connect these parts to the people who build them and the customers that use them.
Effective labeling makes it simple to carry out AI-driven tasks such as deflection, deduplication, routing, attribution, and analysis to identify patterns, correlations, and relationships, enabling more precise decision-making and action.
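As a sketch of how hierarchical labeling might be modeled, the snippet below builds a product → capability → feature tree and attributes a report to the team that owns a feature. The part names and the `find_owner_for` helper are hypothetical, not DevRev's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    # A part in a product hierarchy: product -> capability -> feature.
    name: str
    kind: str                      # "product", "capability", or "feature"
    owner: str | None = None       # the team or person who builds it
    customers: set[str] = field(default_factory=set)
    children: list["Part"] = field(default_factory=list)

    def label(self, customer: str) -> None:
        # Labeling situates a customer in the context of the parts they use.
        self.customers.add(customer)

def find_owner_for(part: Part, feature_name: str) -> str | None:
    # Walk the hierarchy to attribute an issue to the right builder.
    if part.name == feature_name:
        return part.owner
    for child in part.children:
        owner = find_owner_for(child, feature_name)
        if owner:
            return owner
    return None

if __name__ == "__main__":
    search = Part("semantic-search", "feature", owner="search-team")
    product = Part("support-app", "product",
                   children=[Part("discovery", "capability", children=[search])])
    search.label("Acme Corp")
    print(find_owner_for(product, "semantic-search"))  # search-team
```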
4. AI requires constant crawling
To effectively interact with and make sense of diverse data sources, AI agents in the new architecture need to constantly crawl both structured and unstructured data islands for clustering, classifying, deduplicating, and more.
In other words, they need to exhibit curiosity about what lies beyond their system. This involves getting information from various sources, such as document folders in Notion and Google Docs, as well as external data repositories.
By crawling these external sources, AI systems can enrich their knowledge graph with additional insights and context, enabling them to derive meaningful insights and help make informed decisions.
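A minimal sketch of such a crawler follows, assuming a hypothetical `Connector` interface; real connectors for Notion or Google Docs would wrap those services' own APIs.

```python
from typing import Iterable, Protocol

class Connector(Protocol):
    # Hypothetical interface; each data island gets its own implementation.
    def fetch_documents(self) -> Iterable[dict]: ...

class InMemoryConnector:
    def __init__(self, docs: list[dict]):
        self._docs = docs
    def fetch_documents(self) -> Iterable[dict]:
        return iter(self._docs)

def crawl(connectors: list[Connector], index: dict) -> None:
    # Pull documents from every island and fold them into one index,
    # keeping the freshest copy of each document.
    for connector in connectors:
        for doc in connector.fetch_documents():
            existing = index.get(doc["id"])
            if existing is None or doc["updated_at"] > existing["updated_at"]:
                index[doc["id"]] = doc

if __name__ == "__main__":
    index: dict = {}
    notion_like = InMemoryConnector([{"id": "doc1", "updated_at": 2, "text": "runbook"}])
    drive_like = InMemoryConnector([{"id": "doc1", "updated_at": 1, "text": "old runbook"}])
    crawl([notion_like, drive_like], index)
    print(index["doc1"]["text"])  # runbook
```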
5. AI applications must feel low latency
Business applications deal with complex processes and large volumes of data, leading to performance issues and data management difficulties as businesses scale. Factors such as legacy architectures, network congestion, hardware limitations, and database bottlenecks can all contribute to latency issues.
Teams building AI-powered SaaS applications must be vigilant when it comes to addressing latency in data fetch, processing, and serving – but not at the expense of data integrity. Users expect fast responses and any delays can negatively impact the user experience and satisfaction. This makes it all the more crucial to implement efficient data handling practices and use best-in-class data processing packages.
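One common pattern for keeping perceived latency low is a read-through cache in front of slow backends. Below is a minimal sketch; the TTL and the `slow_lookup` stand-in are illustrative assumptions, and a real system would also need invalidation to protect data integrity.

```python
import time
from typing import Callable

class ReadThroughCache:
    # Serve hot answers from memory; fall back to the slower fetch
    # only on a miss or after the entry expires.
    def __init__(self, fetch: Callable[[str], str], ttl_seconds: float = 30.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._store: dict[str, tuple[float, str]] = {}

    def get(self, key: str) -> str:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and now - hit[0] < self._ttl:
            return hit[1]
        value = self._fetch(key)          # the expensive path (DB, model, API)
        self._store[key] = (now, value)
        return value

if __name__ == "__main__":
    def slow_lookup(key: str) -> str:
        time.sleep(0.2)                   # stand-in for a slow backend call
        return f"answer for {key}"

    cache = ReadThroughCache(slow_lookup)
    cache.get("faq:pricing")              # slow, populates the cache
    start = time.monotonic()
    cache.get("faq:pricing")              # fast, served from memory
    print(f"cached read took {time.monotonic() - start:.4f}s")
```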
6. AI = attention (custom context)
AI must possess a deep understanding of the enterprise's taxonomy—the hierarchical classification of its data and knowledge. This understanding enables AI to navigate and interpret the context of information effectively, akin to how humans discern the relevance of different resources in a given situation.
Building such contextual awareness into AI requires a concerted effort to integrate domain expertise and organizational knowledge into the underlying systems and platforms. By “left-shifting” this expertise into an operating system or platform, organizations can establish a standardized approach to managing and leveraging enterprise-specific context, thereby ensuring consistency and control in customer interactions.
In essence, attention in AI is not just about processing data but understanding the nuanced relationships and priorities within an enterprise's ecosystem.
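A toy sketch of taxonomy-aware context assembly: snippets whose taxonomy path falls under the part of the business in question are preferred over merely high-scoring ones. The `Snippet` shape and the paths are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    taxonomy_path: str   # e.g. "support-app/discovery/semantic-search"
    score: float

def build_context(snippets: list[Snippet], focus_path: str, budget: int = 3) -> list[str]:
    # "Attention" here means: prefer material scoped to the relevant part of
    # the enterprise taxonomy, then break ties by retrieval score.
    def rank(s: Snippet) -> tuple[int, float]:
        in_scope = s.taxonomy_path.startswith(focus_path)
        return (0 if in_scope else 1, -s.score)
    return [s.text for s in sorted(snippets, key=rank)[:budget]]

if __name__ == "__main__":
    snippets = [
        Snippet("billing FAQ", "support-app/billing", 0.9),
        Snippet("search relevance guide", "support-app/discovery/semantic-search", 0.7),
        Snippet("search API reference", "support-app/discovery", 0.8),
    ]
    print(build_context(snippets, "support-app/discovery", budget=2))
```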
7. AI agents are incomplete without human (agent) handoffs
Humans and AI have entirely different strengths. AI handles repetitive tasks with speed and precision whereas humans excel in their approach to novel tasks, never-before-seen challenges, and ambiguous circumstances. Due to their complementary skill sets, humans and machines are the perfect pairing. Rather than viewing AI as a threat to jobs, we must embrace AI as a collaborative partner that helps humans achieve more.
At DevRev, we believe that AI will enable human agents to do more deep work while leaving the shallow and monotonous work to machines. What this means, however, is that there are crucial moments when AI systems hand off tasks to humans and vice versa. This handoff requires immense design, attention to detail, and user empathy to effectively leverage each party’s respective strengths. AI agents will use a customer-centric, product-led platform for their knowledge graph, workflow and analytics engine, and semantic search.
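Here is a minimal sketch of such a handoff rule, assuming a hypothetical confidence score from the model: routine cases are answered automatically, while uncertain ones are queued for a human with the AI's draft attached as context.

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    text: str
    confidence: float    # the model's self-reported confidence, 0..1

def handle_ticket(ticket: dict, draft: DraftReply, human_queue: list,
                  threshold: float = 0.8) -> str:
    # The handoff rule: AI answers routine cases on its own; anything novel
    # or uncertain goes to a human, with the AI's draft as context.
    if draft.confidence >= threshold:
        return f"AI reply sent: {draft.text}"
    human_queue.append({"ticket": ticket, "ai_draft": draft.text})
    return "Escalated to a human agent with the AI draft attached"

if __name__ == "__main__":
    queue: list = []
    routine = DraftReply("Reset link sent to your email.", confidence=0.93)
    tricky = DraftReply("Possibly a data-loss bug?", confidence=0.41)
    print(handle_ticket({"id": "TKT-1"}, routine, queue))
    print(handle_ticket({"id": "TKT-2"}, tricky, queue))
    print(len(queue), "ticket(s) waiting for a human")
```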
8. AI demands stricter data governance
Data governance is a big issue. In many organizations, data is categorized into distinct tiers based on its level of confidentiality and importance to the business.
Back-office product and engineering data is the crown jewel of intellectual property. Its exposure could give competitors valuable insight into an organization's product development roadmap and potential vulnerabilities. Consequently, product and engineering data should be one of the last bastions to move to the cloud, and it demands stringent security and compliance.
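A sketch of how tiered governance might gate what data reaches a model; the tier names and the clearance check are illustrative assumptions, not a standard.

```python
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical tiers, ordered from least to most sensitive.
    PUBLIC = 0
    INTERNAL = 1
    CROWN_JEWEL = 2   # back-office product and engineering data

def may_send_to_model(record_tier: Tier, model_clearance: Tier) -> bool:
    # A record can only flow to a model (or plugin) cleared for its tier.
    return record_tier <= model_clearance

if __name__ == "__main__":
    third_party_model = Tier.INTERNAL       # not cleared for crown-jewel data
    print(may_send_to_model(Tier.PUBLIC, third_party_model))        # True
    print(may_send_to_model(Tier.CROWN_JEWEL, third_party_model))   # False
```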
9. And finally, AI demands 10x faster software and experimentation speeds
With operating systems, models, surfaces, and communities evolving faster than ever before, AI-native SaaS will require:
- A modern marketplace that securely hosts customers’ code, written in any modern programming language, in a serverless fashion
- APIs and webhooks that meter and bill fine-grained usage rather than throttling customers with arbitrary limits (see the metering sketch after this list)
- Public roadmaps, with users providing comments on surfaces shared with other users and PMs
- A multi-layered design mindset for navigating vastly complex enterprise knowledge graphs, now possible in augmented reality experiences
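As a sketch of the fine-grained metering idea from the list above, the snippet below counts calls per customer and endpoint and bills on actual usage; the price and endpoint names are hypothetical.

```python
from collections import defaultdict

class UsageMeter:
    # Fine-grained metering: record every API call per customer and endpoint,
    # then bill on actual usage instead of throttling at arbitrary limits.
    def __init__(self, price_per_call: float = 0.001):
        self._counts: dict[tuple[str, str], int] = defaultdict(int)
        self._price = price_per_call

    def record(self, customer: str, endpoint: str, calls: int = 1) -> None:
        self._counts[(customer, endpoint)] += calls

    def invoice(self, customer: str) -> float:
        return sum(n for (c, _), n in self._counts.items() if c == customer) * self._price

if __name__ == "__main__":
    meter = UsageMeter()
    meter.record("acme", "/search", 1200)
    meter.record("acme", "/summarize", 300)
    print(f"acme owes ${meter.invoice('acme'):.2f}")   # acme owes $1.50
```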
The inevitability of redesign
As the demand for AI-driven solutions continues to rise, the need to redesign SaaS for AI integration is becoming increasingly apparent.
Retrofitting AI into existing systems may offer short-term gains, but it fails to address the profound user experience and low-latency engineering challenges imposed by older architectures. It makes it difficult to achieve the speed and efficiency required to deliver an exceptional user experience and set an application apart from its competitors.
The best SaaS applications of tomorrow will integrate AI at the foundational stages of software development, embedding it in every aspect of the software architecture.