Artificial intelligence is shifting from the testing phase to real-world use, but weak infrastructure, poor data quality, and growing compliance issues still stand in the way. Jakub Dunak, principal architect at Ness Digital Engineering, explains why building for trust, not just speed, will shape the future of AI.
In the second quarter of 2025, the world spent $95.3 billion on cloud infrastructure, according to Canalys. A large share of that figure comes from businesses rushing to scale up AI. Hospitals use machine learning to improve diagnostics, banks rely on AI to detect fraud, and e-commerce stores want to enhance online shopping experiences. The demand to innovate keeps growing, but the surge has exposed an enormous flaw: building AI models is not difficult, but ensuring they are reliable, safe, and compliant is far harder. We see these challenges every day. Banking apps can crash halfway through a payment. Shopping websites can go down when too many people try to use them. Weak protections can leave private health data vulnerable. While AI likely represents the future, weak foundations could turn its potential advantages into major setbacks.
Jakub Dunak, principal architect at Ness Digital Engineering, has experienced these issues firsthand. With more than 10 years of experience, he has become so highly sought-after that global industry leaders entrust him with their most critical digital initiatives. He protected Europe's financial systems at Diebold Nixdorf, a multinational corporation with thousands of employees and clients in over 100 countries, guided Fortune 500-level enterprises at CANCOM Slovakia in defining their multi-cloud strategies, and built AI infrastructure at scale for global organizations at Ness Digital Engineering, a global technology innovator with a vast international presence. His extensive experience and high-level certifications, such as AWS Solutions Architect Professional and Microsoft Cybersecurity Architect Expert, therefore reflect not only his strong technical skills but also his ability to operate at the level of the world's most complex and demanding enterprises.
Jakub, at Diebold Nixdorf, you worked on financial systems where downtime didn't just cause inconvenience; it could stop millions of transactions. What specific things did you do there to make sure those systems were strong enough to cope with that risk, and how does that work still influence what you do today?
At Diebold Nixdorf, making systems reliable was a top priority. We managed databases used by major banks and retailers to handle transactions. Even a brief outage could shut down ATMs, stop point-of-sale systems from working, or interrupt international money transfers. Such weaknesses cause more than just financial damage; they break customer trust. To solve that problem, I put disaster recovery protocols in place, which brought recovery times down from hours to just a few minutes. I also introduced automated backup procedures to reduce human error, and I tuned performance so systems could handle heavy loads without breaking down. Along the way, I learned that resilience isn't optional; it forms the backbone of trust. Without it, even advanced AI means nothing. So whenever I work on AI pipelines now, I hold on to that lesson: if a system can't handle failure, you can't rely on it.
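The principle that a system must survive the failure of its parts can be illustrated with a minimal failover sketch. The endpoint names and the health model are invented for illustration; a real deployment would involve replication, health probes, and backoff.

```python
# Hypothetical endpoints; names are illustrative, not from a real deployment.
PRIMARY = "db-eu-primary"
REPLICAS = ["db-eu-replica-1", "db-us-replica-1"]

def query(endpoint: str, sql: str, healthy: set) -> str:
    """Pretend to run a query; raise if the endpoint is down."""
    if endpoint not in healthy:
        raise ConnectionError(f"{endpoint} unreachable")
    return f"{endpoint}: OK"

def resilient_query(sql: str, healthy: set) -> str:
    """Try the primary first, then fail over to replicas in order."""
    for endpoint in [PRIMARY] + REPLICAS:
        try:
            return query(endpoint, sql, healthy)
        except ConnectionError:
            continue  # fail over to the next endpoint
    raise RuntimeError("all endpoints down")

# Simulate an outage of the primary: a replica answers instead.
result = resilient_query("SELECT 1", healthy={"db-eu-replica-1"})
print(result)  # db-eu-replica-1: OK
```

The point of the sketch is the control flow: the caller never sees the primary's outage, only a successful answer from whichever endpoint is still healthy.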
Then, at CANCOM Slovakia, the problem was not reliability but complexity. Businesses wanted the flexibility of multi-cloud, yet what you saw was chaos. How did you manage multi-cloud there?
Multi-cloud looks appealing: with AWS, Azure, and Google Cloud used together, no one is locked into a single provider. But I watched companies get overwhelmed by how complicated it was. Each cloud platform had its own way of handling security, compliance, and billing, and they didn't work well together. Instead, those companies ended up with silos, duplicated risks, and higher costs. At CANCOM, I worked on making multi-cloud setups work better. I created security controls that applied across different platforms. I also aligned compliance rules with one another so audits wouldn't turn into headaches. In addition, I designed systems that distributed workloads based on cost and performance. For instance, sensitive financial data could stay in a GDPR-compliant region in Europe, while non-critical workloads could run in the U.S., where GPUs are more affordable. This brought everything together. The result was consistency: systems were easier to manage, the risk of downtime was minimized, and clients could finally enjoy the flexibility they had been promised. Without this framework, multi-cloud wouldn't help; it would just cause problems.
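The placement rule described here (keep GDPR-sensitive data in Europe, send everything else to the cheapest GPUs) can be sketched as a small routing function. The region names, jurisdictions, and relative costs below are made up for illustration.

```python
# Toy placement rule: each region advertises its jurisdiction and a
# relative GPU cost. Values are invented, not real pricing.
REGIONS = {
    "eu-central": {"jurisdiction": "EU", "gpu_cost": 1.0},
    "us-east":    {"jurisdiction": "US", "gpu_cost": 0.6},
}

def place_workload(sensitive: bool) -> str:
    """GDPR-sensitive data stays in the EU; everything else goes
    to the cheapest eligible region."""
    if sensitive:
        candidates = {name: r for name, r in REGIONS.items()
                      if r["jurisdiction"] == "EU"}
    else:
        candidates = REGIONS
    return min(candidates, key=lambda name: candidates[name]["gpu_cost"])

print(place_workload(sensitive=True))   # eu-central
print(place_workload(sensitive=False))  # us-east
```

Centralising the decision in one function is what makes the multi-cloud setup auditable: the compliance constraint is applied before the cost optimisation, never after.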
Now, at Ness Digital Engineering, you lead projects that help businesses scale AI. However, compliance and speed are often in conflict with each other. How do you design systems that achieve both?
A lot of companies assume compliance is something they can add later, but that is a very risky approach. AI platforms work with sensitive financial transactions, medical records, and personal information, and regulators are always watching. So at Ness, we take a different approach. Instead of adding compliance later, we build it into everything we do from the start. Encryption, access rules, and data residency aren't extras at our company; they are part of our core design. We combine this foundation with automation, using Infrastructure as Code and CI/CD pipelines, so that scaling and deployments happen without human error. This method delivers measurable results. Rollouts that used to take months now finish in weeks. Automation removes unnecessary steps, which cuts infrastructure costs significantly. Best of all, clients know they can handle audits without worrying, because compliance is no longer an afterthought; it is part of how the system works.
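Building compliance into the pipeline rather than bolting it on afterwards is often implemented as a policy gate in CI. A minimal sketch, assuming an invented resource schema and rule names, might look like this:

```python
# Minimal compliance-as-code sketch: a CI step rejects a resource
# definition before deployment if it violates baseline policies.
# The rule names and resource schema are invented for illustration.
POLICIES = {
    "encryption_at_rest": lambda r: r.get("encrypted", False),
    "no_public_access":   lambda r: not r.get("public", False),
    "eu_residency":       lambda r: r.get("region", "").startswith("eu-"),
}

def check(resource: dict) -> list:
    """Return the names of all violated policies (empty list = pass)."""
    return [name for name, rule in POLICIES.items() if not rule(resource)]

bucket = {"encrypted": True, "public": True, "region": "eu-central"}
print(check(bucket))  # ['no_public_access']
```

In practice the same idea appears in tools like Open Policy Agent; the value is that a violating deployment fails the pipeline automatically instead of surfacing in an audit months later.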
You've stressed how important data quality is. What specific strategies or tools have you used to ensure AI systems rely on trustworthy and unbiased data?
Poor data quality can break AI projects. Even with the strongest model, poor data will lead to poor results. At Ness and other places I've worked, I've applied data lineage frameworks to trace where data originates and how it changes. I've also used machine learning tools to detect incomplete or corrupted records and employed bias mitigation techniques such as fairness-focused algorithms and rebalancing approaches. One project involved a bank whose credit scoring data risked reinforcing bias. To fix this, my team expanded the dataset and added fairness checks. That effort helped balance the skew, making the model stronger and more reliable. That's the kind of behind-the-scenes work that makes AI not just smarter, but also fairer and safer.
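One of the rebalancing approaches mentioned here, oversampling the minority class, can be shown in a few lines. This is a naive sketch with made-up data; real projects would use dedicated tooling (e.g. stratified resampling or SMOTE-style methods) and fairness metrics on top.

```python
# Naive oversampling: duplicate minority-class rows until every class
# has as many rows as the largest one. Data below is synthetic.
def oversample(rows: list, label_key: str) -> list:
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_key], []).append(row)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        deficit = target - len(group)
        for i in range(deficit):
            balanced.append(group[i % len(group)])  # duplicate minority rows
    return balanced

data = [{"approved": 1}] * 8 + [{"approved": 0}] * 2  # 8:2 skew
balanced = oversample(data, "approved")
counts = {lbl: sum(1 for r in balanced if r["approved"] == lbl) for lbl in (0, 1)}
print(counts)  # {0: 8, 1: 8}
```

Duplicating rows is the crudest option and can overfit; the point is only that the class skew the model sees is an engineering choice, not a given.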
AI is known for consuming a lot of energy. What methods do you use to make your infrastructure more eco-friendly?
Sustainability is a big deal, both environmentally and financially. Running large AI training jobs eats up a lot of energy, and that racks up huge costs. I focus on improving how we manage workloads. Using Kubernetes, I let systems scale up during high demand and scale down when less is needed. I also distribute workloads to regions where energy is used more efficiently and is readily available. This cuts waste, saves money on electricity, and reduces the carbon footprint. For clients, it's a dual benefit: eco-friendly operations and more predictable costs.
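The scale-up/scale-down behaviour described here follows the same logic as Kubernetes' Horizontal Pod Autoscaler: desired replicas are proportional to observed load relative to a per-replica target. The numbers below are illustrative, not from a real cluster.

```python
import math

def desired_replicas(current: int, cpu_per_replica: float,
                     target_cpu: float = 0.6,
                     min_r: int = 1, max_r: int = 20) -> int:
    """Scale the replica count so average utilisation approaches the
    target (HPA-style ratio: current * observed / target), clamped to
    configured bounds."""
    desired = math.ceil(current * cpu_per_replica / target_cpu)
    return max(min_r, min(max_r, desired))

print(desired_replicas(current=4, cpu_per_replica=0.9))   # peak load: scale out
print(desired_replicas(current=4, cpu_per_replica=0.15))  # quiet hours: scale in
```

Scaling in during quiet hours is where the energy and cost savings come from: idle replicas are released instead of burning electricity at low utilisation.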
By 2030, what practices do you think companies will need to follow when using AI?
I see three key areas. One is zero-trust security, where systems treat every request as hostile until it is verified. Another is compliance-as-code, meaning companies will build regulatory rules into their deployment processes. The third is workload management, making energy use just as important as speed and cost. None of these will be optional. To use AI at large scale by 2030, businesses will have to treat these practices as basic requirements.
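The zero-trust idea, deny by default and verify every request regardless of where it comes from, reduces to a small pattern. The key and token format below are invented for illustration; real systems would use short-lived credentials issued by an identity provider.

```python
import hmac
import hashlib

# Demo-only shared secret; in a real system this would be a managed,
# rotated credential, not a constant in source code.
SECRET = b"demo-key"

def sign(user: str) -> str:
    """Issue an HMAC token for a user (stands in for an identity provider)."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def handle(user: str, token: str) -> str:
    """Deny by default; serve only requests carrying a verified token."""
    if not hmac.compare_digest(token, sign(user)):
        return "403 denied"
    return f"200 hello {user}"

print(handle("alice", sign("alice")))  # 200 hello alice
print(handle("alice", "forged"))       # 403 denied
```

The design choice worth noting is that `handle` never consults the network location of the caller: trust comes only from the verified credential, which is the core of the zero-trust model.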