The week on AI – December 29, 2024

The opinions are only mine.

OpenAI releases new models: o3 and o3 mini

OpenAI’s new model o3, which will undergo public safety testing over the coming weeks, seems poised to break all benchmarks. In the Software Engineering (SWE-bench Verified) category, it scores 71.7, compared to o1’s 48.9. In Competition Code (Codeforces), o3 achieves a rating of 2727, while o1 scores 1891. The results in mathematics are even more impressive: in Competition Math (AIME 2024), o3 scores 96.7, compared to o1’s 83.3. Furthermore, for PhD-level Science Questions (GPQA Diamond), o3 receives a score of 87.7, whereas o1 scores 78.0. Watch the full introduction here. With o1 pre-released in September 2024 and o3 following in December 2024, reasoning models could address the plateau that LLMs have been hitting recently. You might be wondering why there is no o2 model: it’s because of potential copyright and trademark conflicts with the telecom operator O2.

Is Colossus really up and running with 100,000 Nvidia H100 chips?

xAI is in the process of building the largest AI supercomputer, equipped with 100,000 Nvidia H100 chips. Elon Musk claims that it is already operational at full capacity. However, the available information suggests that he may be exaggerating: the data center currently lacks the necessary power capacity from the grid, and connecting all these GPUs so that they function as a “single unit” is not easy with today’s networking technology. Some AI providers, such as OpenAI, have started raising concerns that xAI might have access to more GPU capacity than they do. Read. Meanwhile, xAI just raised another USD 6 billion, giving it a valuation of USD 35-40 billion. Read

Other news

Google has released its Gemini 2.0 Flash Thinking model, ranked #1 in reasoning capabilities (note: will that still be true when I publish this?) and free to use. Google’s AI Studio allows users to experiment with Gemini prompts using operators and code. DeepMind, which is part of Google, has also released a new benchmark for assessing the factuality of large language models. Meta is planning to integrate its video generator into Instagram early next year, enabling users to create personalized videos, and has published Apollo, a new family of models that can understand and explain videos of up to one hour in length. OpenAI now offers 15 minutes of free phone calls to ChatGPT per month. Elon Musk said that “Grok 3.0 will be the most powerful AI in the world,” and Peter Diamandis predicts that it will reach an IQ exceeding 140 in 2025.

Other readings

> OpenAI has an edge over Google in winning publishers’ business, read
> Data centers are consuming so much energy in the US that they may be distorting the normal flow of electricity for millions of Americans, read
> Nvidia’s Christmas presents, read

The week on AI – December 22, 2024

The new Nvidia Blackwell chip appears to be encountering ongoing challenges. Following design flaws that delayed its release, the chip is now facing overheating issues, making the servers less reliable and reducing their performance. Nvidia has asked its suppliers to modify the design of the 72-chip racks multiple times, causing anxiety among customers about potential further delays. And delays may worsen because large cloud providers need to customize the racks to fit into their vast cloud data centers. Nvidia seems to be facing the same challenges with the smaller 36-chip racks. In the meantime, customers have decided to buy more Hopper chips.

Nvidia becoming a cloud and AI software provider

Nvidia has been quietly building its own cloud and AI software business (Nvidia AI Enterprise) and is already close to generating USD 2 billion in revenues annually. This is not surprising when we know that all major cloud providers (e.g., Microsoft, AWS, Google) are developing their own AI chips to become less dependent on Nvidia. The AI Enterprise suite includes all the necessary tools and frameworks to accelerate AI development and deployment, including but not limited to PyTorch and TensorFlow for deep learning, NVIDIA RAPIDS for data science, TAO for model optimization, industry-specific solutions, NVIDIA RIVA for speech AI and translation, and much more. But don’t be mistaken: Nvidia is still far behind the major cloud providers and will continue to operate Nvidia DGX, its AI supercomputer, on the infrastructure of its competitors. Does Nvidia have an edge over other big tech firms due to its proximity to AI hardware? Some believe so. Nvidia still has a long way to go before becoming a cloud and AI software provider, but it definitely has the means to succeed, and that could become another major revenue stream.

Apple moving into AI chips with Broadcom

Apple is working with Broadcom to develop its own AI chips for servers, aiming for mass production by 2026. These chips are expected to be used internally rather than entering the consumer market, highlighting Apple’s effort to reduce reliance on Nvidia and other competitors. This trend mirrors a broader industry shift, as many tech companies seek to create custom AI processors to cut their dependence on Nvidia. However, designing AI chips is a complex undertaking, and most firms continue to rely heavily on Nvidia, with Google being a notable exception. In most cases, tech companies collaborate with chip makers to leverage their intellectual property, design services, and manufacturing capabilities. The deal between Apple and Broadcom appears to be different from other deals: it seems Apple is still managing chip production with TSMC itself. Read

Other readings

> A look at why the world’s powers are locked in a battle over computer chips. How will Europe continue to compete against China from an investment perspective? read
> Broadcom chief Hock Tan says AI spending frenzy to continue until end of decade, read
> Perplexity’s value triples to USD 9 billion in latest funding round for AI search engine, read; read more about Perplexity here

The week on AI – November 17, 2024

Are LLMs reaching a plateau?

The reasoning capabilities of LLMs may be reaching a plateau, suggesting that the scaling laws might be hitting a limit. Scaling laws, which are based on observations and are not proper laws (like Moore’s Law), describe how machine learning models improve as a function of resource allocation, such as compute power, dataset size, or model parameters. Reports suggest that OpenAI’s upcoming model, Orion, is showing only modest improvements over GPT-4, falling short of the significant leaps seen in earlier model iterations. The industry is beginning to exhaust its data for training LLMs, and legal disputes over copyright are escalating. The use of synthetic data generated by AI presents its own set of challenges. In addition, computing power is not limitless, even in the cloud, which forces hard decisions on LLM developers like OpenAI. The industry is working to overcome these challenges by developing new training approaches that align more closely with human thinking; this approach has already been used in the development of OpenAI’s o1 model.
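
For readers who want to picture what a scaling law looks like, here is a purely illustrative sketch using the widely cited Chinchilla fit (Hoffmann et al., 2022); the exact constants apply to that study’s models only, but the shape, a power law with rapidly diminishing returns, is what the plateau debate is about.

```python
# Illustrative only: a generic "Chinchilla-style" scaling law (Hoffmann et al., 2022).
# Loss falls as a power law in model parameters N and training tokens D; the constants
# below are the published fits and would differ for other model families.
def estimated_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7        # irreducible loss and fitted coefficients
    alpha, beta = 0.34, 0.28            # diminishing returns on parameters and data
    return E + A / n_params**alpha + B / n_tokens**beta

print(estimated_loss(70e9, 1.4e12))     # ~1.94
print(estimated_loss(140e9, 2.8e12))    # ~1.89 -- doubling both buys a smaller gain
```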

Google DeepMind has a new way to look inside AI models

As previously discussed, we currently do not fully understand how AI operates. Google DeepMind has taken on this challenge by introducing Gemma Scope, a collection of open, sparse autoencoders (SAEs) aimed at providing insights into the internal workings of language models. This research falls under the category of mechanistic interpretability. To better control AI, we will need to further refine our approaches, balancing the need to reduce or eliminate undesirable behaviors (like promoting violence) without compromising the model’s overall knowledge. Additionally, removing undesirable knowledge is a complex task, particularly when it involves information that should not be widely disseminated (such as bomb-making instructions) as well as knowledge that may be incorrect [on the internet]. Mechanistic interpretability has the potential to enhance our understanding of AI, ensuring that it is both safe and beneficial. Read
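
To make the idea of sparse autoencoders more concrete, here is a minimal sketch in PyTorch; it is my illustration of the general technique, not DeepMind’s Gemma Scope code, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: decomposes a model's hidden activations into many sparsely
    active 'features' and reconstructs the original activations from them."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)    # d_features >> d_model
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, hopefully interpretable
        return self.decoder(features), features

sae = SparseAutoencoder(d_model=2048, d_features=16384)
acts = torch.randn(8, 2048)                  # activations captured from one LLM layer
recon, feats = sae(acts)
# Training objective: faithful reconstruction plus an L1 penalty to keep features sparse.
loss = torch.mean((recon - acts) ** 2) + 1e-3 * feats.abs().mean()
```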

Elevating AI-coding to the next level

In a crowded landscape filled with AI coding tools such as GitHub Copilot, Codeium, Replit, and Tabnine, many of these options function primarily as coding assistants. Tessl aims to elevate AI-based coding to the next level. The company envisions a future where software developers transition into roles more akin to architects or product managers, allowing artificial intelligence to handle the majority of the coding. Upon examining the proposal on their website, it seems that Tessl is not attempting to turn everyone into a developer (at least not yet). Their tool will still be targeted at developers but will empower them to define what they want to build, letting the Tessl AI tool define the internal architecture of the solution and develop it. Let’s see how far they can push the concept. They have just raised another USD 100 million, making them worth a reported USD 750 million. Read

Other readings

> Inside Elon Musk’s Colossus supercomputer, watch (no content guarantee)
> Amazon to develop its own AI chips to take on Nvidia, read
> Nvidia’s message to global chipmakers, read
> A.I. Chatbots Defeated Doctors at Diagnosing Illness, read

The week on AI – November 10, 2024

It’s too soon to call the hype on Artificial Intelligence

Predicting the future of technology has always been a challenge. It’s likely that optimists will face disappointment in the short term, while pessimists—some of whom are even predicting the end of humanity—may also end up being wrong. In other words, as per Amara’s law, we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. New technologies often take decades to enhance productivity and tend to follow the J-curve pattern described by some economists. In this pattern, productivity initially declines before experiencing significant growth. As per Carlota Perez, this was true for the industrial revolution (1770), the steam and railway age (1830), and the electricity and engineering age (1870). Carlota Perez sees Artificial Intelligence as a revolutionary technology, not a technology revolution. AI depends on powerful microprocessors, computers, and the Internet. She argues that AI is better seen as a key development of the ICT (information-communication-technology) revolution that started in the 1970s. Read the whole essay here.

To fully exploit AI, new infrastructure will have to be built, new ways of working developed, and new products and services launched. But AI seems to be on a much faster trajectory than any technology in the past, so we might not have to wait for a decade. Read

ChatGPT is competing with Google for search

ChatGPT has introduced search capabilities that will compete with Google and startups like Perplexity. This search feature is directly accessible in the ChatGPT interface. The AI determines when to use the internet and when to rely on its internal knowledge, but users can prompt it to perform a web search. To further enhance its search capabilities, ChatGPT is also developing long-term memory functionalities. Currently, Google’s search results appear to be more accurate, and Perplexity still seems better than ChatGPT at presenting source references. The ChatGPT search feature is not yet available to users on the free plan, but it should roll out over the coming months. Competition for the search market is definitely on. Use

The battle for the AI stack

Programming GPUs has historically been complicated, but this changed with the release of Nvidia’s CUDA platform in 2006, which abstracts the GPU complexity from developers. CUDA is a general-purpose platform that allows C code to run on Nvidia GPUs, making it easier for programmers to utilize these powerful processors that are necessary for AI. Most AI engineers and researchers prefer using Python, often with libraries such as PyTorch or Google’s JAX. Under the hood, PyTorch operates on CUDA, which runs C code on the GPU. The industry is now exploring alternatives to CUDA: AMD has introduced its ROCm platform (Radeon Open Compute), and Google has released the XLA (Accelerated Linear Algebra) platform designed for its TPU (Tensor Processing Unit) chips. The key point here is that developing artificial intelligence relies not only on the chips created by Nvidia, AMD, and others but also on the software platforms that support these chips, which are just as crucial as the hardware. Nvidia is definitely ahead, but the CUDA platform is getting some serious competition. Read
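
As a tiny illustration of that layering (my sketch, not tied to any particular article), the Python code below never calls CUDA directly; when a GPU is present, PyTorch dispatches the matrix multiplication to Nvidia’s CUDA libraries under the hood, and ROCm builds of PyTorch expose the same interface for AMD GPUs.

```python
import torch

# The same Python code runs on CPU or GPU; the backend is abstracted away.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b        # on an Nvidia GPU this is executed by CUDA kernels (e.g., cuBLAS)
print(c.device)
```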

Other readings

> TSMC to close door on producing advanced AI chips for China from Monday, read
> Salesforce to Hire 1,000 People for AI Product Sales Push, read
> Evident AI index for banks, read

The week on AI – November 3, 2024

Cerebras continues to break records

Cerebras continues to set new records and lead the field in AI performance, advancing by unprecedented margins. The chart below says it all (source: Cerebras). Remember the IPO announcement?

Charting a path to the data- and AI-driven enterprise of 2030, McKinsey & Company

According to McKinsey, by 2030, data will be ubiquitous, readily available to all employees, and integrated into systems, processes, channels, interactions, and decision-making. However, many companies are still struggling to understand the capabilities and types of data needed to achieve better outcomes. To succeed, data leaders need to make data easy to use, easy to track, and easy to trust.

One challenge is that all enterprises use the same tools and resources, which does not create any competitive advantage. The true value lies in how these tools are assembled and the use of proprietary data. McKinsey recommends that leaders take three key actions: i) tailor models using proprietary data; ii) unify data, AI, and systems; iii) invest further in high-value data products.

Data architecture is crucial, with three primary approaches: a) centralized, b) decentralized, and c) federated, which may utilize a data mesh. No approach seems perfect, and each comes with specific challenges.

In many companies, challenges with data management often arise from unclear responsibilities, limited skill sets, and disjointed governance. McKinsey suggests identifying a leader who can concentrate on three key areas: i) governance and compliance; ii) engineering and architecture; iii) delivering business value. That’s usually a very complicated profile to find.

McKinsey concludes on the talent needed by 2030 and emphasizes the importance of managing the transition. It also highlights the necessity of addressing the risks and governance related to data and AI.

Other readings

> How software built Nvidia’s $2.97T hardware empire, read
> Lawsuit claims Character.AI is responsible for teen’s suicide, read
> Time to place our bets: Europe’s AI opportunity, read

The week on AI – October 27, 2024

Perplexity AI search start-up targets USD 8bn valuation

Perplexity AI is an AI-powered search engine that leverages large language models to deliver fast, accurate answers to user queries. The company positions itself as a user-focused alternative to traditional search engines like Google, aiming to provide a more streamlined and informative search experience without relying on advertisements. Perplexity differentiates itself by offering concise summaries of search results with citations, enabling users to easily verify information and avoid the often overwhelming presence of sponsored content found on other platforms. Driven by the success of other AI ventures and the potential of AI-powered search, Perplexity is actively pursuing a new round of funding, aiming to raise between USD 500 million and USD 1 billion. This would increase its valuation to an impressive USD 8 billion, more than double its previous valuation of USD 3 billion in June. Perplexity’s current investors include prominent names like Nvidia, Jeff Bezos, Andrej Karpathy, Yann LeCun, and SoftBank’s Vision Fund 2, reflecting the strong belief in the company’s potential to disrupt the search engine landscape.

While Perplexity’s annualized revenues have increased from USD 5 million in January to USD 35 million in August, the company is not yet turning a profit. This is largely due to the substantial operating costs associated with training and maintaining its advanced AI models. The expenses related to these models reportedly amount to “millions of dollars,” potentially creating a significant burn rate as the company strives to establish a sustainable business model. Perplexity’s reliance on venture capital funding underscores this financial challenge, as the company works to achieve profitability through subscriptions and other revenue streams. Read

Notebook LM from Google, to help with research and writing

Notebook LM (https://notebooklm.google.com) is a new tool from Google designed to help users quickly create content based on specified information. It utilizes only the information you provide and includes references to the sources, so that content can be quickly verified (which helps eliminate hallucinations). It excels at writing and following instructions, which is not always the case with ChatGPT and other LLMs. Be sure to watch the video that explains Notebook LM. I have been experimenting with it for quite a few days now, and I must say that I love it.

The future of automation is [almost] here

Anthropic has released the “Computer Use” API, allowing developers to automate processes much like a human would use a computer. Although the AI is still in its early stages, slow, and not yet performing optimally, it will likely improve rapidly in the coming months. A demo is available on Anthropic’s website. Similar tools will be released in the coming weeks from companies like Microsoft, Asana, and Salesforce, among others. But it seems that Anthropic is ahead of the game on that one.

ARM CEO sees AI transforming the world much faster than the internet

Arm Holdings plc is a British semiconductor and software design company based in Cambridge, specializing in the architecture and licensing of central processing unit (CPU) technologies. Founded in 1990, Arm’s designs are integral to a wide range of devices, from smartphones to automotive systems. Rene Haas, the CEO of ARM, is optimistic about the future of AI and believes its evolution will be faster than that of the internet revolution. One of the main challenges ARM is facing is the need for more engineers. As AI continues to grow, we will require more energy, but it’s also crucial to develop more efficient chips. Read

Other readings

> Intel has tough choices to make to survive, read
> OpenAI to release its latest model Orion before the end of 2024, read. But it no longer seems to be coming in 2024! Read

The week on AI – October 20, 2024

Behind OpenAI’s audacious plan to make AI flow like electricity

Sam Altman, the CEO of OpenAI, aims to create a global pool of computing power specifically for developing the next generation of artificial intelligence. Initially, Altman aimed to raise trillions of dollars by engaging with US officials, Middle Eastern investors, and Asian manufacturing giants. However, he has now scaled down his ambitions to hundreds of billions. Altman envisions making AI as pervasive as electricity, but the United States faces a challenge in continuing to build data centers, highlighting the significant power AI requires to operate. To understand Altman’s vision: he aims to construct data centers that cost USD 1 billion each, at least 5 times more than current data centers. These centers would house two million AI chips and consume 5 gigawatts of electricity. It seems that TSMC does not take Sam Altman’s plan very seriously. In parallel, OpenAI is seeking funding for its ongoing operations, which continue to consume more cash than they generate. Read

What’s at Stake in a Strained Microsoft-OpenAI Partnership

Microsoft has already invested billions of dollars in OpenAI, following a recent funding round that totaled $6.6 billion, which included cash and access to substantial computing power. OpenAI anticipates that it will spend as much as $37.5 billion annually on computing resources in the coming years. However, tensions are rising between the two companies. Microsoft has alleged that OpenAI is not delivering the expected AI software, while OpenAI has expressed concerns that Microsoft is not providing sufficient computing capacity. In this context, Microsoft has begun to diversify its AI strategy by hiring top talent, including former Google executive Mustafa Suleyman. Meanwhile, OpenAI has started forming partnerships with Microsoft’s competitors to secure additional computing resources. The ongoing lack of profitability at OpenAI poses a significant challenge. At the same time, Google is preparing to enhance its competitiveness in the AI sector. Read

Cerebras, an AI chipmaker trying to take on Nvidia, files for an IPO

The Silicon Valley company would be one of the first artificial intelligence companies to go public since the release of ChatGPT about two years ago. Rather than developing small chips, Cerebras is betting on going big: the chips it develops are up to 56 times larger than traditional chips used for artificial intelligence. A Cerebras chip measures up to 21.5 cm by 21.5 cm; nobody else produces chips that big. Read

                          Cerebras       Nvidia Blackwell (planned)
Transistors on chip       4 trillion     208 billion
AI cores                  900,000        25,000
Petaflops of peak AI      125            20
Manufacturing process     5 nm           4 nm

Generative AI to unleash developers’ productivity

In mid-June, I wrote about “Leveraging Artificial Intelligence in Software Development.” McKinsey & Company just published a study that “shows that software developers can complete coding tasks up to twice as fast with generative AI.” Not surprisingly, generative AI can be used for code generation, code refactoring, and code documentation, speeding up these activities by 20 to 50 percent.

The purpose of Generative AI is to assist developers rather than replace them. It is important for developers to have solid coding skills and to dedicate time to learning how to use Generative AI effectively. Generative AI won’t replace developers in integrating some organizational context (e.g., integration with other processes and applications), examining code for bugs and errors, and navigating tricky coding requirements.

As per McKinsey’s research, generative AI shined and enabled tremendous productivity gains in four key areas: expediting manual and repetitive work, jump-starting the first draft of new code, accelerating updates to existing code, and increasing developers’ ability to tackle new challenges.

The transition to coding with Generative AI will take time to happen. Technology leaders must train and upskill their development teams, start experimenting early, and deploy risk control measures. Risk control must cover many topics, including but not limited to security, data privacy, legal and regulatory requirements, and AI behavior.

Improving developer productivity through generative AI is a journey that will take some time. It is crucial for companies, particularly regulated ones such as asset and wealth managers, to begin experimenting with it so that they can better understand regulatory and security constraints and how to best address them.

Enterprise Data Paradigm Shift for Financial Institutions

This article has been co-written with Rémi Sabonnadiere (Generative AI Strategist – CEO @ Effixis) and Arash Sorouchyari (Entrepreneur, Speaker, Strategic Advisor for Fintechs and Banks).

This is the next episode of Time for an Enterprise Data Paradigm Shift.


The banking industry relies heavily on data-driven insights to make informed decisions, but gathering and consolidating data can be a slow and difficult process, especially for large financial institutions. Consider The Financial Company, a fictitious global wealth manager with billions in assets under management. The firm has grown quickly through multiple acquisitions, resulting in a complex IT landscape with various Investment Books of Records (IBORs) and data repositories.

At The Financial Company, it can be a challenge for business users to find out how much the company is exposed to a specific country or sector. To get this information, they have to ask the IT department to create custom queries across several databases and then wait one to two business days for the answer. This process is time-consuming and inefficient.

One commonly used approach to solving that challenge involves business intelligence and data visualization tools like Microsoft Power BI: the IT department creates a solution tailored to the specific needs of the business user. However, this approach is reactive and hard to scale. Each new query or use case requires a new customized solution, which often leads to copying more data into an existing data warehouse or creating a new one. BI developers must identify the correct data in various databases, gain access to them, create extraction procedures, and adjust data warehouse structures to receive the data.

Imagine if business users could get real-time answers without depending on the IT department. This is where the paradigm shift occurs – using Generative AI to change the data retrieval process from a query-based model to a prompt-based one.

Moving From Query to Prompt

Generative AI brings an innovative shift by placing a Large Language Model (LLM) powered agent on top of multiple databases, eliminating the need for never-ending and costly database consolidation. This approach requires two key elements:

  • Database Crawlers: Gathering data from numerous databases, files, and services with different API technologies is a significant challenge. Database Crawlers can help by connecting to multiple databases, reading their schemas, and comprehending them. These Crawlers can function as domain agents that possess knowledge of a particular domain’s data and context. They are aware of the databases and structures within their domain, eliminating the need for model discovery with each request (a minimal sketch follows this list).
  • Generative Prompt: The generative prompt helps interpret user requests, generate query code, and gather data from multiple databases. The consolidated data is then presented to the user. The prompt can ask the user for assistance if there is any uncertainty in selecting the appropriate data sources and fields.
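
As a rough, hypothetical sketch of the crawler idea (mine, not the authors’), reading and summarizing a domain database schema with SQLAlchemy could look like this; the connection URL and table names are made up.

```python
from sqlalchemy import create_engine, inspect

def crawl_schema(connection_url: str) -> dict:
    """Connect to one domain database and return a compact schema summary
    that an LLM-powered agent can later reason over."""
    engine = create_engine(connection_url)
    inspector = inspect(engine)
    return {
        table: [column["name"] for column in inspector.get_columns(table)]
        for table in inspector.get_table_names()
    }

# Hypothetical read-only replica of a 'positions' domain database.
positions_schema = crawl_schema("postgresql://readonly@positions-replica/ibor")
# e.g. {'positions': ['portfolio_id', 'isin', 'country', 'sector', 'market_value'], ...}
```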

By leveraging the exceptional text-to-code abilities of Large Language Models, as well as their ability to understand both human questions and data dictionaries, this approach creates an intelligent layer capable of answering many requests in a reliable, explainable, and intuitive way. The benefits for an organization are numerous.
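
And here is a minimal sketch of the prompt-based retrieval layer itself; again my illustration, where the model name, schema, and question are assumptions, and a real system would add access control, validation of the generated SQL, and clarification dialogs.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any capable code model would do

SCHEMA_SUMMARY = "Table positions(portfolio_id, isin, issuer_country, sector, market_value_usd)"

def answer(question: str, db_path: str) -> list:
    # 1. Ask the LLM to translate the business question into SQL for the known schema.
    prompt = (
        "You translate questions into a single SQLite query.\n"
        f"Schema: {SCHEMA_SUMMARY}\n"
        f"Question: {question}\n"
        "Return only the SQL."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    sql = response.choices[0].message.content.strip().strip("`")
    # 2. Run the generated query against the (read-only) domain database.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# e.g. answer("What is our total exposure to the energy sector?", "positions.db")
```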

Key Benefits

Instant Access and Enhanced Decision Making

Generative AI offers banks immediate and reliable access to data, thus empowering real-time decision-making. The ability to query data easily and access it in real-time enables banks to rapidly recognize potential risks and opportunities and make informed strategic decisions.

Improved Data Completeness and Accuracy

By accessing data from various sources and utilizing intelligent agents, Generative AI ensures databases are complete and accurate. This significantly reduces errors and improves overall data quality, ensuring that decision-making processes are grounded on current and comprehensive information.

Bridging the Skills Gap

Generative AI eliminates the need for advanced technical skills, as business users can interact with the system using natural language queries. This bridges the skills gap, allowing users to derive the necessary insights independently and fostering a self-sufficient environment.

Scalability and Flexibility

Generative AI systems are inherently scalable and flexible. They can adapt to changing business needs and accommodate new use cases effortlessly. Instead of creating individual solutions for each query, the AI system can dynamically handle various requests irrespective of the underlying database management systems and data structures. This adaptability allows banks to remain agile and swiftly respond to new data demands.

Cost Reduction

Generative AI removes the necessity for expensive data migration projects by allowing data retrieval from current, dispersed sources. This leads to significant reductions in both time and expenses associated with data consolidation.

Addressing Data Challenges

Data Gathering and Data Quality

Generative AI also utilizes data healers to enhance data quality. However, accessing these data sources with crawlers entails challenges such as access rights, filtering data based on user rights, identifying inconsistencies, merging data, and avoiding overloading transactional databases with queries.

By adopting a domain-based agent approach, each domain agent ensures that performance, access rights, and other issues are tackled. The agents are developed by the respective domains and are equipped to answer questions related to their data model across all of their databases. Moreover, AI doesn’t bypass the need for IT expertise; rather, it enables IT teams to create intelligent agents that can autonomously answer future queries.

Additionally, AI can search online sources for relevant data to deal with incomplete databases. For example, by analyzing articles, the AI system can identify companies associated with the oil and gas sector and create an extra column named “Industry_AI_generated”, which can be automatically populated with pertinent values.
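
Here is an illustrative, hypothetical sketch of that enrichment step; the keyword heuristic below merely stands in for the LLM or news-search classifier described above.

```python
import pandas as pd

def classify_industry(company_name: str) -> str:
    """Placeholder for an LLM or news-search call that maps a company to a sector."""
    name = company_name.lower()
    if "drilling" in name or "petro" in name or "oil" in name:
        return "Oil & Gas"
    return "Unclassified"

holdings = pd.DataFrame({"issuer": ["Acme Drilling", "Nordic Grid", "Shore Retail"]})
# The new column is clearly labeled so users can see it is AI-derived, not source data.
holdings["Industry_AI_generated"] = holdings["issuer"].apply(classify_industry)
print(holdings)
```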

Minimizing System Overload

In order to avoid system overload, domain agents should use tactics like read-only database instances, setting up local data storage, or utilizing performance-optimized services, particularly if dealing with transactional databases. It is the responsibility of each domain to handle performance concerns effectively.

Way Forward

Banks can benefit from using Generative AI, specifically LLM-powered agents, to retrieve data from multiple databases. Although AI isn’t a complete fix, having agents that are knowledgeable about their specific domain can greatly help alleviate the issues. These agents act as important components in the data retrieval process, as they’re familiar with the context and data of their domain.

It is important to understand that this technology does not replace the need for IT expertise. Rather, it repositions IT to create intelligent agents that can autonomously answer future queries. This approach aligns with the data-mesh strategy and is a transitional phase that helps IT departments focus on long-term strategies for data management and legacy system transformation.

Banks should begin testing this technology to discover its potential as a game changer. By doing so, they can transform into a data-driven company more efficiently than they anticipated. If you are interested in learning more about this approach or running a proof of concept, please contact info@effixis.com.

We will soon publish an exciting new episode, where we will introduce a cutting-edge prototype powered by Generative AI. Stay tuned!

Leveraging Artificial Intelligence in Software Development

Artificial Intelligence (AI) offers diverse applications in software development that will drastically change how firms develop software. It can:

  • Support developers by accelerating coding tasks, leading to faster and higher-quality code.
  • Document existing code that has no documentation.
  • Help developers get up to speed on code written by others.
  • Debug code.
  • Accelerate or even automate the migration of legacy stacks to more modern technologies.

Within the next 6 to 18 months, most software development tools will integrate some artificial intelligence to support developers. On the one hand, there are established players like Microsoft with its Copilot. But competition is building up with solutions from Tabnine, Codeium, and CodeComplete, to name a few. And you can expect data science products like Databricks, Hex, and Dataiku to integrate some “copilot” to support users and developers.

There are big questions about the intellectual property of code generated by artificial intelligence and about the code firms share with these solutions. Everybody knows the horror story of Samsung employees using ChatGPT to debug proprietary and confidential code, raising the concern that the data could be used as training material for future public responses.

The rise of artificial intelligence will not render developers obsolete. Instead, it offers a unique chance to establish a harmonious collaboration between humans and computers. Developers should see Artificial Intelligence as a new colleague with superpowers. By delegating repetitive and mundane tasks to AI, developers can devote more time to creative problem-solving and embark on a journey of enhanced productivity and innovation.

Another interesting topic, not linked to artificial intelligence, is how financial institutions recruit and deploy developers. The traditional way has been to hire them and locate them in internal facilities. But the war for talent has made it very difficult to hire outstanding developers, not to mention that they often don’t want to be employees or to work from a specific location; they want more freedom. An ecosystem of secure software development solutions is becoming available in the public cloud and from specialized providers like StrongNetwork (www.strong.network). It is time for financial institutions to start looking into this.