In the realm of artificial intelligence (AI), there is an ongoing battle between companies that keep their datasets and algorithms private and those that believe in transparency. This battle is often referred to as open-source versus closed-source AI.
Recently, Meta, the parent company of Facebook, made a significant move in favor of open-source AI by releasing a collection of large AI models. One of these models, Llama 3.1 405B, has been hailed by Meta's CEO, Mark Zuckerberg, as the first frontier-level open-source AI model.
This development is good news for those who believe in a future where the benefits of AI are accessible to everyone. Closed-source AI refers to proprietary models, datasets, and algorithms that are kept confidential. While these products can be used by anyone, there is no transparency regarding the datasets and source code used to build them. This lack of transparency can undermine public trust, slow innovation, and create dependence on a single platform.
Closed-source AI also hampers the adoption of ethical frameworks that aim to improve fairness, accountability, transparency, privacy, and human oversight in AI. For example, OpenAI's ChatGPT, a closed-source AI, does not release its dataset or code to the public, making it difficult for regulators and independent researchers to audit.
On the other hand, open-source AI models make their code, and in some cases their datasets, available to everyone. This fosters collaboration, enables smaller organizations and individuals to participate in AI development, and allows for scrutiny that can surface biases and vulnerabilities. However, open-source AI also presents new risks, such as weaker quality control and greater susceptibility to misuse and cyberattacks.
Meta has emerged as a pioneer of open-source AI with its new suite of AI models. Llama 3.1 405B, the largest open-source AI model released to date, is a powerful language model that can generate human-like text in multiple languages. While it is not fully open, because Meta has not released the dataset used to train it, Llama 3.1 405B levels the playing field for researchers, small organizations, and startups.
To ensure the democratization of AI, three key pillars are needed: governance, accessibility, and openness. These pillars require regulatory and ethical frameworks, affordable computing resources, user-friendly tools, and open-source datasets and algorithms. Building these pillars is a shared responsibility among government, industry, academia, and the public.
However, there are still questions surrounding open-source AI, such as how to balance intellectual property protection and innovation, address ethical concerns, and safeguard against misuse. Properly addressing these questions will determine whether AI becomes an inclusive tool for all or a tool for exclusion and control. The future of AI is in our hands.