OpenAI's o1 model, aka Strawberry, could help create bioweapons, comes with ‘medium risk’, admits AI giant

As per OpenAI's system card, the new o1 models have been rated "medium risk" for chemical, biological, radiological, and nuclear (CBRN) weapons, the highest risk level the company has ever attributed to its AI technology.


OpenAI has acknowledged that its latest artificial intelligence models, known as o1 or “Strawberry,” pose an increased risk of misuse, particularly in the creation of biological weapons. The company stated that these recently launched models have significantly enhanced capabilities, which inadvertently heighten the potential for dangerous applications in the wrong hands. The models boast improvements in reasoning, in solving complex mathematical problems, and in answering scientific research questions, marking a step forward in the development of artificial general intelligence (AGI).

According to OpenAI’s system card, the new o1 models have been rated “medium risk” concerning chemical, biological, radiological, and nuclear (CBRN) weapons, the highest risk level the company has ever attributed to its AI technology. This means the models could help experts develop bioweapons more effectively, raising ethical and safety concerns. The models’ advanced reasoning abilities, while a breakthrough in the field, are considered a potential threat if exploited by bad actors for malicious purposes.

Experts, such as Professor Yoshua Bengio, one of the leading voices in AI research, have highlighted the importance of urgent regulation in light of these risks. A proposed bill in California, SB 1047, aims to address such concerns by requiring AI developers to take steps to minimise the risk of their models being used to create bioweapons. Bengio and others have stressed that as AI models evolve closer to AGI, the associated risks will only increase unless strong safety measures are implemented.

The development of these advanced AI systems is part of a broader competition among tech giants such as Google, Meta, and Anthropic, all vying to create sophisticated AI that can act as agents, assisting humans in various tasks. These AI agents are viewed as significant revenue generators for companies, which face high costs in training and operating such models. OpenAI’s chief technology officer, Mira Murati, emphasised that the company is proceeding cautiously in releasing the o1 model to the public.

Though the model will be available to ChatGPT’s paid subscribers and to developers via an API, it has first undergone rigorous testing by “red-teamers,” experts tasked with identifying potential vulnerabilities. Murati noted that the latest model has demonstrated better safety performance compared to earlier versions. Despite the risks, OpenAI has deemed the model safe to deploy under its policies, assigning it a medium risk rating within its cautious framework.
