The Current Regulation of CBRNs

Of all the near-term risks posed by AI, CBRN hazards (Chemical, Biological, Radiological, and Nuclear) appear to be the easiest for bad actors to pursue with AI assistance, and thus the highest near-term threat we currently face from the technology, especially regarding biological weapons.
CBRN regulations aim to eliminate the risk of AI-developed CBRN weapons. The US was the only player in the game with formal regulation addressing CBRNs; neither China nor the EU has any specific legislation in place governing them. Not long after taking office in January, President Trump signed an executive order rescinding Biden's Executive Order on AI (which contained specific rules regulating AI-produced CBRNs), with a replacement due within 180 days of the rescission. For this brief discussion, we will work with the old Executive Order on AI, as some of its language may be retained in Trump's incoming AI order.
So far, the order has "succeeded" in preventing any US catastrophe involving AI-developed CBRNs, in that no incidents involving their creation have been reported. However, this is more likely due to the limited capability of current generative AI systems than to the order itself.
What's really wild is that 40,000 toxic molecules (some known and many previously unknown) were generated in just six hours in a March 2022 demonstration prepared for the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection, simply by "flipping the switch" on an AI model normally used to screen drug candidates for toxicity, so that it rewarded toxicity instead of penalizing it. A model capable of screening for a drug's toxicity inherently encodes knowledge of what makes molecules toxic, making this an area of huge importance for regulation. This raises the question: why haven't the EU or China enacted legislation in the three years since the demonstration?
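To make the "flipping the switch" point concrete, here is a minimal toy sketch (entirely hypothetical; the real demonstration used a trained generative model, not the linear stand-in below). The same screening pipeline that discards toxic candidates can, with a single flag inverted, surface the most toxic ones instead:

```python
import random

random.seed(0)

def predicted_toxicity(features):
    # Toy stand-in for a trained toxicity predictor: higher score = more toxic.
    # Real systems learn this function from experimental toxicity data.
    return sum(w * x for w, x in zip((0.5, -0.2, 0.8), features))

def screen(candidates, n, maximize_toxicity=False):
    """Rank candidate molecules by predicted toxicity.

    Ordinary drug discovery keeps the LEAST toxic candidates. Flipping
    one flag reuses the identical model to return the MOST toxic ones --
    the dual-use "switch" described above.
    """
    ranked = sorted(candidates, key=predicted_toxicity,
                    reverse=maximize_toxicity)
    return ranked[:n]

# Random candidate "molecules" represented as 3-dimensional feature vectors.
candidates = [tuple(random.uniform(-1, 1) for _ in range(3))
              for _ in range(1000)]

safest = screen(candidates, 5)                               # normal use
most_toxic = screen(candidates, 5, maximize_toxicity=True)   # inverted use
```

The point is that no new capability is needed to cross from benign to harmful use; the knowledge is already in the model, and only the objective changes.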
Although it has been reported that frontier AI models still provide no more uplift for producing CBRNs than a web search, it is increasingly worrisome that this might change soon given the rapid advancement of AI and the absence of any current government regulation (made particularly troubling by the fact that the 40,000 toxic molecules were generated three(!) years ago). It seems unlikely that whatever CBRN AI regulation gets put in place within the next four years will be timely or effective enough to deter bad actors from jailbreaking, or independently developing, an AI model that can generate novel toxic molecules or other weaponizable CBRNs.
There are two reasons for this pessimism. First, there is no external regulation of CBRN risks (Anthropic's Responsible Scaling Policy, or RSP, names CBRNs among its top risks, but it is a voluntary internal commitment). Second, it seems relatively easy for bad actors to get their hands on a sufficiently capable AI in the first place, whether by jailbreaking, cyber theft, or independent development. Not only should regulation be quickly put in place to deter the creation of CBRNs, but AI should also be used as part of the defense against bioweapons, a defense we are thankfully better prepared to mount, having recently endured the COVID-19 pandemic.