World Leaders Unite to Ensure AI is ‘Secure by Design’ in US, UK, and Beyond

The United States, the United Kingdom, and 16 other countries have signed an agreement to make artificial intelligence (AI) "secure by design," meaning that AI systems should be built with security in mind from the start to protect customers and the wider public. It is the first time a group of countries has agreed on guidelines for keeping AI safe from people who might misuse it.

Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency (CISA), said it is important to put security first when building new AI systems, rather than treating it as an afterthought. The signatories hope that more countries will join and adopt the same guidelines.

The other countries that have signed on include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore. Europe is already working on laws to regulate AI development, but lawmakers there are still struggling to reach agreement on the details.

The United States has not passed AI laws yet, either. The White House wants rules requiring AI companies to run safety tests, and President Biden signed an executive order last month aimed in part at protecting AI systems from hackers. Apple, which uses AI in its products, tends to be cautious with new technology and may take a long time to make such changes. AI is hard to make safe in part because these systems can behave in ways that no one anticipated.

The 20-page agreement is only a first step. It gives companies that build AI a responsibility to look for security problems in their systems, but it does not address how AI itself could be used to threaten people.
