
Major tech companies partner with US government for ethical AI innovation

As part of the agreement, the tech companies have committed to subjecting their AI systems to security testing by both internal and external experts before release. This step will help identify potential security vulnerabilities and support safer, more reliable AI deployments.

Seven major artificial intelligence (AI) tech companies, including Google, OpenAI, and Meta, have come together to forge an agreement with the Joe Biden administration to address the risks associated with AI technology. The collective effort aims to implement new measures and guardrails to ensure responsible and safe AI innovation.

According to IANS, the companies involved in the deal are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The proposed measures include conducting security testing of AI systems and making the results of these tests publicly available. This move towards transparency is intended to enhance accountability and build trust with users and the wider public.

During a meeting at the White House, President Biden underscored the importance of responsible AI innovation. He acknowledged the significant impact AI will have on people’s lives worldwide and stressed the critical role of those guiding this innovation in doing so responsibly and with safety in mind.

“AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly,” said Nick Clegg, Meta’s president of global affairs.

“As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society,” he added.

Moreover, the companies will apply watermarks to their AI-generated content, enabling users to identify such content more easily. They will also publish regular public reports on their AI systems’ capabilities and limitations, contributing to a more transparent AI landscape.

In addition to security testing, the companies will undertake research to address risks associated with AI, such as bias, discrimination, and privacy invasion. This research is crucial to ensure AI technologies are developed responsibly and ethically.

OpenAI highlighted that watermarking agreements will require companies to develop tools or APIs to determine if specific content was generated using their AI systems. Google had already committed to deploying similar disclosures earlier this year.
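For illustration only, the snippet below sketches one highly simplified way such a detection tool could work: the provider records a keyed fingerprint of each output its model generates, and a separate check later tests whether a given piece of content matches a recorded generation. All names here (SECRET_KEY, record_generation, was_generated_here) are hypothetical, and this is not a description of any company's actual watermarking or detection method, which the article does not detail.

```python
import hashlib
import hmac

# Hypothetical sketch: a provider keeps a registry of fingerprints for text its
# models have generated, so a detection tool can later check whether a given
# piece of content came from those systems. Real deployments would rely on far
# more robust techniques (statistical watermarks, signed metadata, etc.).

SECRET_KEY = b"provider-secret"   # assumed provider-held key
_registry: set[str] = set()       # fingerprints of recorded generations


def _fingerprint(text: str) -> str:
    """Keyed hash of normalized text, used as a simple provenance fingerprint."""
    normalized = " ".join(text.split()).lower()
    return hmac.new(SECRET_KEY, normalized.encode("utf-8"), hashlib.sha256).hexdigest()


def record_generation(text: str) -> None:
    """Called by the provider whenever its model produces an output."""
    _registry.add(_fingerprint(text))


def was_generated_here(text: str) -> bool:
    """Detection tool: does this content match a recorded generation?"""
    return _fingerprint(text) in _registry


if __name__ == "__main__":
    record_generation("The quick brown fox jumps over the lazy dog.")
    print(was_generated_here("The quick brown fox jumps over the lazy dog."))  # True
    print(was_generated_here("An unrelated human-written sentence."))          # False
```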

Furthermore, Meta recently announced its decision to open-source its large language model, Llama 2, making it freely available to researchers. 
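For researchers who want to experiment with the model, a minimal sketch of loading Llama 2 through the Hugging Face transformers library is shown below. It assumes the transformers and torch packages are installed, access to the gated meta-llama/Llama-2-7b-hf checkpoint has been granted under Meta's license, and enough memory is available for the 7B weights.

```python
# Minimal sketch of loading Meta's Llama 2 with Hugging Face transformers.
# Assumes: `pip install transformers torch`, a Hugging Face account with
# access to the gated meta-llama/Llama-2-7b-hf repository, and sufficient
# memory for the 7B checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Responsible AI development means"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```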

Credit: https://www.indiatvnews.com/technology/news/major-tech-companies-partner-with-us-government-for-ethical-ai-innovation-2023-07-22-882425
