Breaking

EU drafts AI Act: Generative AI to come under greater scrutiny

The European Commission has been drafting the AI Act to regulate the emerging technology. Under the proposal, generative AI tools, including OpenAI’s ChatGPT, will be classified according to their perceived risk level: minimal, limited, high, or unacceptable.

Areas of concern include biometric surveillance, the spread of misinformation, and discriminatory language.

While high-risk tools will not be banned, companies using them will need to be highly transparent in their operations. The new rules will also require companies deploying generative AI tools to disclose any copyrighted material used to develop their systems.

What is Generative AI?

Generative AI is a type of artificial intelligence capable of creating new content, such as images, video, music, and text, that resembles the content it was trained on. The technology uses complex algorithms to learn patterns from data and generate new, unique output.

Generative AI has been used in a wide range of applications, from generating realistic-looking faces to composing music and writing articles. One popular example is OpenAI’s GPT-3 language model, which can generate human-like text.
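The core loop described above, learn statistical patterns from data, then generate new output that resembles it, can be illustrated with a deliberately tiny toy: a bigram Markov chain. To be clear, this is not how GPT-3 or any modern generative model works internally (those are large neural networks); it is only a minimal sketch of the "train on data, generate similar content" idea.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram Markov chain "trained" on sample text.
# Real generative AI models use neural networks, not frequency tables.

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate up to `length` words by repeatedly sampling a follower."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the"))
```

Even this toy shows the property regulators are grappling with: the output is new text, yet every word of it is drawn from the training data, which is why disclosure of training material is at issue.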

Background: EU’s efforts to regulate AI technology

The European Commission has been working on drafting the AI Act for almost two years, seeking to regulate the emerging technology. The regulation will be the first comprehensive law governing AI technology.

The AI Act aims to regulate the use of AI tools and the development of AI systems, ensuring transparency and accountability and addressing concerns around biometric surveillance, the spread of misinformation, and discriminatory language.

Proposed Classification of AI tools

Under the proposed AI Act, AI tools will be classified into four categories based on their perceived risk level: minimal, limited, high, and unacceptable. High-risk tools will not be banned, but companies using them will need to be highly transparent in their operations.
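The four-tier scheme can be pictured as a simple lookup from tier to obligation. The sketch below is purely illustrative: the tier names come from the article, but the obligation descriptions are hypothetical paraphrases of its summary, not wording from the legal text.

```python
from enum import Enum

# Hypothetical sketch of the AI Act's four risk tiers as described in
# the article. The obligation strings are illustrative paraphrases only.

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

OBLIGATIONS = {
    RiskTier.MINIMAL: "no additional requirements",
    RiskTier.LIMITED: "basic transparency duties",
    RiskTier.HIGH: "strict transparency and accountability requirements",
    RiskTier.UNACCEPTABLE: "use prohibited",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the (illustrative) obligation attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligation_for(RiskTier.HIGH))
```

The key design point the article highlights is that even the highest permitted tier triggers transparency duties rather than a ban; only the "unacceptable" tier is prohibited outright.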

In short, the proposal seeks to regulate both AI tools and the development of AI systems while ensuring transparency and accountability.

Transparency requirement for companies using copyrighted material

Companies deploying generative AI tools, such as ChatGPT or image generator Midjourney, will have to disclose any copyrighted material used to develop their systems. This requirement was a late addition to the proposed AI Act, according to a source familiar with the discussions.

Initially, some committee members had proposed banning copyrighted material being used to train generative AI models altogether. However, this was abandoned in favor of a transparency requirement.

Svenja Hahn’s views on the draft AI Act

Svenja Hahn, a European Parliament deputy, stated that the parliament found a solid compromise that would regulate AI proportionately, protect citizens’ rights, foster innovation, and boost the economy.

She praised the proposed AI Act for striking a balance between conservative wishes for more surveillance and leftist fantasies of over-regulation.

Emergence of ChatGPT and other generative AI tools

OpenAI’s ChatGPT has caused a sensation worldwide, with many expressing awe and anxiety about the AI-powered chatbot. ChatGPT became the fastest-growing consumer application in history, reaching 100 million monthly active users within two months of launch.

The emergence of generative AI products has created a race among tech companies, prompting concern from some onlookers, including Elon Musk, who backed an open letter calling for a six-month pause on the development of such systems.

Concerns around the development of generative AI products

The development of generative AI products has raised concerns about their potential misuse. The proposed AI Act aims to address these concerns by regulating the development and use of AI tools, ensuring transparency and accountability, and protecting citizens’ rights.

The new rules will require companies using generative AI tools to be transparent about their operations and to disclose any copyrighted material used to develop their systems.

William Marshal

William has been one of the key contributors to 'The Cybersecurity Times', with 9.5 years of experience in cybersecurity journalism. Apart from writing, he also likes hiking, skating, and coding.
