Navigating the EU AI Act - A 3-Step Practical Approach for Businesses
The European Union’s AI Act is a landmark regulation designed to ensure that AI systems operate transparently, ethically, and safely. Our recommendations help businesses that use commercial AI solutions for general purposes adopt a proactive strategy in a complex regulatory environment.
Businesses should first define their role within the AI ecosystem. Most companies that simply use AI without significantly developing it will qualify as “deployers” under the Act. The main challenge for deployers is maintaining oversight, data quality, and transparency in how AI is used.
Step 1: AI-inventory and classification
Businesses should identify their AI systems and classify them into the four risk categories defined by the EU AI Act: minimal, limited, high, and unacceptable.
- Minimal risk includes tools like spam filters and text suggestions.
- Limited risk covers chatbots and recommendation systems.
- High-risk applications include AI-driven recruitment, medical devices, and autonomous vehicles.
- Unacceptable risk involves prohibited practices such as social scoring or real-time remote biometric identification in publicly accessible spaces.
Creating and regularly updating an AI inventory is crucial for compliance and responsible AI use.
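The inventory described above can be kept in any format, but keeping it structured makes reviews and reporting easier. The sketch below is a minimal, hypothetical way to model such an inventory in Python; the record fields, system names, and vendor names are illustrative assumptions, not requirements of the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g. spam filters, text suggestions
    LIMITED = "limited"            # e.g. chatbots, recommendation systems
    HIGH = "high"                  # e.g. AI-driven recruitment, medical devices
    UNACCEPTABLE = "unacceptable"  # prohibited, e.g. social scoring

@dataclass
class AISystemRecord:
    """One entry in the company's AI inventory."""
    name: str
    vendor: str
    purpose: str
    risk: RiskCategory
    last_reviewed: str  # ISO date of the most recent classification review

inventory = [
    AISystemRecord("MailGuard", "ExampleVendor", "spam filtering",
                   RiskCategory.MINIMAL, "2025-01-15"),
    AISystemRecord("HireAssist", "ExampleVendor", "CV screening",
                   RiskCategory.HIGH, "2025-02-01"),
]

# High-risk systems carry the strictest deployer obligations,
# so surface them first during compliance reviews.
high_risk = [s.name for s in inventory if s.risk is RiskCategory.HIGH]
print(high_risk)  # ['HireAssist']
```

A spreadsheet can serve the same purpose; the point is that every system has an owner, a risk tier, and a review date.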
Step 2: Awareness and human oversight
All employees should be trained in AI functionality, risk management, and ethical guidelines to ensure responsible AI use. Training should cover AI operations, bias mitigation, compliance requirements, and human oversight.
A dedicated AI Code of Conduct should be developed with input from all relevant stakeholders within the business. Larger organizations often go further and create comprehensive AI strategies. These initiatives equip employees to manage AI-related risks effectively while promoting transparency and accountability.
While automation and efficiency are primary drivers of AI adoption, human oversight remains crucial. High-risk AI systems must have human-in-the-loop mechanisms to intervene when AI errors occur. Companies should establish clear escalation protocols, ensuring that AI can be overridden in cases of unintended consequences.
Additionally, emergency shutdown procedures should be defined. Having a fail-safe mechanism in place will be critical in preventing AI-related disruptions in sensitive sectors such as healthcare, finance, and public services.
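The escalation and fail-safe ideas above can be sketched in code. The routing logic, threshold, and field names below are purely illustrative assumptions about how a deployer might gate AI decisions; the Act mandates human oversight for high-risk systems but does not prescribe an implementation.

```python
# Illustrative confidence floor below which decisions go to a human reviewer.
ESCALATION_THRESHOLD = 0.8

def apply_decision(ai_decision: dict, human_review) -> dict:
    """Route an AI decision: emergency shutdown, human escalation, or auto-apply."""
    if ai_decision.get("emergency"):
        # Fail-safe: halt AI processing entirely rather than act on the output.
        return {"action": "shutdown", "applied": False}
    if ai_decision["confidence"] < ESCALATION_THRESHOLD:
        # Low confidence: a human reviews and may override the AI's action.
        return {"action": human_review(ai_decision), "applied": True}
    # High confidence and no emergency: apply the AI's action as-is.
    return {"action": ai_decision["action"], "applied": True}

# Usage: a reviewer overrides a low-confidence rejection.
result = apply_decision(
    {"action": "reject", "confidence": 0.55},
    human_review=lambda d: "approve",
)
print(result)  # {'action': 'approve', 'applied': True}
```

The design choice worth noting is that the human path and the shutdown path exist in the control flow itself, so an override is always possible rather than bolted on after the fact.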
Step 3: Transparency and documentation
Businesses using an AI system to create or alter text, images, audio, or video—especially for public information or deepfake content—must clearly disclose that the content was artificially generated or modified. This transparency obligation does not apply if the AI-generated content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
One of the most critical aspects of compliance is maintaining detailed technical documentation. Companies need to establish clear audit trails, ensuring that AI decisions can be explained and justified. This transparency will be vital in industries such as finance and healthcare, where AI-driven decisions can significantly impact individuals.
Did AI help write this article?
At first, we attempted to write this article using artificial intelligence—specifically ChatGPT and Perplexity—expecting that AI could summarize the practical application of the AI Act in three key points.
The results were disappointing: the AI produced only a superficial, impractical summary of the AI Act’s key regulatory areas and failed to interpret the law in a way that identified the most important tasks for businesses. As a result, we wrote the article ourselves, using AI only to refine wording and to create translations into foreign languages.

© 2025 Gyarmathy&Partners | Tax, Accounting and Advisory Firm in Budapest, Hungary