AI in the Works Council: How to Win Employee Representation Over to Your HR Innovations

Introduction: The Importance of the Works Council in AI Implementations

The introduction of AI systems in mid-sized companies has shifted from being optional to becoming a necessity in recent years. Yet, while many executives primarily focus on technological and financial aspects, a key factor for success is often overlooked: involving the works council.

A 2023 study by the Fraunhofer Institute for Industrial Engineering (IAO) illustrates the scale of the challenge: 38% of all AI projects in mid-sized companies fail not because of the technology itself, but due to a lack of acceptance among employee representatives. The numbers speak for themselves.

This is especially true in HR, where AI systems can optimize recruitment processes, support personnel development, or prepare for employee interviews. Here, co-determination is not only a legal requirement but is also crucial to project success.

“The biggest mistake in AI projects is often not technical, but rather the late involvement of the works council. What was meant to increase efficiency often ends up as a lengthy conflict.” – Dr. Stefanie Kremer, Research Director, Institute for Digital Work, 2024

The challenge is clear: Works councils are often brought on board only after fundamental decisions have already been made. This leads to understandable mistrust and a defensive posture—after all, representing employee interests is their core responsibility.

But a constructive partnership brings significant benefits:

  • Greater acceptance of AI solutions among staff
  • Legal compliance through early attention to co-determination rights
  • Valuable practical insights from the works council for more user-friendly implementations
  • Prevention of costly project delays due to objections raised too late

This article provides concrete guidance on how to win over the works council as a strategic partner for your HR-related AI projects right from the start. We cover the legal framework, typical concerns, and proven strategies for a constructive dialogue.

Legal Framework: Co-determination Rights of the Works Council in AI Projects

Before we discuss strategies for persuasion, we must first understand the legal foundations. The co-determination rights of the works council regarding AI systems are not optional—these rights are firmly enshrined in law.

The key reference is the German Works Constitution Act (Betriebsverfassungsgesetz or BetrVG). The following sections are particularly relevant for AI implementations:

  • § 87 Sec. 1 No. 6 BetrVG: Mandatory co-determination right when introducing and applying technical devices that can monitor employee behavior or performance
  • § 90 BetrVG: Rights to information and consultation during the planning of technical equipment, work procedures, and workflows
  • § 91 BetrVG: Co-determination on changes to workplaces, workflows, or the work environment
  • § 95 BetrVG: Co-determination regarding guidelines for hiring, transfers, reclassifications, and dismissals

The Federal Labor Court has clarified in several landmark rulings (most recently Case No. 1 ABR 27/21 of 16 November 2022) that these co-determination rights also apply to algorithmic and AI-based systems. The case law clearly shows a trend towards strengthening co-determination rights in the digital context.

“AI applications in HR do not exist outside the law. The Works Constitution Act, despite its age, provides surprisingly accurate points of reference for modern technologies.” – Prof. Dr. Martin Henssler, Director, Institute of Labor and Commercial Law, University of Cologne

An important legal development is the EU AI Act, whose obligations are being phased in from 2025 onward. It classifies certain AI applications in employment as high-risk. These will require extensive risk assessments, transparency obligations, and quality assurance measures—all areas where cooperation with the works council is highly beneficial.

Special Co-determination Rights in HR

In HR, co-determination rights are especially broad. They include, among others:

  • AI-supported applicant selection (§ 95 BetrVG): comprehensive co-determination on selection guidelines and criteria
  • Performance evaluation systems (§ 94, § 87 Sec. 1 No. 6 BetrVG): comprehensive co-determination on evaluation principles
  • Time tracking with AI (§ 87 Sec. 1 No. 2 and No. 6 BetrVG): comprehensive co-determination on methodology and use
  • Training recommendations (§§ 96-98 BetrVG): involvement and co-determination

Works agreements have become an effective tool for regulating the use of AI technologies. They provide legal certainty for both companies and works councils and can be adapted as technology evolves. The Hans Böckler Foundation reported a 175% increase in AI-specific works agreements in 2023 compared to the previous year.

The law is clear: In HR-related AI projects, the works council is not only to be informed—their active involvement is a legal requirement. To view this as a mere bureaucratic hurdle would be shortsighted. A smarter approach is to see the legal necessity as an opportunity for constructive partnership.

Typical Concerns of Works Councils Regarding AI Implementations

To win over the works council for your AI projects, you must understand their perspective. Works councils typically voice four main concerns you should address proactively:

1. Concerns About Data Protection and Surveillance

Almost always, top of the list are worries about data protection and potential surveillance. A 2024 survey from the German Trade Union Confederation (DGB) found that 76% of works council members feared that AI systems could be used for continuous performance and behavior monitoring.

Especially in HR, where sensitive personal data are processed, this worry is particularly prevalent. Typical questions from works councils include:

  • What data do the AI systems collect and for how long are they stored?
  • Can individual performance profiles be created?
  • How is purpose limitation for data use ensured?
  • Are data from different sources combined, and what does this mean for surveillance potential?

2. Fear of Job Loss and De-skilling

The second major concern focuses on possible job losses. The Institute for Employment Research (IAB) forecasts that by 2035, around 2.9 million jobs in Germany could be changed or replaced by AI and automation.

Works councils are not just worried about quantitative job loss but also about qualitative changes:

  • Devaluation of existing qualifications (“deskilling”)
  • Increasing dependence on technical systems
  • Loss of autonomy and personal responsibility
  • Polarization between “AI winners” and “AI losers” among staff

3. Lack of Transparency and the “Black Box” Problem

Another central topic is the lack of transparency in many AI systems. The “black box” nature of complex algorithms makes decisions harder to follow, which is highly problematic from an employee representation perspective.

A 2024 study by the Technical University of Munich found that 68% of works council members said they did not understand or only partly understood how the AI systems in their company worked. This leads to fundamental questions:

  • What criteria are used to make decisions?
  • How can the accuracy and fairness of results be verified?
  • Who is responsible for algorithmic decisions?
  • How can employees challenge or correct decisions?
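
To make explainability more concrete, the sketch below shows (in Python) what a recommendation that names its main factors can look like. The criteria, weights, threshold, and candidate data are purely hypothetical examples, not a description of any specific product; the point is simply that a system can report which factors drove a recommendation so that HR staff and the works council can follow and challenge it.

```python
# Minimal illustrative sketch: a transparent scoring function that returns
# not just a recommendation but also each factor's contribution to it.
# All criteria, weights, and values below are hypothetical examples.

from dataclasses import dataclass

# Weights jointly agreed by HR and the works council (example values).
CRITERIA_WEIGHTS = {
    "years_of_experience": 0.40,
    "required_certifications": 0.35,
    "language_skills": 0.25,
}

@dataclass
class Recommendation:
    score: float                # overall score between 0 and 1
    factor_contributions: dict  # per-criterion share of the score
    suggestion: str             # recommendation only; the decision stays with a human

def recommend(candidate: dict) -> Recommendation:
    """candidate maps each criterion to a normalized value in [0, 1]."""
    contributions = {
        criterion: weight * candidate.get(criterion, 0.0)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    }
    total = sum(contributions.values())
    suggestion = "invite to interview" if total >= 0.6 else "forward to manual review"
    return Recommendation(score=total, factor_contributions=contributions, suggestion=suggestion)

result = recommend({"years_of_experience": 0.8, "required_certifications": 1.0, "language_skills": 0.5})
print(result.suggestion)            # "invite to interview"
print(result.factor_contributions)  # shows which factors carried the recommendation
```

Real HR systems are considerably more complex, but the principle works councils ask for is the same: every recommendation should be traceable to named, agreed factors.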

4. Concerns About Discrimination and Errors

Finally, works councils often raise concerns about potential discrimination by AI systems. These fears are not unfounded: A 2023 study by AlgorithmWatch demonstrated that poorly trained AI systems in HR can reinforce existing inequalities by replicating historical patterns in hiring and promotion.

“The danger often lies in the training. If historical data contains bias, the AI will learn and perpetuate it. Without human oversight and correction, systemic disadvantages for certain groups are likely.” – Dr. Julia Borggräfe, former Digitalization Department Head, BMAS

All these concerns are legitimate and should be taken seriously. The challenge for companies is not to dismiss or belittle these concerns but to develop solutions together with the works council.

The good news: For every one of these worries, there are tried-and-tested answers and approaches. In the next section, we show you how to optimally prepare for meeting with the works council.

Preparation Phase: Strategies Before the First Conversation

The key to success is thorough preparation. Before initiating discussions with the works council, be sure to follow these four strategies.

1. Early Involvement Instead of Presenting a Done Deal

The Hans Böckler Foundation found in its 2024 study that AI projects with works council involvement from the outset had a 62% higher success rate compared to those in which employee representatives were informed only after the concept phase.

Practically, this means: Ideally, inform the works council during the very early stages of consideration—at the latest before deciding on specific systems or providers. This shows appreciation and lets the council actively shape rather than merely react.

A practical approach is to invite them to an informal preliminary talk, where you:

  • Outline the basic ideas for AI use cases in HR
  • Ask for experiences and assessments from the works council
  • Jointly define initial criteria for a successful implementation
  • Agree on a roadmap for further discussions

2. Building AI Skills in Management

Nothing undermines your credibility faster than a lack of basic understanding of the technology you’re about to implement. Before approaching the works council, make sure you yourself have a solid overview.

This includes:

  • Knowledge of different AI technologies and their specific applications in HR
  • Understanding the legal and ethical implications of AI-driven decisions
  • Awareness of common risks in AI systems and mitigation strategies
  • Clarity on what AI can and cannot actually do

At Brixon AI, we’ve seen that many managers can’t clearly differentiate between rule-based automation and self-learning systems—yet this distinction is crucial for questions of transparency and control.

“We’ve been training managers and works council members together for years. The most common ‘aha’ moment comes when both sides realize that modern AI systems can indeed be designed to be explainable and controllable—if planned for from the start.” – Thomas Meyer, AI Implementation Expert

3. Develop a Clear Business Case with Measurable Benefits

A convincing business case lays the groundwork for constructive discussions. It should clearly define:

  • What specific problems the AI system is intended to solve
  • What measurable improvements are expected (with realistic metrics)
  • How employees will benefit from the implementation
  • Which alternatives were considered and why they are less suitable

It’s crucial that you work out not just the business advantages, but improvements for staff as well. For example:

  • Reducing repetitive tasks
  • Faster and more objective decision-making processes
  • Better matching in job placements
  • Personalized training recommendations

A 2023 Bitkom survey found that 72% of works council members agree to AI projects if there is clear evidence of improved working conditions—but only 23% if the main focus is purely efficiency gains.

4. Prepare Persuasive Answers to Critical Questions

Anticipate possible questions and prepare well-founded, honest answers. A 2023 survey from the Institute for Co-determination in Business shows that 83% of works council members found open communication about potential risks helped build trust.

Here are some typical questions and possible responses:

  • Critical question: Will jobs be lost to AI?
    Convincing answer: “We do not plan to cut jobs. The aim is to automate repetitive tasks so employees can focus on value-adding activities. We’d like to develop a skills program together with you.”
  • Critical question: How transparent are the system’s decisions?
    Convincing answer: “We’re using explainable AI models, where you can see which factors influenced a decision. Also, final decisions will always be made by humans—AI only provides recommendations.”
  • Critical question: How is data protection and security ensured?
    Convincing answer: “We’ve developed a data protection concept, which we’d be happy to review with you in detail. Key points are: data minimization, clear purpose limitation, and technical security measures.”
  • Critical question: How do you ensure the system does not discriminate?
    Convincing answer: “We will test and continuously monitor the system for bias. In addition, we’d like to work with you to develop an audit procedure that regularly checks the system’s fairness.”

A proactive information strategy also helps: Provide the works council with relevant materials, studies, and best-practice examples from comparable companies in advance of your meeting.

With this level of preparation, you’re ready for constructive dialogue with the works council—which we’ll explore in detail in the next section.

Dialogue with the Works Council: Communication Strategies for Successful Persuasion

Now things get practical: How can you structure dialogues with the works council for productive cooperation? Our experience at Brixon AI shows four particularly effective approaches.

1. Joint Workshops and Training Sessions

Building knowledge as a team is an excellent starting point. In 2024, the Co-determination Competence Center found that works councils’ willingness to support AI projects increased by an impressive 48% when they took part in joint training with management.

Organize hands-on workshops where management and works council representatives:

  • Develop a shared understanding of AI technologies
  • Discuss concrete use cases within the company
  • Jointly identify and assess potential risks
  • Develop practical solution strategies together

Interactive formats involving external, neutral experts who include both perspectives are especially effective. One workshop participant reported: “The shared learning process not only boosted our expertise, but also vastly improved our mutual understanding.”

2. Transparent Communication About Goals and Limitations

Avoid exaggerated promises or downplaying risks. Nothing undermines trust faster than overly high expectations that are later disappointed.

Instead, we recommend:

  • Clearly communicate which specific problems AI is supposed to solve
  • Be transparent about the limitations of the technology
  • Discuss potential risks honestly and your mitigation plans
  • Explain how the effectiveness of actions will be checked

For example: Instead of saying, “Our AI-driven recruiting will be entirely objective,” a more honest statement would be: “The system can reduce some unconscious biases, but we’ll regularly check results for fairness and retain human oversight.”

“Being honest about limitations builds more trust than extravagant promises. Works councils aren’t opposed to technology—they just want to ensure employee interests are safeguarded.” – Michaela Schulz, Works Council Chair of a Mid-sized Industrial Company

3. Developing a Shared Language for AI Topics

An underestimated success factor is developing a shared language. The Institute for Work and Technology (IAT), in its 2023 “AI in Dialogue” guide, recommends creating a glossary that explains technical concepts in understandable terms.

This glossary should:

  • Translate technical jargon into everyday language
  • Provide practical examples for abstract concepts
  • Be endorsed by all participants
  • Be treated as a living document, updated continuously

Such a common vocabulary avoids misunderstandings and makes discussions more productive. At Brixon AI, we’ve found that visual metaphors are very helpful: “AI as a navigation system with humans still in the driver’s seat” is an image that resonates intuitively with both managers and works councils.

4. Promoting Regular Exchanges of Experience

The dialogue shouldn’t end with initial approval. Establish structures for ongoing exchange throughout the implementation and beyond.

Proven formats include:

  • Regular update meetings with status reports and Q&A sessions
  • Joint visits to successful reference implementations
  • Test runs accompanied by works council members
  • Balanced working groups for ongoing improvement

Setting up a joint AI committee or working group has proven particularly useful in practice. This group can meet regularly, evaluate experiences, and develop proposals for improvements.

Another effective tool is joint participation in external events on AI and the world of work. This provides access to new knowledge, fosters informal exchange, and strengthens trust.

With these four strategies, you’ll lay the foundation for a constructive partnership. In the next section, we’ll look at concrete success stories showing how other companies have successfully involved the works council in AI projects.

Best Practices: Successful Collaboration with Works Councils in AI Projects

Theory is good; practice is better. Let’s examine how companies have successfully involved works councils in AI projects through real-world examples.

Case Study 1: AI-based Applicant Management in a Mid-sized Company

A mid-sized automotive supplier (150 employees) faced the challenge of making its recruitment process more efficient without sacrificing quality. The solution: an AI-based applicant management system.

The key to success: The works council was involved from the tendering phase. Requirements for the system were defined jointly, with particular focus on transparency and anti-bias mechanisms.

Concrete measures:

  • Joint evaluation of five vendors, with works council representatives attending all presentations
  • Balanced working group to configure the system
  • Definition of “red lines” (e.g., no automated rejections without human review)
  • Joint test phase with systematic assessment of results

The result: Recruitment processes were shortened by 40% while the diversity of new hires increased. The works council chair became a project champion and even presented the results at an industry conference.

Case Study 2: Step-by-step Introduction of an AI Support System

A software company (80 employees) wanted to introduce an AI system to prioritize and semi-automatically respond to support tickets. Initial works council concerns about job security and performance monitoring were addressed through a clever, phased approach.

Key elements:

  • Starting with a limited pilot project (only certain ticket types)
  • Voluntary participation from interested staff in the pilot
  • Establishment of evaluation criteria in partnership with the works council
  • Regular feedback rounds with all stakeholders
  • Gradual rollout only after positive evaluation of each phase

“Taking it step by step eased fears. Once colleagues saw the system freed them from routine tasks and allowed them to focus on the complex cases, attitudes shifted from skepticism to support.” – Team Lead, Customer Support, Software Company

Remarkably, after a year, average processing times for support queries had dropped by 35%, while customer satisfaction rose by 18%. The team even grew, thanks to company expansion.

Case Study 3: Co-creating AI Governance Structures

A service company with 220 employees chose a particularly participatory approach: Instead of looking at individual AI projects in isolation, a company-wide AI governance framework was developed together with the works council.

Key components:

  • Establishing a balanced “AI Ethics Committee” with representatives from management, works council, and specialist departments
  • Joint development of guidelines for ethical AI use
  • Creating a vetting process for new AI applications
  • Defined escalation procedures for concerns or unexpected impacts
  • Regular monitoring and annual reviews of the guidelines

This proactive governance model was highlighted by the Federal Ministry of Labor and Social Affairs as a good practice example. It not only accelerated implementation but also significantly increased workforce acceptance.

Shared Patterns of Success

Analyzing these and other success stories reveals four shared patterns:

  1. Early involvement: The works council was always part of the project from the start, not brought in post-planning.
  2. Joint learning: There were always phases of shared knowledge building and experience exchange.
  3. Clear rules: Transparent, binding agreements provided security for all parties.
  4. Continuous evaluation: The impact of the AI systems was assessed together on a regular basis, with a willingness to adapt.

These best practices show: Successful involvement of the works council isn’t a necessary evil—it’s a true value-add that leads to better solutions. The next section shows how you can translate these insights into concrete works agreements.

Designing Works Agreements for AI Applications

Works agreements are the key tool for ensuring legally secure AI implementations while fostering acceptance. But how do you design an agreement that allows innovation and protects employee interests?

Core Elements of a Future-proof AI Works Agreement

A comprehensive works agreement on AI applications in HR should include:

  • Preamble with shared vision: A statement of shared objectives and principles for using AI
  • Precise definition of scope: Which systems and processes are specifically covered?
  • Clear definition of purpose: For what purposes can the system be used—and for which ones explicitly not?
  • Data protection provisions: Which data are stored, for how long, and for what purpose?
  • Transparency and information obligations: How are decisions explained and documented?
  • Qualification measures: What trainings are offered and how is skill-building ensured?
  • Rules on performance and behavior monitoring: What are the limits on monitoring and evaluations?
  • Design of human-machine interaction: Who has the final say in decisions?
  • Evaluation and adaptation mechanisms: How and when is impact reviewed?
  • Conflict resolution mechanisms: How are problems and disagreements handled?

The Hans Böckler Foundation found in a 2024 analysis of over 100 AI-related works agreements that the most successful agreements strike a balance between promoting innovation and protecting staff. They are neither too restrictive nor too vague, but offer a clear framework with defined flexibilities.

Balancing Innovation and Protection

The central challenge is striking a balance. Overly restrictive agreements can block innovation, while agreements that are too loose may undermine trust.

Successful balancing approaches:

  • Differentiated rules by use case and risk level, rather than blanket requirements
  • Positive shaping objectives instead of just prohibitions (e.g., “The system should strengthen employees’ autonomy” instead of just “The system may not monitor”)
  • Defined trial zones for new applications with clear boundaries
  • Joint decision-making structures for further developments and adjustments

“A good works agreement for AI should be like a good travel guide: It highlights worthwhile destinations, warns of dangerous shortcuts, but leaves enough room for your own discoveries.” – Prof. Dr. Thomas Koczelnik, Labor Law and Digitalization Expert

Evaluation and Adaptation Clauses

Evaluation and adaptation clauses are especially important, as AI technologies develop rapidly. A recommended approach is to agree on regular reviews (e.g., every six months) and define indicators that trigger early review.

Potential design elements include:

  • Setting quantitative and qualitative success indicators
  • Defined processes for collecting user feedback
  • Ongoing joint assessments of experience
  • “Sunset clauses” that require renegotiation after a fixed period

Especially innovative companies have included “change management clauses” in their agreements, allowing the systems to develop in an agile way without having to renegotiate the whole agreement every time.

Sample Clauses for Key Regulatory Areas

Below are tried-and-tested example clauses for central areas of an AI works agreement in HR:

On Purpose Limitation:

“The AI system will only be used to support personnel selection. It does not make autonomous decisions, but generates recommendations that are always reviewed by qualified HR personnel. Use for monitoring employee performance or for automatic rejections of applicants is excluded.”

On Data Protection:

“Only the data listed in Appendix A may be used for training and operating the AI system. Personal data will only be processed with explicit consent and in accordance with the GDPR. Storage is limited to [X] months, after which automatic deletion will occur. Only those listed in Appendix B have access to the data.”

On Transparency:

“All AI-generated recommendations must be accompanied by an explanation of the main factors behind the decision. This explanation must be in plain language. Employees have the right to a full explanation of any decisions affecting them. For this, a documented inquiry process is set up in accordance with Appendix C.”

On Qualification:

“All affected employees will be offered training to understand and use the AI system. Training takes place during working hours and is financed by the company. The training program is developed jointly by employer and works council and updated regularly. It covers both technical and ethical aspects of AI use.”

On Evaluation:

“The effect of the AI system is evaluated every six months based on the criteria in Appendix D and with a balanced working group. The results are documented and presented to both management and the works council. If there are significant deviations from expected results or unwanted side effects, corrective action is taken.”

These sample clauses provide a starting point for your own works agreement, but must always be tailored to your company’s specific situation.
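
As a purely illustrative complement to the data protection clause above, the following Python sketch shows how an agreed retention period can be enforced technically through automatic deletion. The field names and the retention period used here are assumptions for the example; the binding period is whatever the works agreement specifies.

```python
# Illustrative sketch: keep only applicant records that are still within the
# retention period agreed in the works agreement. Field names and the period
# below are hypothetical example values.

from datetime import datetime, timedelta

RETENTION_DAYS = 180  # example value only; the binding period is set in the agreement

def purge_expired(records: list, now: datetime) -> list:
    """Return only the records still within the retention period."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["collected_at"] >= cutoff]

records = [
    {"applicant_id": 1, "collected_at": datetime(2025, 1, 10)},
    {"applicant_id": 2, "collected_at": datetime(2024, 3, 5)},
]

remaining = purge_expired(records, now=datetime(2025, 6, 1))
print([r["applicant_id"] for r in remaining])  # record 2 is past retention and is dropped
```

In practice, each deletion run would also be logged, so that compliance with the clause can be demonstrated during the joint evaluations.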

Further Training and Skills Development: Joint Initiatives with the Works Council

The successful implementation of AI systems in HR stands or falls on employee skills—among users as well as works council members. Joint skill-building thus provides an ideal starting point for collaboration.

Building AI Skills for Works Council and Employees

A 2023 study from the Bertelsmann Foundation reports impressive figures: Companies that invest in AI education for their workforce see a 34% higher success rate for AI projects. Even more striking: When the works council also receives targeted training, implementation conflicts decrease by 56%.

Concrete approaches for skills development include:

  • Basic AI technology and terminology training for all involved
  • Advanced workshops on specific HR use cases
  • Training on legal and ethical aspects of AI
  • Site visits to companies with successful AI implementations
  • Participation in conferences and networking events

At Brixon AI we’ve had good experiences with “tandem learning”: Having a management representative and a works council member attend trainings together and relay the knowledge back to their groups. This not only improves knowledge transfer, but also strengthens mutual understanding.

Participatory Development of Training Concepts

Joint development of training concepts is especially effective. Acceptance increases significantly when the works council is involved from the very start of the process.

A structured process could look like this:

  1. Joint needs analysis: What skills are needed for successful AI use?
  2. Assessment of current status: What skills are already present, where are the gaps?
  3. Defining learning objectives: What should participants know and be able to do after training?
  4. Joint selection of formats and providers: Which training formats best suit the target group?
  5. Pilot run: Test the concept with a small group
  6. Evaluation and adjustment: Joint assessment and optimization
  7. Rollout: Implementation for all affected employees

“The best training concepts are created through dialogue. The works council often knows staff concerns and needs better than management—this knowledge is gold when designing effective training.” – Dr. Sandra Müller, Head of Digital Learning, mid-sized industrial company

Equal Opportunities in Digital Transformation

Ensuring equal opportunities is especially important. The Federal Ministry of Labor and Social Affairs, in its “Fair Digitalization” guide (2024), recommends paying special attention to the needs of older employees and those with less affinity for technology.

Possible measures include:

  • Differentiated training for various skill levels and learning types
  • Mentoring or learning buddy programs
  • “Drop-in sessions” for individual questions and support
  • Offering different training formats (in-person, digital, blended learning)
  • Sufficient time resources for skills development

Joint development and monitoring of these measures by management and the works council ensures no one is left behind on the journey to an AI-powered workplace.

Reskilling and Upskilling Programs

Where AI systems change existing job profiles, early reskilling and upskilling programs are decisive. Here, close coordination with the works council is especially important.

Best practice examples from real companies include the following elements:

  • Early analysis of job and skill changes
  • Joint development of competence profiles for modified or new roles
  • Individual development plans for affected staff, with specific steps and timelines
  • Internal job rotation to foster understanding of new ways of working
  • Supportive coaching programs to help with transitions

A particularly innovative example comes from a mid-sized industrial firm (140 employees) that launched an “AI Scout Program”: Employees from different departments, including works council members, were trained as AI experts and became multipliers and contact persons within their teams. This led to much greater acceptance of the technology and faster knowledge transfer.

The experience shows: Joint training initiatives are not only an effective means of preventing conflict, but also provide a solid basis for the successful use of AI systems in everyday business.

Implementation Process: From Theory to Practice

Once the conceptual groundwork has been laid, it’s time for practical implementation. How can you manage the process to ensure efficiency while maintaining participation?

Agile Implementation Methods with Works Council Involvement

Agile methodologies have proven especially effective in AI projects. An analysis by the Fraunhofer Institute (2023) showed that agile approaches increase the probability of AI project success by up to 42%. The challenge is effectively involving the works council in agile processes.

Some tried-and-true approaches:

  • Works council reps as part of the extended project team with defined roles and clear participation opportunities
  • Regular “review meetings” where interim results are presented and discussed
  • Joint “retrospectives” to reflect on and improve collaboration
  • Transparent documentation of all decisions and development steps

A mid-sized IT service provider developed a “hybrid model”: the development team works Scrum-style, with a works council representative acting as a Product Owner proxy at all sprint reviews, ensuring the employee perspective is included.

“Agile doesn’t mean the works council is brought in last—on the contrary. Early and regular involvement is a core principle of agile processes, as it enables continuous feedback and avoids costly course corrections at the end.” – Agile Coach, mid-sized software company

Clear Responsibilities and Communication Paths

Unclear responsibilities and communication channels are a frequent source of conflict during implementation. Reliable structures include:

  • A balanced steering committee for strategic decisions, meeting regularly (e.g., monthly)
  • Designated contacts on both sides for operational questions
  • Regular status meetings with a standardized agenda
  • Clearly defined escalation paths for any issues
  • Transparent documentation, e.g. through a shared project wiki

A communications matrix defining who is informed about what, when and by whom, has proven especially helpful in practice:

  • Project progress: reported by the project manager to the steering committee as a status report, monthly
  • Project plan changes: reported by the project manager to the steering committee and works council as a change request, as needed
  • Interim results: presented by the development team to works council representatives and key users in a demo/review, every two weeks
  • User feedback: reported by key users to the development team and works council in a feedback report, after each test phase

Dealing with Unexpected Challenges

AI projects rarely go entirely to plan. Managing unexpected challenges requires flexibility and open communication. Experience shows the following approaches help:

  • Early communication of problems—before they escalate into conflicts
  • Joint root cause analysis instead of finger-pointing
  • Collaborative development of solutions with all perspectives involved
  • Transparent documentation of “lessons learned” for future projects

A structured problem-solving process might look like this:

  1. Identify and document the problem
  2. Call a joint meeting with all relevant stakeholders (including the works council)
  3. Analyze causes and effects
  4. Develop and assess solution options
  5. Agree on an action plan
  6. Implement and track progress
  7. Evaluate results and communicate them

Early involvement of the works council in this process helps ensure technical hurdles don’t turn into crises of trust.

Measuring Success and Impact

Success and impact should be measured using mutually agreed KPIs. In addition to technical and business metrics, also include employee satisfaction, working conditions, and skill development.

A balanced set of criteria could cover:

  • Efficiency metrics: Time savings, cost reductions, throughput times
  • Quality indicators: Error rates, accuracy of AI suggestions
  • User acceptance: Frequency of use, satisfaction surveys
  • Employee impact: Job satisfaction, changes in workload
  • Skill development: Knowledge gain, new skills acquired

Jointly defining and regularly monitoring KPIs provides an objective basis to assess project success and make data-driven decisions on adjustments and further development.
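
What such joint monitoring can look like technically is shown in the minimal sketch below: measured values are compared against jointly agreed target ranges, and deviations are flagged for the next evaluation meeting. The metric names, targets, and measurements are hypothetical examples; the real set is whatever management and works council agree on.

```python
# Illustrative sketch: compare measured KPI values with jointly agreed targets
# and list deviations for the next evaluation meeting.
# Metric names, targets, and measurements are hypothetical examples.

agreed_targets = {
    "average_processing_time_days": {"max": 5.0},
    "ai_suggestion_error_rate":     {"max": 0.05},
    "user_satisfaction_score":      {"min": 4.0},  # scale of 1 to 5
}

measured = {
    "average_processing_time_days": 4.2,
    "ai_suggestion_error_rate": 0.08,
    "user_satisfaction_score": 4.3,
}

def deviations(targets: dict, values: dict) -> list:
    issues = []
    for metric, bounds in targets.items():
        value = values[metric]
        if "max" in bounds and value > bounds["max"]:
            issues.append(f"{metric}: {value} exceeds agreed maximum of {bounds['max']}")
        if "min" in bounds and value < bounds["min"]:
            issues.append(f"{metric}: {value} is below agreed minimum of {bounds['min']}")
    return issues

for issue in deviations(agreed_targets, measured):
    print(issue)  # e.g. "ai_suggestion_error_rate: 0.08 exceeds agreed maximum of 0.05"
```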

The implementation phase is the moment of truth for every AI project. With the right structures, clear communication, and a true culture of participation, it can become a shared success for management and works council alike.

Outlook: Long-term Collaboration in AI Evolution

AI implementations aren’t a one-time project—they mark the beginning of ongoing development. Long-term, partnership-based collaboration with the works council pays off for both sides.

Continuous Improvement of AI Systems

Continuous improvement of AI systems should be seen as a joint responsibility of management and works council. Experts from the Fraunhofer Institute for Intelligent Analysis and Information Systems recommend establishing a “continuous improvement process” (CIP), covering both technical and organizational aspects.

Key elements could include:

  • Regular performance checks against defined metrics
  • Systematic collection and analysis of user feedback
  • Monitoring for undesirable effects or bias
  • Joint workshops to identify areas for improvement
  • Transparent documentation of changes and their impact

Setting up an “AI Quality Circle” with representatives from specialist departments, IT, and the works council, meeting regularly to develop and prioritize improvement proposals, is especially effective.

Joint Innovation and Further Development

Joint innovation goes beyond improving existing systems—it includes ongoing exploration of new applications and technologies.

Encourage joint innovation through:

  • Regular innovation workshops with the works council and staff to develop new ideas
  • An ideas management system for AI, systematically leveraging employee suggestions
  • Joint technology scouting, with management and works council monitoring new developments
  • Shared visits to trade fairs and conferences—to widen perspectives

“The best innovation ideas often come from the employees themselves. When works council and management create spaces together for these ideas, a positive dynamic emerges, linking technological possibilities with practical needs.” – Innovation Manager, mid-sized mechanical engineering company

For example: In a medium-sized automotive supplier (150 employees), the idea for an AI-driven knowledge management system came out of a joint workshop with management and the works council. The system, which now makes production knowledge accessible and facilitates knowledge transfer between experienced and new employees, was developed cooperatively from the start—with exceptionally high user acceptance as a result.

Adapting to Changing Legal Frameworks

The EU AI Act and other regulatory developments will keep changing the legal framework for workplace AI in the coming years. Proactively working with the works council on regulatory matters can provide key advantages.

Successful companies:

  • Monitor regulatory developments together with the works council
  • Anticipate need for adjustment early and plan ahead
  • Use transition periods to jointly prepare for new requirements
  • See compliance not as a burden, but as a chance for better, more trustworthy AI systems

Setting up a joint “regulatory watch” team to monitor changes in regulations and derive recommendations has proven effective in practice.

Vision of a Human-centered, AI-supported Workplace

Long term, it’s about more than individual AI applications: It’s about shaping the future world of work together. The Stiftung Neue Verantwortung’s 2024 “Future of Work” study stresses that participative design of AI systems leads to higher acceptance, better usability, and ultimately greater business success.

Develop a shared vision with your works council that:

  • Centers people, seeing AI as support
  • Promotes continuous skills development and lifelong learning
  • Aligns business aims with good working conditions
  • Opens opportunities for new, creative, and meaningful work

This kind of shared vision can act as a compass, guiding long-term collaboration on AI and providing a framework for individual decisions.

Forward-thinking collaboration with the works council on AI is not a one-way street—it’s an ongoing dialogue that benefits both sides: Companies through greater acceptance and better solutions, the works council through early influence and skills development. Together, digital transformation can become a success story for all.

Summary: Key Recommendations for Action

In summary: How can you win over the works council for your HR-related AI projects and establish a successful partnership?

The Most Important Success Factors at a Glance

Our analysis reveals seven key factors for success:

  1. Early involvement of the works council—ideally already in the concept phase, before specific system decisions are made
  2. Jointly building competence through training and workshops for both management and the works council
  3. Transparent communication about goals, opportunities, and risks of planned AI implementation
  4. Clear rules via tailored works agreements that enable innovation while protecting employee interests
  5. Participatory implementation design with defined roles and regular feedback
  6. Continual evaluation and adaptation of systems based on jointly defined criteria
  7. Long-term perspective with ongoing development and innovation

Following these principles demonstrably leads to higher success rates for AI projects, lower implementation costs, and greater employee satisfaction.

Checklist for the Implementation Process

The following checklist offers practical guidance for your next steps:

  • ☐ Early information for the works council about planned AI projects
  • ☐ Joint foundational AI training for management and works council
  • ☐ Clearly document project objectives and expected outcomes
  • ☐ Conduct a joint risk analysis
  • ☐ Develop a participative concept for implementation
  • ☐ Negotiate and conclude a works agreement
  • ☐ Pilot phase with close monitoring
  • ☐ Establish regular evaluation meetings
  • ☐ Make adjustments based on feedback
  • ☐ Communicate successes transparently and celebrate together
  • ☐ Set up a long-term process for continuous improvement

These steps can and should be tailored to your company’s exact situation—but the basic logic of partnership remains.

Resources for Further Exploration

For further reading, we recommend the following up-to-date resources:

  • The guide “AI and Co-determination” by the Hans Böckler Foundation (2024)
  • The handout “Legally Compliant AI Implementation” from the BMAS (2023)
  • The publication series “Algorithms and Work” from the Fraunhofer Institute
  • The online platform “KI-Campus” with training offers for companies and works councils
  • The standard reference work “Works Agreements on Artificial Intelligence” (Klebe/Neugebauer, 2023)

Networks such as the “New Quality of Work Initiative” (INQA) and various industry associations also regularly offer events and forums on the topic.

Your Path to Successful AI Transformation with the Works Council

Successful AI transformation with the works council takes time, patience, and a genuine commitment to collaboration. Experience shows, though, that the initial extra effort to actively involve the works council pays off through greater acceptance, better outcomes, and more sustainable success.

“The works council is not a roadblock but can become a key driver for digital transformation—if engaged as a strategic partner from the start.” – Prof. Dr. Jutta Rump, Institute for Employment and Employability

At Brixon AI, we support you in this process with tailored workshops for management and works council, legally compliant works agreement templates, and a proven implementation approach that factors in all stakeholders’ interests.

Start today—open the dialogue with your works council, share your vision for an AI-supported workplace, and invite them to help shape that vision together. Experience shows: Early, open communication is the key to success.

Frequently Asked Questions (FAQ)

Is involvement of the works council in AI projects a legal requirement?

Yes, in many cases, involving the works council in AI projects is a legal obligation. The German Works Constitution Act (BetrVG) gives the works council extensive co-determination rights, especially through § 87 Sec. 1 No. 6 (introduction and use of technical equipment for monitoring), § 90 (information and consultation rights in planning), § 91 (co-determination on work organization), and § 95 (selection guidelines). The Federal Labor Court has made it clear in several rulings that these rights also apply to AI systems. These rights are especially strong in HR, where personal data is processed and decisions about employees are made. Ignoring these rights can lead to legal consequences—including the possible stoppage of a project.

How can I convince works councils of AI benefits if they are fundamentally skeptical?

For fundamental skepticism, a multi-layered approach works best. Start with joint training sessions that present a realistic picture of AI—avoiding exaggeration or sugarcoating. Show best practice examples, ideally by arranging visits to similar-sized companies in your industry that have successfully implemented AI. Focus on real benefits for employees, not just business metrics. Start with a small, low-risk pilot project that delivers visible improvements. Provide written assurances that the introduction of AI will not result in layoffs and that the primary aim is to automate repetitive tasks, freeing up time for more challenging work. Be patient: skepticism often only dissipates after positive experiences, so plan for the long term and keep the conversation going.

What typical mistakes should be avoided when involving the works council in AI projects?

The most common mistakes to avoid: 1) Involving the works council too late—if all major decisions are already made, they will feel bypassed. 2) Inadequate information—flooding them with complex technical jargon without enough explanation leads to mistrust. 3) Ignoring their concerns—if the council feels their input is ignored, positions quickly harden. 4) Concealing risks—do not try to hide potential disadvantages or risks; this destroys trust. 5) Refusing to compromise—don’t insist on your original plans with no flexibility. 6) Not allocating enough resources—insufficient time or budget for the participation process makes it a sham. 7) Rushing due to unrealistic deadlines—don’t set timeframes that make careful consideration impossible. These errors almost always lead to resistance, delays, and, ultimately, poorer outcomes.

How can we ensure that our AI systems do not make discriminatory decisions?

Ensuring non-discriminatory AI decisions requires a comprehensive approach: Start by carefully choosing and reviewing training data for representativeness and historical bias. Implement technical anti-bias measures such as regular fairness audits and statistical tests for discrimination patterns. Set clear fairness metrics and thresholds that trigger manual review if exceeded. Establish a “human-in-the-loop” process so that critical decisions are always double checked by people. Promote diversity within development teams to minimize blind spots. Conduct regular, documented test runs with various groups of people. Set up a transparent process for affected individuals to appeal AI decisions. These measures should be included in the works agreement and monitored by a joint oversight panel with works council involvement.
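
One of the “clear fairness metrics” mentioned above can be made tangible with a short sketch: the selection-rate comparison behind the widely used four-fifths rule. The group labels, numbers, and review threshold are hypothetical examples; which metrics and thresholds actually apply should be agreed with the works council and recorded in the works agreement.

```python
# Illustrative sketch of a simple fairness check (four-fifths rule):
# compare selection rates across groups and trigger a joint manual review
# if the ratio falls below an agreed threshold.
# Group labels and numbers are hypothetical example data.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes of an AI-assisted pre-selection over one quarter.
rates = {
    "group_a": selection_rate(selected=30, applicants=100),  # 0.30
    "group_b": selection_rate(selected=18, applicants=90),   # 0.20
}

REVIEW_THRESHOLD = 0.8  # example value; the binding threshold is agreed jointly

ratio = disparate_impact_ratio(rates)
if ratio < REVIEW_THRESHOLD:
    print(f"Ratio {ratio:.2f} is below the threshold: trigger a joint manual review")
else:
    print(f"Ratio {ratio:.2f} is within the agreed range")
```

A check like this only detects one narrow form of unequal outcomes; it complements, rather than replaces, the human oversight and appeal process described above.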

How extensive should a works agreement on AI systems be?

A works agreement on AI systems should be comprehensive but not overly detailed. It must cover all relevant topics: scope of application, defined purpose, data protection, transparency obligations, qualification measures, rules on performance and behavior monitoring, the design of human-machine interaction, and evaluation and adjustment mechanisms. At the same time, it should be flexible enough to allow for technological development without constant renegotiation. A good length is typically 10-15 pages for the main section, with technical appendices for specific details. Clarity is more important than length: every rule should be unambiguous to minimize interpretation disputes. A modular structure is recommended—differentiating long-term fundamental principles from specific technical details (which may need more frequent updates).

How do we address concerns about job losses from AI systems?

Job loss worries are among the most common concerns in AI projects. Address these proactively through several measures: Work with the works council to transparently analyze which activities are really set for automation and how remaining job profiles will change. Develop a training concept early for affected employees, offering new skills and development paths. Consider a formal employment guarantee for a defined period. In communication, emphasize relief from repetitive work and the enhancement of human tasks via AI support. Show concrete examples from similar companies where AI led to new roles and duties. Take a long-term approach to staff development—often, AI creates new roles that can be identified and shaped early. An open, fact-based dialogue is crucial.

What role does data protection play in HR-related AI projects?

Data protection is critical in HR-related AI projects, as particularly sensitive personal data is processed. The GDPR imposes strict requirements for legality, transparency, and purpose limitation. Specifically: You need a legal basis for processing (consent, contract, legitimate interest, or works agreement). In most cases, a data protection impact assessment is mandatory. Employees whose data is processed must be fully informed. Technical and organizational safeguards must be in place, such as pseudonymization, access controls, and deletion concepts. Data must not be used for purposes other than those agreed. The company’s data protection officer and, if necessary, the supervisory authority, should be involved at an early stage. A GDPR-compliant approach is not just legally required; it is a vital trust factor for both works council and employees.

How can small companies without large resources involve the works council in AI projects?

Even with limited resources, small companies can effectively involve the works council: Use low-cost or free training options like webinars, online courses (e.g., from KI-Campus), or regional digitalization support programs. Consider jointly participating in publicly funded consulting offers from ministries or economic development agencies. Instead of expensive workshops, regular focused discussions with clear agendas can be effective. Develop works agreements based on available templates (such as those from the Hans Böckler Foundation) and adapt them to your needs. Implement step-by-step with small pilot projects, rather than big-bang rollouts. Network with other SMEs to share experiences. Involve the works council in communication with AI vendors, so questions can be clarified directly. The quality of involvement depends more on the seriousness and continuity of the dialogue than on the resources spent.
