Protecting Trade Secrets When Using AI: A Practical Guide for SMEs

Why Data Protection is More Critical than Ever with AI Tools

You know the dilemma: your project managers could work much faster with ChatGPT, Claude, or Copilot. But what happens to your engineering data, client conversations, or cost calculations that get entered in the process?

The use of generative AI tools has surged in German companies, yet only a few have implemented adequate data privacy policies so far.

The problem is clear: AI tools by their nature process huge volumes of data. Unlike with classic software, that data often flows into complex models whose behavior is hard to predict.

Legally, we’re operating in the tension between the GDPR, the German Trade Secrets Act (GeschGehG), and industry-specific regulations. §2 of the Trade Secrets Act is clear: trade secrets are information that is secret, economically valuable, and protected by appropriate confidentiality measures.

But what does “appropriately protected” mean when it comes to AI tools? This is the crux for your company’s success.

As regulation increases for digital services, transparency requirements for AI providers are also rising. Companies must be able to trace how and where their data is processed.

But it’s not just about compliance. A data leak can cost your company millions – not just in fines, but in lost trust and competitive disadvantage.

The Most Common Data Privacy Pitfalls of AI Tools

Cloud-Based AI Services and Data Transmission

The biggest pitfall arises with the very first click: where does your data go when you enter it into ChatGPT, Gemini, or similar tools?

Many AI tools store chat histories or user inputs on servers that are often outside the EU, for example, in the US.

The problem: any data transmission outside the EU is subject to the rules for international data transfers under Art. 44 et seq. GDPR. You need adequate safeguards – usually in the form of Standard Contractual Clauses.

But beware: copy-pasting clauses does nothing for you. You have to assess your industry’s specific risks and implement suitable protections.

A concrete example: if you upload engineering drawings into an AI tool to generate parts lists automatically, this data could in theory be used to train future model versions.

Training Data and Model Updates

This is where things get especially tricky: many AI providers use user inputs to improve their models. What goes in as your trade secret today might become part of generally accessible knowledge tomorrow.

With many providers, it’s possible to disable use of your data for further training, at least with paid or enterprise versions. However, default settings are often problematic.

The solution lies in smart contract design. Enterprise versions usually offer better control over data usage, and some providers contractually guarantee that corporate data won’t be used for training.

Still, as the saying goes: trust is good, control is better. Implement technical data minimization measures before data even reaches the tool.
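
A minimal sketch of what such pre-submission data minimization could look like, assuming a simple regex-based redaction step; the patterns are illustrative and would need to cover your own identifiers (customer numbers, project codes, and so on):

```python
import re

# Illustrative patterns; extend with formats relevant to your business
# (customer numbers, project codes, part numbers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace obviously sensitive tokens before the text leaves your network."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the call with max.mustermann@example.com, IBAN DE89370400440532013000."
safe_prompt = minimize(prompt)
# Only the minimized text would be sent to the external AI service.
print(safe_prompt)
```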

Local vs. External AI Systems

The alternative to cloud-based services is local AI installation. Meta’s Llama or Mistral provide open-source models you can run completely on-premise.

The advantage is obvious: your data never leaves your network. You also have full control over updates and configuration.

But even here, there are traps. Open-source models come without warranties or support. You’ll need the right IT expertise and hardware resources.
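
As a rough illustration of running a model entirely on your own hardware, here is a minimal sketch using the Hugging Face transformers library with an openly available Mistral model; the model name, prompt, and hardware assumptions (a GPU with enough memory, weights already downloaded) are illustrative:

```python
from transformers import pipeline

# Everything runs on your own infrastructure once the weights are downloaded;
# no prompt or document leaves your network.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",  # uses a local GPU if available, otherwise CPU
)

result = generator(
    "Summarize the key risks of sharing engineering data with cloud AI tools.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```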

For many mid-sized companies, a hybrid approach is optimal: sensitive data remains local, less critical tasks are handled by cloud services.

Legally Compliant AI Tool Selection: Checklist for Decision Makers

Contract Design and Data Processing Agreement (DPA) Requirements

Every AI tool implementation starts with the right contract. Under Art. 28 GDPR, you must sign a data processing agreement (DPA) if the AI provider processes personal data on your behalf.

Check these core points in any AI contract:

  • Purpose limitation: May the provider use your data only for the agreed purpose?
  • Right to erasure: Can you request deletion of your data at any time?
  • Sub-processors: Who has access to your data and where are the servers located?
  • Audit rights: May you check compliance with the agreements?
  • Data protection impact assessment: Does the provider support you in DPIA processes?

A practical tip: ask the provider for a detailed data flow diagram. This helps you understand exactly where your data goes.

Especially critical: contracts with US-based providers. Here, you must also meet the requirements of the ECJ “Schrems II” ruling.

Evaluating Technical Safeguards

Legal safeguards are only half the battle. What really counts are the provider’s technical security measures.

You should require at least these security features:

  • End-to-end encryption: data is encrypted throughout the entire transmission (importance: critical)
  • Zero-trust architecture: no automatic trust; every access is verified (importance: high)
  • Tenant separation: your data is logically separated from that of other customers (importance: high)
  • Logging and monitoring: all access is logged and monitored (importance: medium)
  • Backup and recovery: secure data backup and restoration (importance: medium)

Specifically ask about certifications. ISO 27001, SOC 2 Type II, or BSI C5 are good indicators of solid security standards.

But beware of certification theater: a certificate alone doesn’t guarantee real-world security. Ask for implementation details.

Identifying Compliance-Conform Vendors

Not all AI vendors are equally suited for German mid-sized businesses. Here’s an assessment of the most important players:

Microsoft Copilot for Business: Good GDPR compliance, EU data centers available, but high licensing costs. Ideal for Office 365 environments.

Google Workspace AI: Strong technical capabilities, but checkered privacy track record. Only recommended with special contracts.

OpenAI Enterprise: Market leader in functionality but US-based. Requires careful legal review.

German/EU Providers: Aleph Alpha (Germany) or Mistral (France) offer stronger privacy compliance but more limited features.

A pragmatic approach: start with EU-based providers for sensitive use cases and rely on international players for non-critical tasks only.

Crucial: document your decision criteria. In privacy audits, you must be able to prove why you chose specific providers.

Practical Safeguards for Everyday Business

Data Classification and Access Control

Before you deploy any AI tools, you need to know: what data do you actually have? Systematic data classification is the cornerstone of any AI governance.

Establish a simple four-level system (a minimal policy sketch follows the list):

  1. Public: Press releases, website content – can be freely used with AI tools
  2. Internal: Org charts, internal processes – only with approved tools and limitations
  3. Confidential: Customer data, contracts – local or specially vetted AI systems only
  4. Strictly confidential: Development data, trade secrets – total AI ban or air-gapped systems only
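
A minimal sketch of how such a classification could be encoded as policy; the tool categories and the mapping are illustrative and would need to reflect your own approved tool list:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    STRICTLY_CONFIDENTIAL = 4

# Illustrative policy: which tool categories are acceptable at each level.
ALLOWED_TOOLS = {
    Classification.PUBLIC: {"cloud_ai", "approved_cloud_ai", "local_ai"},
    Classification.INTERNAL: {"approved_cloud_ai", "local_ai"},
    Classification.CONFIDENTIAL: {"local_ai"},
    Classification.STRICTLY_CONFIDENTIAL: set(),  # total AI ban / air-gapped only
}

def is_allowed(level: Classification, tool_category: str) -> bool:
    """Check whether a tool category may process data at this classification level."""
    return tool_category in ALLOWED_TOOLS[level]

print(is_allowed(Classification.CONFIDENTIAL, "cloud_ai"))  # False
```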

Implement technical controls: Data Loss Prevention (DLP) tools can automatically detect when employees attempt to enter sensitive data into web-based AI tools.

A practical example: configure your browser or network to prevent certain file types or content with classification tags from being transferred to external AI services.
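
A minimal sketch of such a gateway-side check, assuming your documents carry classification tags in their text or filenames; the tag names, file extensions, and the block/allow decision are illustrative:

```python
import re

# Illustrative markers a DLP rule could look for in outbound requests.
BLOCKED_TAGS = re.compile(r"\b(CONFIDENTIAL|STRICTLY[ _-]CONFIDENTIAL)\b", re.IGNORECASE)
BLOCKED_EXTENSIONS = {".dwg", ".step", ".sldprt"}  # e.g. engineering file formats

def outbound_allowed(payload: str, filename: str = "") -> bool:
    """Return False if a request to an external AI service should be blocked."""
    if BLOCKED_TAGS.search(payload):
        return False
    if filename and any(filename.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False
    return True

print(outbound_allowed("Quarterly roadmap - CONFIDENTIAL"))          # False
print(outbound_allowed("Please translate this press release text"))  # True
```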

Implementation must remain practical. Excessively restrictive measures just drive employees to seek workarounds.

Employee Training and Awareness

Your best firewall is between your employees’ ears. Without proper awareness training, even the best technology is ineffective.

Develop practical training modules:

Basic training for all staff: What are AI tools, what are the risks, which tools are approved? Duration: 2 hours, with quarterly refreshers.

Advanced training for managers: Legal basics, incident response, vendor management. Duration: half-day, annually.

Technical training for IT teams: Configuration, monitoring, forensic analysis. Duration: two days, as needed.

But beware of death-by-PowerPoint: use interactive formats. Simulate realistic scenarios where employees have to decide if certain AI uses are permitted.

A proven format: “AI Office Hours” where employees can discuss concrete use cases. You’ll simultaneously identify new risks and opportunities.

Measure your training’s effectiveness. Phishing-style simulations using AI tools can show whether your employees are genuinely alert to the risks.

Monitoring and Incident Response

If you can’t measure it, you can’t manage it. So implement systematic AI monitoring in your IT landscape.

At a minimum, track these metrics:

  • Tool usage: Which AI services are used by whom?
  • Data volumes: How much data is sent to external AI providers?
  • Anomalies: Unusual spikes in uploads or access patterns
  • Compliance violations: Use of non-approved tools or transmission of classified data

Use SIEM systems (Security Information and Event Management) to correlate AI-related events. Many traditional security tools can also monitor AI usage with the right rules.
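
A minimal sketch of the kind of rule such monitoring could implement, assuming you already collect proxy logs of requests to known AI endpoints; the event structure, destinations, and threshold are illustrative:

```python
from collections import defaultdict

# Illustrative proxy-log events: (user, destination, bytes uploaded)
events = [
    ("a.mueller", "api.openai.com", 1_200),
    ("a.mueller", "api.openai.com", 950_000),
    ("b.schmidt", "shadow-ai-tool.example", 40_000),
]

APPROVED_DESTINATIONS = {"api.openai.com", "internal-llm.local"}
UPLOAD_ALERT_BYTES = 500_000  # flag unusually large single uploads

uploads_per_user = defaultdict(int)
for user, destination, size in events:
    uploads_per_user[user] += size
    if destination not in APPROVED_DESTINATIONS:
        print(f"ALERT: {user} used non-approved AI service {destination}")
    if size > UPLOAD_ALERT_BYTES:
        print(f"ALERT: {user} uploaded {size} bytes to {destination} in one request")

# Aggregated volumes can be forwarded to your SIEM dashboards or weekly reports.
print(dict(uploads_per_user))
```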

Develop an AI-specific incident response plan. What do you do if an employee accidentally enters trade secrets into ChatGPT?

The process might look like this: immediately block the affected account, contact the AI vendor with a deletion request, assess the potential damage, and if necessary, notify the authorities.
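
As an illustration, those steps could be captured as a simple runbook structure that your ticketing templates or response tooling follow; the step names mirror the process above and are placeholders, not a complete legal checklist:

```python
from dataclasses import dataclass, field

RUNBOOK = [
    "Block the affected account",
    "Contact the AI vendor with a deletion request",
    "Assess the potential damage",
    "Notify the supervisory authority if required",
]

@dataclass
class AIIncident:
    description: str
    completed_steps: list[str] = field(default_factory=list)

    def complete(self, step: str) -> None:
        self.completed_steps.append(step)

    def open_steps(self) -> list[str]:
        return [step for step in RUNBOOK if step not in self.completed_steps]

incident = AIIncident("Trade secret pasted into a public chatbot")
incident.complete("Block the affected account")
print(incident.open_steps())  # remaining runbook steps for this incident
```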

Important: test your plan regularly with tabletop exercises. Theory and practice often diverge greatly.

Industry-Specific Requirements and Best Practices

Every sector has its own data privacy requirements for AI use. Here are the key considerations for typical mid-sized industries:

Mechanical Engineering and Manufacturing: Engineering data and production parameters are often your most valuable assets. Use AI primarily for public documentation and customer communication. For engineering AI, invest in local solutions like Fusion 360 AI or SolidWorks AI with on-premise deployment.

SaaS and Software Development: Source code and algorithms must never reach external AI systems. GitHub Copilot Enterprise with training disabled is acceptable, but check your settings regularly. For code reviews, use local large language models like CodeLlama.

Consulting and Services: Client projects and strategies are highly sensitive. Establish strict client separation: each client gets separate AI instances or workspaces. Use AI mainly for internal processes and anonymous analyses.

Retail and E-commerce: Customer data and pricing strategies are critical. Use AI for product descriptions and marketing, but never for customer segmentation using personal data in external tools.

A success story: a 150-employee engineering firm uses local AI for design optimization and cloud AI only for translating user manuals. Result: 30 percent time savings with zero compliance risk.

Document industry-specific decisions thoroughly. Supervisory authorities expect traceable risk assessments that reflect your sector’s specifics.

Building Future-Proof AI Governance

AI technology is evolving rapidly. Your governance structures need to keep pace with this speed.

Establish an AI governance board with representatives from IT, legal, data protection, and specialist departments. This committee should meet quarterly and take on the following tasks:

  • Evaluating new AI tools and providers
  • Updating AI policies to reflect legal changes
  • Analyzing AI incidents and lessons learned
  • Authorizing critical AI applications

Implement an AI register: document all AI tools in use, their purpose, processed data types and legal bases. This keeps you in control as your AI landscape grows.
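
A minimal sketch of such a register, assuming a simple CSV export is enough to start with; the field names are illustrative and should mirror your records of processing activities:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIRegisterEntry:
    tool: str
    purpose: str
    data_categories: str
    legal_basis: str
    hosting: str
    owner: str

entries = [
    AIRegisterEntry(
        tool="Local translation LLM",
        purpose="Translation of user manuals",
        data_categories="Internal documentation, no personal data",
        legal_basis="Legitimate interest (Art. 6(1)(f) GDPR)",
        hosting="On-premises",
        owner="IT / documentation team",
    ),
]

# Write the register to a CSV file that can grow with your AI landscape.
with open("ai_register.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIRegisterEntry)])
    writer.writeheader()
    writer.writerows(asdict(entry) for entry in entries)
```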

Plan for the long term: the upcoming EU AI Act will bring strict requirements for high-risk AI systems. High-risk systems will be subject to conformity assessment procedures. Start preparing now.

A pragmatic approach: begin with a simple Excel-based AI inventory and gradually expand your governance. Perfection is the enemy of good – what matters most is getting started.

Invest in ongoing education. AI law changes quickly, and what’s compliant today may be problematic tomorrow.

Frequently Asked Questions

Are we allowed to use ChatGPT for internal documents?

It depends on the type of documents. For public or internal documents without personal data, you can use ChatGPT under certain conditions. Make sure to activate the “disable chat history and training” option in the settings. For confidential business documents, you should use local AI solutions or enterprise versions with special privacy guarantees.

Which AI tools are GDPR compliant?

GDPR compliance depends more on configuration and contracting than on the tool itself. Microsoft Copilot for Business, Google Workspace AI with EU-hosting, and European providers like Aleph Alpha offer strong foundations. What matters are proper data processing agreements, EU data hosting, and guarantees against training with your data.

What if employees accidentally enter trade secrets?

Act quickly: document the incident, immediately contact the AI provider with a deletion request, and assess the potential damage. Most reputable providers have procedures for such cases. What’s crucial is a predefined incident response plan and regular employee training to prevent issues in the first place.

Are local AI solutions always safer?

Not automatically. Local AI systems offer better data control, but you are responsible for security, updates, and compliance. Without proper IT expertise, local systems can even be less secure than professionally managed cloud services. The optimal solution is often a hybrid: local AI for sensitive data, cloud AI for non-critical applications.

How often should we review our AI governance?

Review your AI governance at least quarterly. The AI landscape changes fast – new tools, laws, and security threats require regular updates. In addition, carry out extraordinary reviews after any major incident, change in legislation, or when introducing new AI tools.

Do we need a data protection impact assessment (DPIA) for AI tools?

A DPIA is often necessary for AI tools, especially if you process large volumes of personal data or make automated decisions. Check Art. 35 GDPR: if there is “high risk” to data subjects, a DPIA is mandatory. When in doubt, conduct a DPIA – it also helps you systematically identify and minimize risks.

What are the costs for a privacy-compliant AI implementation?

Costs vary significantly depending on company size and security requirements. Expect to pay €5,000–15,000 for initial legal review and policy development, €2,000–5,000 per year for enterprise AI licenses, and €10,000–30,000 for technical security measures. Local AI systems require additional hardware investments starting at €20,000. The ROI comes from avoiding fines and boosting productivity.
