Part 1 of the Data Egress Review Series
One of the biggest challenges engineers face is that most people don’t understand what happens when data needs to leave the secured perimeter. Your company spends months building secure infrastructure: access controls, encryption, monitoring, everything designed to let people do their jobs without friction inside that container. Then reality hits: sometimes data has to be sent out to vendors for legitimate business reasons. And you can’t simply say no unless there is a valid reason to reject the request.
So instead of focusing on the negatives, you place controls on data at rest, in transit, and on its way out, and which controls apply boils down to the data type itself. Each type has its own set of rules, and your job as a privacy engineer is to ensure transfers follow industry best practice.
That’s the gap most people miss.
They know how to work within the system, but they don’t know what it takes to move data outside of it. So they submit vague requests. They assume leadership approval means technical approval. They don’t realize that “customer identifiers” could mean the difference between a routine export and a CCPA or HIPAA violation. Then they’re surprised when the request gets rejected, or worse, stuck in back-and-forth emails for weeks while they try to explain why the data is even necessary.
This article walks through how to handle such scenarios, structured like a compliance review checklist.
You’ll learn:
- Why data egress requests are different from normal data access
- The specific questions that determine whether your request moves forward or gets rejected
- What “valid business use case” actually means in practice
- How to assess whether you even need to send the data in the first place
- The difference between a vendor being “approved by leadership” and being approved by security
- How to document your request so it doesn’t get stuck in review hell
Who this is for:
Anyone who’s ever submitted a data egress request and wondered why it took three weeks and five meetings to get approved. Aspiring privacy engineers who want to see the review process from the inside. Teams who work with external vendors and need to understand what privacy/security is actually evaluating. Or those who are just curious about the end-to-end flow of data.
What You’ll Understand By the End:
The real reason your “simple” data export request isn’t simple. And how to navigate the process without the frustration of constant back-and-forth.
What Privacy Engineers Are Actually Reviewing
When you submit a data egress request, you’re asking for permission to move data outside the security controls your company spent months building. Data egress means data leaving your controlled environment. It’s anything that crosses the secured perimeter: sending customer data to marketing platforms, sharing employee records with payroll vendors, uploading files to external cloud storage, integrating with third-party CRMs, or exporting reports to partners.
Inside your secured environment, there are controls for access, monitoring, and encryption. If something goes wrong, the company can detect it and respond immediately. Outside that environment? That trust transfers to a third party, who now protects data the company is still legally responsible for. If they get breached, misuse it, or fail to delete it when they’re supposed to, it’s your company that faces the fines, lawsuits, and regulatory consequences. This is why the review process exists.
One of the main rules to understand is that once data leaves, the company can no longer control what happens to it. Which means we need to make sure that what you are asking for makes sense for the business, is justified, properly mitigated, and legally defensible.
Here’s what that actually means when your request comes in:
- We’re looking at the data itself – Is this generic analytics or Personally Identifiable Information (PII)? PII triggers stricter regulatory requirements. If your form says “customer identifiers,” we need to know exactly what fields you’re sending. Names and emails are treated very differently than anonymized user IDs. And you can’t just name the data, you need to provide a detailed inventory of it: where it came from, where it’s going, the full paths, who will have access to it, whether it will sit in a protected environment, and whether it’s locked down from individuals who have no business need to see it.
- We’re assessing the vendor – “Leadership approved this vendor” doesn’t mean they’re secure. We need to verify they have proper certifications, encryption standards, and a track record that doesn’t include recent breaches. If they’re a startup with no SOC 2 report, that’s a red flag. Most big companies have teams that do this for you. For us, it was our Third-Party Risk group. In our request, we just ensure we capture those details, along with any other credentials, before giving the green light.
- We’re reviewing the technical implementation – How is the data being transmitted? Is it encrypted in transit? What protocol are you using? Where is it stored once it arrives? Who has access? How long do they keep it? These questions are regulatory requirements. Typically you can get these details by asking for a solution diagram. This will let you see the flow of data and verify whether the proper controls are in place, or whether the protocols need revision.
- We’re documenting the risk – Sometimes the ideal solution (don’t send the data, use anonymized data instead, build it in-house) isn’t feasible. When that happens, we document why the risk is acceptable given the business need, what controls are in place, and what the fallback plan is if something goes wrong. This will also require C-suite approval, which places accountability on leadership if stakeholders later deviate from what was agreed upon.
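To make the “detailed inventory” expectation above concrete, here is a minimal sketch in Python of what a well-specified egress request might capture. The class and field names are my own illustrative assumptions, not taken from any real intake form; the point is that every question a reviewer will ask has a dedicated slot, so vague answers surface immediately.

```python
from dataclasses import dataclass, field

# Hypothetical egress-request record. Field names are illustrative,
# not from a real intake form.
@dataclass
class EgressRequest:
    requester: str
    vendor: str
    purpose: str                       # business justification
    fields: list[str]                  # exact fields, not "customer identifiers"
    source_system: str                 # where the data comes from
    destination: str                   # where it lands at the vendor
    transport: str                     # e.g. "SFTP", "HTTPS API"
    encrypted_in_transit: bool
    encrypted_at_rest: bool
    retention_days: int                # how long the vendor keeps it
    who_has_access: list[str] = field(default_factory=list)

    def missing_details(self) -> list[str]:
        """Return the gaps a reviewer would send this request back for."""
        gaps = []
        if not self.fields:
            gaps.append("exact data fields")
        if not self.encrypted_in_transit:
            gaps.append("encryption in transit")
        if self.retention_days <= 0:
            gaps.append("retention period")
        return gaps
```

A request that only says “customer identifiers” would leave `fields` empty and get flagged before a single meeting is scheduled.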
How Data Egress Actually Works
When a request comes in, the requester (usually someone from the business side) has a use case for sharing data. Our job as privacy engineers is to ensure that use case is valid, documented, and has the right controls in place to keep the company compliant. There’s no universal checklist for this, every request is different. But there are core questions we ask to assess risk, and those questions are tailored based on what we learn as we dig in.
Here’s how we approach it:
First, we figure out where the data is going and what it contains:
- Is this an internal transfer or going to an external vendor?
- What specific data fields are you sending?
- Is this generic analytics (low risk) or Personally Identifiable Information like names, emails, addresses (high risk)?
If PII is involved, everything changes. PII triggers stricter regulatory requirements under GDPR, CCPA, and HIPAA. We need a full inventory, not just “customer identifiers,” but the exact fields, where they’re coming from, and where they’re going.
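The first triage step, sorting requested fields into PII versus low-risk buckets, can be sketched in a few lines. The field lists below are assumptions for the example, not a legal taxonomy; a real program would maintain this mapping in a governed data catalog.

```python
# Illustrative triage: which requested fields count as PII?
# These sets are example assumptions, not a legal or complete taxonomy.
DIRECT_PII = {"name", "email", "phone", "ssn", "physical_address"}
INDIRECT_PII = {"ip_address", "device_id", "browsing_history"}

def classify_fields(fields):
    """Split a request's field list into PII and low-risk buckets."""
    pii = [f for f in fields if f in DIRECT_PII or f in INDIRECT_PII]
    low_risk = [f for f in fields if f not in pii]
    return pii, low_risk

pii, low = classify_fields(["email", "anonymized_user_id", "browsing_history"])
# "email" and "browsing_history" would route this request to the stricter review path
```

Note that indirect identifiers matter too: browsing history tied to a device ID can re-identify someone even without a name attached.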
Next, we assess the vendor (if it’s external):
- Who are you sending this to?
- Is the vendor trustworthy? What’s their security track record?
- Do they hold the right certifications (SOC 2, ISO 27001)?
- Have they had any recent breaches?
In most large organizations, there’s a Third-Party Risk team that handles vendor assessments. If your company has one, we verify that assessment has been completed before moving forward. If it hasn’t, the review stops here until that’s done.
Then, we review the technical implementation:
- How is the data being transmitted? (API, SFTP, manual export?)
- Is it encrypted in transit?
- What protocol are you using?
- Where is it stored once it arrives?
- Who has access to it?
- How long does the vendor keep it?
- Is it encrypted at rest?
These aren’t optional questions; they’re regulatory requirements. If you can’t answer them, we’ll typically ask for a solution diagram that shows the full data flow so we can verify the proper controls are in place.
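Some of the transmission questions can even be pre-flighted in code before any data moves. As a sketch (the helper name and policy are my own assumptions, not a real compliance tool), a sender could refuse any endpoint that isn’t HTTPS and confirm the negotiated TLS version:

```python
import socket
import ssl
from urllib.parse import urlparse

def check_transport(url: str, min_tls=ssl.TLSVersion.TLSv1_2) -> str:
    """Illustrative pre-flight check for an egress destination:
    refuse plaintext schemes, require a modern TLS version, and
    report what was actually negotiated."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"refusing plaintext transport: {parsed.scheme!r}")
    ctx = ssl.create_default_context()       # verifies the vendor's certificate
    ctx.minimum_version = min_tls
    with socket.create_connection((parsed.hostname, parsed.port or 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
            return tls.version()             # e.g. "TLSv1.2" or "TLSv1.3"
```

A check like this answers “is it encrypted in transit?” with evidence rather than a checkbox; the storage, access, and retention questions still need the vendor’s documentation.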
Finally, we document the risk:
- What’s the backup plan if something goes wrong?
- How will you know if there’s a breach?
- Can you revoke access if needed?
- How do you verify deletion when the vendor is done with the data?
Sometimes the ideal solution (don’t send the data, anonymize it, build in-house) isn’t feasible due to budget or timeline constraints. When that happens, we document why the risk is acceptable, what mitigations are in place, and escalate to leadership for approval.
Now let’s see what this looks like in practice.
A marketing team submitted a request to send customer data to a third-party analytics vendor. The business case was solid: they needed the data to optimize campaign targeting and measure ROI. Leadership had already approved the project, allocated budget, and set a six-week launch timeline.
On the surface, this looked straightforward. But when the request landed on our desk, we immediately spotted a problem.
Here’s what the form said:
- Data: Customer identifiers and engagement preferences
- Vendor: Third-party marketing analytics platform
- Purpose: Campaign targeting and ROI measurement
- Timeline: 6-week launch
- Status: Leadership approved
The phrase “customer identifiers” is a red flag. It’s vague. It could mean anything from anonymized user IDs (low risk) to full names, emails, and addresses (high risk). And we never assume.
First step: Schedule a call with the marketing lead to clarify.
We ask: “What specific fields are you including in ‘customer identifiers and preferences’?”
Their answer: “Names, email addresses, physical addresses for customers who’ve purchased. Plus browsing history and campaign interactions.”
That answer tells us everything we need. Names, email addresses, and physical addresses are Personally Identifiable Information (PII), and PII falls under strict regulatory constraints.
PII Changes Everything
Here’s the thing about PII: it’s heavily regulated at both the national and state level because privacy laws are built around consent: the idea that people have a right to know and control what happens to their personal information. PII means data that can identify a specific individual, either directly (name, email, SSN) or indirectly (IP address combined with browsing history).
When PII is involved, the stakes are higher:
- Legal liability increases – GDPR, CCPA, and HIPAA all have specific requirements for how PII must be handled, stored, and shared
- Breach consequences are severe – If this vendor gets breached, your company faces mandatory breach notifications, regulatory fines, and potential lawsuits
- Consent matters – Depending on jurisdiction, you may need explicit user consent to share their data with third parties
- Data minimization is required – You can only send the minimum data necessary for the stated purpose
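Data minimization can often be satisfied by pseudonymizing direct identifiers before they leave. Below is a minimal sketch using a keyed hash (HMAC-SHA-256), assuming the vendor only needs to join records consistently, not contact anyone; the key and naming are placeholders of my own, and in practice the key would live in a secrets manager inside your perimeter.

```python
import hashlib
import hmac

# The secret key must never leave your environment; this literal is a
# placeholder for a value held in a secrets manager.
SECRET_KEY = b"rotate-me-and-store-in-a-secrets-manager"

def tokenize(value: str) -> str:
    """Replace a direct identifier (e.g. an email) with a stable,
    opaque token via HMAC-SHA-256."""
    return hmac.new(SECRET_KEY, value.lower().encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "campaign_clicks": 7}
outbound = {**record, "email": tokenize(record["email"])}
# The vendor can still join on the token across exports, but cannot
# recover the address without the key.
```

Whether tokenized data still counts as personal data depends on jurisdiction and on who holds the key, so this is a mitigation to propose in review, not a way around it.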
This is why we can’t just approve the request as-is.
The marketing team thought they had everything lined up, leadership approval, budget, timeline. But they didn’t account for the regulatory requirements that kick in when PII leaves the company’s control.
So, What Happens Next
Now that we’ve identified PII in the request, the review process intensifies. We can’t approve this as-is, the regulatory risk is too high.
The next phase involves:
- Verifying the vendor’s security posture (SOC 2 certification, breach history, encryption standards)
- Reviewing the technical implementation (how data is transmitted, stored, and accessed)
- Assessing whether all this data is actually necessary (can we anonymize or tokenize some fields?)
- Documenting the risk and determining what mitigations are required before approval
This is where most data egress requests hit friction. The requester thought they were done after getting leadership approval. But the technical and legal review is just getting started.
The disconnect happens because business teams focus on “why we need this” while privacy engineers focus on “how we protect this once it leaves our control.” Both perspectives are necessary. Neither is optional.
TLDR
If you’re submitting a data egress request, here’s what you need to know:
- Leadership approval ≠ technical approval – Business sign-off doesn’t mean the data can leave. Privacy and security reviews are separate gates.
- “Customer identifiers” isn’t specific enough – We need exact field names, data sources, and destinations. Vague descriptions guarantee your request will be sent back.
- PII triggers stricter requirements – If your data includes names, emails, addresses, or anything that identifies individuals, expect additional scrutiny.
- Plan for this upfront – If you know you’ll need to share data with a vendor, loop in privacy/security early, before signing contracts or setting timelines.
In Part 2 of this series, I’ll walk through how we evaluated the marketing team’s vendor, what red flags we found, and how we assessed the technical security controls before making a decision.
