Part 2 of the Data Egress Review Series
In Part 1, we walked through why data egress requests are different from normal data access. We covered what privacy engineers are actually reviewing and looked at a real example: a marketing team wanting to send customer PII to a third-party analytics vendor. The request had leadership approval. The business case was solid.
However, the timeline was aggressive, and the request can’t move forward until one critical question is answered: is this vendor secure enough to handle our customers’ data? This is where most teams get stuck. They assume “leadership approved the vendor” means the technical review doesn’t have to happen, or that someone else already did it. Neither is true, and you should never assume. Business approval and security approval are completely different gates, and if fines ever come into play, that distinction is what regulators will focus on.
Leadership Approval Doesn’t Mean Security Approval
When leadership approves a vendor, they’re evaluating business fit: Does this vendor solve our problem? Can we afford them? Do they have good references? Are they reputable in the market? That’s not the same thing as asking: Can this vendor protect our data?
Business teams don’t typically have visibility into a vendor’s security architecture, breach history, encryption standards, or compliance certifications. It’s usually not their concern, mostly because it’s not their job. It is ours.
This is not to say no one on the business side understands this, but these questions require someone technical enough to grasp the nuances. The other thing I have noticed is that a lot of companies sell the dream without actually knowing how to deliver the product. You see it all the time: most companies are a replication of another, finding a gap in an existing product and using that as the message to buy theirs instead. Sometimes they deliver, and sometimes they just have good salespeople who can sell the dream. The company uses that buy-in to try to build in real time, and your company becomes the case study for it failing or pulling through.
So when a data egress request lands on our desk with a “leadership approved” vendor, we are looking for any of those flaws.
We need to be able to verify:
- That the vendor’s security certifications and compliance frameworks meet industry recommendations.
- That their breach history and incident response capabilities exist and are documented.
- How they protect sensitive data, how it’s configured, and how it’s shielded from unauthorized users.
- How they handle encryption, both in transit and at rest.
- Who has access to the data once it’s in their system, and whether access workflows are established.
- How long they will retain the data, and how they will prove they deleted it.
- What happens if they get acquired or go out of business.
- What the mechanism is to notify those impacted if a breach does occur.
- What the SLA for support is, and so on.
Without answers to these questions, the request doesn’t move forward. Period.
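The questions above can be captured as a simple structured checklist, which makes the “doesn’t move forward” gate explicit. This is a minimal sketch in Python; the topic names and field names are illustrative, not any standard tooling.

```python
from dataclasses import dataclass

@dataclass
class EgressQuestion:
    topic: str
    question: str
    answered: bool = False   # flipped once evidence is collected
    evidence: str = ""       # link or note pointing at the proof

# The six core areas, each a gate the request must clear.
CHECKLIST = [
    EgressQuestion("certifications", "Do certifications meet industry recommendations?"),
    EgressQuestion("breach_history", "Is breach history and incident response documented?"),
    EgressQuestion("encryption", "Is data encrypted in transit and at rest?"),
    EgressQuestion("access", "Is access scoped, role-based, and logged?"),
    EgressQuestion("retention", "Is retention bounded, with provable deletion?"),
    EgressQuestion("continuity", "Is there an acquisition / wind-down plan?"),
]

def can_proceed(checklist: list[EgressQuestion]) -> bool:
    # The request does not move forward until every question has an answer.
    return all(q.answered for q in checklist)
```

The point of structuring it this way is that the gate is all-or-nothing: one unanswered question blocks the whole request.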
The Vendor Security Assessment Framework
There is no template for how to do these assessments, because the work is context-dependent and every company handles it differently. Some have dedicated Third-Party Risk Management (TPRM) teams. Others fold this into InfoSec or Privacy teams. Some use vendor questionnaires. Others rely solely on certification audits. But some core questions remain the same.
1. Does this vendor hold recognized security certifications?
The gold standards we look for:
- SOC 2 Type II – An independent auditor verified the vendor’s security controls over time, not just at a single point.
- ISO 27001 – International standard for information security management systems.
- PCI DSS – Required if they handle payment card data.
- HIPAA compliance – Required for organizations that handle protected health information.
- FedRAMP – Required authorization framework for cloud services handling government data.
- COBIT – Governance framework that helps organizations align their IT controls, including privacy and security practices, with business objectives and regulatory requirements.
- NIST SP 800-161 (Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations) – Provides guidance for identifying, assessing, and mitigating cybersecurity risks throughout the supply chain at all levels of an organization.
Red flag: A vendor with no certifications at all. That doesn’t automatically disqualify them, but it means we need to dig into their security practices in a more hands-on way.
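One piece of this step is easy to automate: flagging a stale report. A minimal sketch, assuming an annual renewal window for SOC 2 Type II reports; the 365-day threshold is a policy choice for illustration, not a fixed rule.

```python
from datetime import date

# SOC 2 Type II reports are expected to be renewed annually;
# treat anything older than a year as stale (illustrative threshold).
SOC2_MAX_AGE_DAYS = 365

def soc2_report_is_current(report_date: date, today: date) -> bool:
    """Return True if the report is within the renewal window."""
    return (today - report_date).days <= SOC2_MAX_AGE_DAYS
```

In the case study later in this post, exactly this check is what caught an 18-month-old SOC 2 report.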
2. What’s their breach history?
We check:
- Have they had any reported data breaches in the last 3 – 5 years?
- How did they respond? (Remediation, notification timelines, customer communication)
- What was the root cause? (Preventable negligence vs. a sophisticated attack?)
A breach doesn’t automatically disqualify a vendor either. What would is how they handled it. Did they respond cleanly and efficiently? Were affected customers notified in a timely manner? Did they update their controls to ensure it doesn’t happen again?
Red flag: Multiple breaches with the same root cause, or slow or opaque incident response.
3. How do they encrypt data?
We need specifics:
- In Transit: Are they using TLS 1.2 or higher? (TLS 1.0 and 1.1 are deprecated and insecure)
- At rest: What encryption standard? (AES-256 is the baseline)
- Key management: Who controls the encryption keys? Do they use a reputable KMS?
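On the in-transit side, you don’t have to take the vendor’s word for the TLS floor: your own client code can refuse anything below TLS 1.2. A sketch using Python’s standard `ssl` module:

```python
import ssl

# Build a client context that refuses anything below TLS 1.2.
# (TLS 1.0 and 1.1 are deprecated and insecure.)
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A connection opened with this context to a vendor endpoint will
# fail the handshake if the server only offers TLS 1.0 or 1.1, so a
# non-compliant vendor surfaces as a connection error, not silent risk.
```

The same idea applies in any language: enforce the floor in your client configuration rather than trusting the remote default.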
4. Who has access to the data?
This is often overlooked. Just because a vendor is “secure” doesn’t mean everyone at that vendor should see your data.
We ask:
- Which specific teams will have access?
- Is access role-based (RBAC), and does it follow the principle of least privilege?
- Do they log and monitor data access?
- Can they provide audit trails if we request them?
Red flag: Broad access with no segmentation. If “all engineers have access to customer data,” that’s a risk.
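One rough way to operationalize that red flag is to scan the vendor’s stated access grants for org-wide groups touching customer data. A hypothetical sketch; the group names and scope labels are made up for illustration:

```python
# Org-wide groups that should never hold blanket customer-data access
# (hypothetical names for illustration).
ORG_WIDE_GROUPS = {"all_engineers", "everyone", "all_staff"}

def flag_broad_access(grants: dict[str, list[str]]) -> list[str]:
    """Return the teams whose grants violate least privilege.

    `grants` maps a team name to the data scopes it can read.
    """
    return [
        team
        for team, scopes in grants.items()
        if team in ORG_WIDE_GROUPS and "customer_data" in scopes
    ]
```

A narrowly scoped grant like “data_science can read customer_data” passes; “all_engineers can read customer_data” gets flagged.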
5. How long do they keep the data?
Data retention policies matter for both compliance and risk management.
We need to know:
- How long will they store the data?
- What’s their deletion process? (Hard delete vs. soft delete)
- Can they provide proof of deletion on request?
- Do they have automated retention policies or a manual process?
Red flag: Indefinite retention with no clear deletion process creates ongoing liability. It can also signal that the vendor is out of compliance with rules governing how data may be used and how long it may be kept. This matters for data deletion requests too: if you don’t understand the laws in play, and whether the company you want to share data with follows them, you should not move forward.
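Once a retention cap is agreed, the deletion check itself is simple to express. A sketch assuming a six-month cap and a list of ingest dates; the data shapes here are illustrative, not a real vendor API:

```python
from datetime import date, timedelta

# A six-month retention cap, as negotiated in a contract addendum
# (illustrative policy value).
RETENTION = timedelta(days=180)

def records_due_for_deletion(
    records: list[tuple[str, date]], today: date
) -> list[str]:
    """Return the IDs of records held past the retention cap.

    `records` is a list of (record_id, ingested_on) pairs.
    """
    return [rid for rid, ingested in records if today - ingested > RETENTION]
```

An automated job running this daily, plus a log of what it deleted, is exactly the “proof of deletion on request” the checklist asks for.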
6. What happens if they get acquired or shut down?
Vendor instability is another factor, especially when working with startups.
We ask:
- What’s their data portability plan?
- Can we export our data if needed?
- What happens to customer data if they’re acquired?
- Do they have a wind-down process for service termination?
Red flag: No plan. If they can’t answer these questions, they haven’t thought about business continuity from a customer data perspective.
Now, I have given a few different scenarios for what we are solving for. Our goal is to validate that the company followed best practices so that both it and the clients it works with have a smooth operating model. If we use the example from our marketing team in Part 1, you can see where some of these questions come in handy.
Case Study: The Marketing Team Request
You have read the six steps.
Now here is what they actually look like when you are in the middle of one.
Remember the marketing team from Part 1? They wanted to send customer PII, names, emails, addresses, and browsing history, to a third-party analytics vendor. Leadership had already approved it. The contract was signed. The timeline was six weeks.
Here is how each step played out.
Step 1: Certifications. The vendor had SOC 2 Type II, which cleared the first bar. But the report was 18 months old. SOC 2 reports are expected to be renewed annually. We requested the current one before moving forward.
Step 2: Breach History. No publicly reported incidents. Clean record. We also reviewed their incident response policy to confirm they had a documented plan, not just a clean past.
Step 3: Encryption. TLS 1.3 in transit and AES-256 at rest, both acceptable. But when we asked about key management, the answer was vague. “Securely stored” is not an answer. We pushed for specifics. They were using AWS KMS, which is fine, but the vagueness made us look harder at everything else.
Step 4: Access Controls. Only their data science team had access, and it was logged. Good. Then we asked the question that always catches people off guard: do any subprocessors touch this data? The answer was yes. A third-party cloud provider for storage and a separate vendor for email delivery. Now we had two more vendors to assess. This is normal. One vendor often means three or four downstream dependencies.
Step 5: Retention. Their default policy was indefinite retention unless the customer requests deletion. That was a hard stop. GDPR and CCPA both require defined retention periods. We went back to the marketing team and asked how long they actually needed the data. Six months. We required the vendor to implement a six-month retention policy with automated deletion and documented it in a contract addendum.
Step 6: Business Continuity. We asked what happens if they get acquired or shut down. They had a data portability clause, meaning we could export our data if needed. Acceptable.
The Outcome. Risk level: medium. Not perfect, but defensible given the business need and the mitigations we put in place. Approved with conditions: updated SOC 2 within 60 days, retention capped at six months with proof of deletion, quarterly access reviews, and required notification if any subprocessors change.
The marketing team was not thrilled about the extra steps. But the alternative was approving unmitigated risk. We chose controlled risk over blind trust.
That is what a vendor security assessment looks like when it is actually working. Not a checklist you fill out and file. A live process that uncovers things the original request never surfaced.
In Part 3, we cover what happens after the assessment: how to document the decision, how to make the risk defensible, and what a proper solution diagram needs to show before data transfer gets approved.
