
Why Your Video Tool's Source Code Matters More Than Its Privacy Policy

The trust model is backwards

When you evaluate a video messaging tool, the vendor points you to their privacy policy. It is a legal document, usually 3,000 to 8,000 words, written by lawyers, reviewed by lawyers, and designed primarily to protect the company — not to inform you.

Privacy policies are not transparency tools. They are liability shields.

This distinction matters because video tools process a category of data that most software does not: biometric data. Faces, voices, behavioral patterns — data that, when processed to uniquely identify individuals, falls within the special categories of GDPR Article 9. The legal requirements for processing this data are strict, and the consequences for getting it wrong are significant.

Yet the primary mechanism we use to evaluate whether a tool handles this data responsibly is a document that almost nobody reads, that changes without notice, and that is specifically crafted to maximize the vendor’s legal flexibility.

What privacy policies actually permit

Read the privacy policies of major video platforms carefully and you will find patterns that should concern any compliance team.

Broad data usage rights. Most policies reserve the right to use your data for “improving services,” “developing new features,” or “research purposes.” These phrases are legally expansive enough to cover almost any use of your recordings, including training machine learning models on your face and voice data.

Unilateral modification. Nearly every privacy policy includes a clause allowing the company to change its terms at any time, with notice typically limited to posting an update on their website. Your consent today does not bind the company to the same terms tomorrow.

Third-party sharing. Policies routinely permit sharing data with “service providers,” “affiliates,” and “business partners” — categories that can include dozens or hundreds of entities. The specific companies receiving your data are rarely named.

Retention ambiguity. “We retain data for as long as necessary” is a standard clause that tells you nothing about actual deletion timelines. Some policies distinguish between “deactivation” and “deletion,” where deactivation means your data continues to exist on their servers indefinitely.

None of this is illegal. These are standard industry practices. But when the data in question includes biometric information about your employees and customers, standard industry practice is not the same as adequate protection.

Why open source changes the equation

Open source does not make software automatically trustworthy. But it changes the verification model fundamentally.

With closed-source software, trust is based on claims: the vendor says they handle your data responsibly, and you either believe them or you do not. The privacy policy is the extent of your visibility into their practices.

With open-source software, trust is based on evidence. Your security team can read the code and verify:

  • How recordings are stored. Are they encrypted at rest? With what algorithm? Who holds the keys?
  • What happens during processing. Is video data sent to external services for transcription or analysis? Are thumbnails generated locally or via a third-party API?
  • How deletion works. When a user requests deletion, does the code actually remove the data from all storage layers, or does it flip a boolean flag and leave the files intact? (A sketch of this difference follows the list.)
  • What telemetry exists. Does the application phone home? What data does it transmit, and to whom?
  • Whether data is used for training. Is recording data fed into ML pipelines? An open codebase makes any such pipeline visible to your auditors rather than a matter of trust.
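
To make the deletion point concrete, here is a minimal TypeScript sketch of the two patterns an auditor looks for. The names (the recordings table, ObjectStore, softDeleteRecording, hardDeleteRecording) are hypothetical and not taken from SendRec or any other codebase; what matters is the difference between flipping a flag and removing the data from every storage layer.

    // Hypothetical sketch: what an auditor looks for behind a "delete" button.
    // Neither function is SendRec code; the storage interfaces are placeholders.

    interface Database {
      execute(sql: string, params: unknown[]): Promise<void>;
    }

    interface ObjectStore {
      listObjects(prefix: string): Promise<string[]>;
      deleteObject(key: string): Promise<void>;
    }

    // Pattern 1: "deletion" that only flips a flag. The recording vanishes
    // from the UI, but the video, thumbnails, and transcript stay on disk.
    async function softDeleteRecording(db: Database, recordingId: string): Promise<void> {
      await db.execute("UPDATE recordings SET is_deleted = true WHERE id = $1", [recordingId]);
    }

    // Pattern 2: deletion that removes the data from every storage layer:
    // derived artifacts, the recording object itself, and the metadata rows.
    async function hardDeleteRecording(
      db: Database,
      store: ObjectStore,
      recordingId: string,
    ): Promise<void> {
      const derived = await store.listObjects(`derived/${recordingId}/`);
      for (const key of derived) {
        await store.deleteObject(key); // thumbnails, transcripts, previews
      }
      await store.deleteObject(`recordings/${recordingId}.webm`);
      await db.execute("DELETE FROM transcripts WHERE recording_id = $1", [recordingId]);
      await db.execute("DELETE FROM recordings WHERE id = $1", [recordingId]);
    }

With the source in front of you, your team can check which of these two patterns the delete endpoint actually follows. With a closed-source tool, the retention clause in the privacy policy is all you have.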

This is the same principle behind open-source cryptography. The security community learned decades ago that cryptographic algorithms must be publicly auditable because “trust us, it is secure” is not a security posture. The same logic applies to software that processes biometric data.

Kerckhoffs’s principle — that a system should be secure even if everything about it except the key is public knowledge — is the foundation of modern cryptography. Applied to video tools: a platform should be trustworthy even when you can see exactly how it works. If transparency would compromise the product, the product has a problem.

The self-hosting dimension

Open source also enables something that no privacy policy can guarantee: self-hosting.

When you run a video tool on your own infrastructure, the jurisdictional questions disappear. There is no third-party processor. There is no cross-border data transfer. There is no CLOUD Act exposure. Your recordings live on servers you control, in a jurisdiction you choose, with access limited to people you authorize.

For organizations in regulated industries — healthcare, financial services, government, legal — this is not a nice-to-have. It is increasingly a requirement. And it is only possible when the source code is available.

A proprietary video tool can promise EU data residency. An open-source, self-hostable video tool lets you guarantee it.

What SendRec does

SendRec is built on this principle. The entire platform is open source under AGPLv3 — a copyleft license that ensures transparency cannot be stripped away. If someone forks SendRec and offers a modified version to users, even as a hosted network service, they are required to make their modifications available under the same license. The transparency guarantee travels with the code.

The infrastructure decisions follow the same logic:

  • EU-owned hosting: Hetzner (German-owned) for compute and storage. No US cloud providers in the data path.
  • Self-hostable: Docker Compose and Helm charts for teams that need full infrastructure control. Not an enterprise add-on — a first-class deployment path.
  • No external dependencies in the data path: No third-party analytics, no external CDN for recordings, no tracking scripts. Every component is auditable.
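
As one illustration of what "no external CDN for recordings" can look like in practice, here is a minimal TypeScript sketch, assuming an S3-compatible object store (such as MinIO) running on your own infrastructure. The endpoint, bucket name, and key layout are illustrative assumptions, not a description of SendRec's implementation: the recording is served through a short-lived presigned URL, so the bytes only ever flow between the viewer and storage you operate.

    // Illustrative only: serving a recording straight from self-hosted,
    // S3-compatible storage via a short-lived presigned URL.
    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

    // The endpoint points at storage you run (e.g. MinIO); credentials come
    // from the environment. No third-party CDN or analytics sit in this path.
    const s3 = new S3Client({
      region: "eu-central-1",
      endpoint: "https://storage.internal.example", // hypothetical self-hosted endpoint
      forcePathStyle: true,
    });

    // Returns a URL valid for 15 minutes, signed with keys you hold.
    export async function recordingUrl(recordingId: string): Promise<string> {
      const command = new GetObjectCommand({
        Bucket: "recordings",
        Key: `recordings/${recordingId}.webm`,
      });
      return getSignedUrl(s3, command, { expiresIn: 900 });
    }

Because every hop in that path is something you deploy and can read the code for, "auditable" stops being a marketing word and becomes a task your security team can actually perform.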

SendRec is in early development. The codebase is public, the architecture decisions are documented, and we are building in the open. If your team processes video recordings and you have been relying on privacy policies as your primary trust mechanism, it might be worth considering what verifiable transparency looks like instead.

Join the waitlist for early access, or follow the project on GitHub.