When people ask whether video calls are safe, they usually want a simple answer. Technology does not really offer one. The more honest answer is: safe from whom, and at which layer?
A video call can be well protected from someone snooping on public Wi-Fi, yet still expose you to the platform hosting the call, the browser tab running it, the people inside the meeting, or the room visible behind you.
That is the core truth users often miss. Video calls are not protected by one giant shield. They are protected by layers, and each layer solves a different problem.
The network is usually safer than people think
At the transport level, modern video calls are generally far more secure than many users assume. Much of the web’s live audio and video communication relies on WebRTC, and its security design is not casual.
The IETF’s WebRTC Security Architecture says media channels must be protected with SRTP, with keys established through DTLS-SRTP, while data channels must use DTLS. It also says implementations should prefer cipher suites that provide forward secrecy and generate fresh authentication keys for each call.
In plainer terms, that means the audio and video moving across the network are not supposed to be readable to random outsiders. If your fear is that someone sitting on the same coffee-shop Wi-Fi can casually intercept your call, the answer is often no. The underlying transport stack was designed to make that difficult.
The IETF’s Security Considerations for WebRTC reinforces this point. It says public-key-based key exchange is “imperative” for WebRTC and should provide forward secrecy. It also explains that the protocol includes checks to verify the remote side actually wants to receive traffic. So on the network layer, modern video calls are usually doing real security work.
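Forward secrecy is the property doing much of the work in those requirements: session keys are generated fresh per call, so compromising a key later reveals nothing about past calls. The sketch below illustrates the idea with a toy finite-field Diffie-Hellman exchange. It is not real cryptography and not how WebRTC implements it (real stacks negotiate ephemeral keys inside DTLS, typically with ECDHE, using vetted libraries); the tiny prime and generator are purely illustrative.

```python
# Toy Diffie-Hellman illustrating forward secrecy. NOT for real use:
# the modulus is far too small and the construction is unauthenticated.
import secrets

P = (1 << 127) - 1  # toy prime modulus; real systems use standardized groups/curves
G = 3               # toy generator

def ephemeral_keypair():
    """A fresh secret per call: leaking a long-term key later
    reveals nothing about the session keys of past calls."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# Each new call generates fresh ephemeral keys on both sides.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same shared secret; it never crosses the wire.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

Because both sides discard these ephemeral values after the call, an eavesdropper who records the traffic today and steals a device tomorrow still cannot reconstruct the session key.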
Encryption in transit is not the same as total privacy
This is where the answer gets less comforting. A platform can protect your call in transit without protecting it from the platform itself.
Google says on its Google Meet security page that Meet data is encrypted in transit by default and that recordings stored in Google Drive are encrypted at rest by default. But Google also explains in its Meet client-side encryption documentation that standard Meet calls can still be decrypted by Google’s data center services for Meet. Only client-side encryption, Google says, keeps the media encrypted in each participant’s browser using keys available only to participants.
That distinction matters. It means “encrypted” does not always mean the provider is blind to the content.
Microsoft makes a similar distinction. In its support page for end-to-end encryption in Microsoft Teams calls, Microsoft says Teams uses TLS and SRTP by default, but treats end-to-end encryption as a specific mode for one-on-one calls. Its documentation for Teams meetings E2EE also makes clear that only audio, video, and video-based screen sharing are end-to-end encrypted, while chat, reactions, and several other meeting features are not.
Zoom says much the same. Its support article on end-to-end encryption says regular meetings use AES-GCM encryption for content in transit, but meeting keys are managed by Zoom’s servers unless E2EE is enabled. Once E2EE is turned on, features such as cloud recording, live transcription, and some AI tools are disabled.
That tradeoff tells you something important: when stronger privacy turns off smart features, it usually means those features needed server-side access to your meeting content.
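The distinction running through all three vendors’ documentation can be sketched in a few lines. This is a toy model, not any platform’s real design: the XOR “cipher” below stands in for AES-GCM, and the only point is who holds the key.

```python
# Toy model of key custody. The XOR keystream is a stand-in for real
# encryption (e.g. AES-GCM); do not use this construction for anything real.
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; applying it twice decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

frame = b"one frame of meeting audio"

# "Encrypted in transit": the provider's servers manage the meeting key,
# so server-side features (recording, transcription) can decrypt content.
server_held_key = secrets.token_bytes(32)
on_the_wire = toy_cipher(server_held_key, frame)
assert toy_cipher(server_held_key, on_the_wire) == frame  # provider can read

# Client-side / end-to-end keys: only participants hold the key, so the
# server relays ciphertext it cannot open -- and key-dependent cloud
# features stop working.
participant_key = secrets.token_bytes(32)
on_the_wire = toy_cipher(participant_key, frame)
assert toy_cipher(participant_key, on_the_wire) == frame  # only participants can read
```

The ciphertext on the wire looks identical in both modes; the privacy difference is entirely about where the key lives.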
The safest architecture is often the least convenient
A useful benchmark here is Signal. Signal says its calls are end-to-end encrypted by default, and in its technical post on large-scale encrypted group calls, the company explains why architecture matters. Signal says it uses selective forwarding instead of server mixing, because server mixing would require the server to view and alter media, which does not fit end-to-end encryption.
That is a deeper lesson about video-call safety: privacy is not just a feature toggle. It is also a design choice. Some systems are built to keep servers as blind as possible. Others are built to support more cloud intelligence, convenience, and integrations, which often means the server sits closer to the content.
So if you want the strongest privacy, you may have to give up some convenience.
Your browser and device are part of the risk
Even if the call itself is well encrypted, that does not make your device trustworthy.
The IETF’s WebRTC security-considerations document warns that if a calling service is delivered over insecure web pages, or if trusted pages load active content from untrusted sites, an attacker may inject code and effectively “bug” the user’s computer. The same document notes that browsers try to limit access to local resources like the microphone and camera, but screen sharing introduces risks that users do not always fully understand.
That means a secure video call can still become unsafe if the browser page is malicious, the device is compromised, or permissions are granted too casually.
In other words, encryption protects the stream. It does not clean up a bad endpoint.
Screen sharing is one of the easiest ways to leak information
Many users think about security only in terms of interception. But accidental disclosure is often the more realistic problem.
The same WebRTC security guidance describes the danger of oversharing during screen sharing. A user may think they are sharing one document when they are actually sharing an entire screen filled with icons, messages, tabs, and notifications. The RFC also describes scenarios where a malicious site could exploit screen-sharing behavior to expose sensitive information visible on screen.
This is a useful reminder that video call safety is not only about cryptography. It is also about interface design and human error. A call can be secure on the network and still leak private information through a careless share.
“Mute” is not always as absolute as users assume
One of the most unsettling findings in video-call research comes from the paper Are You Really Muted?. The researchers examined how conferencing apps handle microphones after a user presses mute and found “fragmented policies” across apps. Some apps continued monitoring microphone input while muted, others did so periodically, and one app transmitted audio statistics to telemetry servers during mute.
The paper also describes a proof-of-concept system that achieved 81.9% macro accuracy when identifying common background activities from intercepted telemetry packets.
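“Macro accuracy” averages the accuracy achieved on each activity class, so frequent activities do not dominate the score. A minimal sketch of the metric, using made-up labels rather than the paper’s data:

```python
# Macro accuracy: mean of per-class accuracy. Toy labels, not the paper's data.
from collections import defaultdict

def macro_accuracy(y_true, y_pred):
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    per_class = [correct[c] / total[c] for c in total]
    return sum(per_class) / len(per_class)

y_true = ["typing", "typing", "music", "cooking"]
y_pred = ["typing", "music",  "music", "cooking"]
# per-class accuracy: typing 1/2, music 1/1, cooking 1/1 -> (0.5 + 1 + 1) / 3
assert abs(macro_accuracy(y_true, y_pred) - 0.8333) < 1e-3
```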
That does not mean every muted app is secretly streaming your conversations. But it does mean mute is not always as simple as users imagine. In some cases, apps may still sample, summarize, or analyze aspects of audio even after the user believes the microphone is functionally “off.”
Your background can reveal more than you think
Video creates another class of risk that has little to do with network security: environmental exposure.
In the paper “The privacy protection effectiveness of the video conference platforms’ virtual background,” researchers argue that even privacy features like virtual backgrounds can be unstable enough to leak information. Their abstract says that instability in virtual backgrounds may leak users’ privacy and affect their behavior and mentality.
That matters because many people treat blurred or replaced backgrounds as if they were perfect shields. They are not. Room edges, moving objects, lighting changes, or segmentation errors can still reveal pieces of your home or workplace.
And then there is the human layer. In Zooming Into Video Conferencing Privacy and Security Threats, researchers collected more than 15,700 publicly available collage images from online video meetings and extracted information such as faces, usernames, age, gender, and sometimes full names. Their work shows that even when the call itself is secure, people inside the meeting can still capture, repost, and expose what they see.
That is the part no encryption setting can fully solve.
Real safety depends on controls, not just cryptography
The National Institute of Standards and Technology recommends basic but important protections for virtual meetings: use strong meeting passwords, restrict who can join, lock the meeting when everyone is present, limit screen sharing, encrypt recordings, and delete provider-hosted recordings when they are no longer needed.
These are not small settings. They are part of the actual safety model. Many video-call failures happen not because the encryption broke, but because the meeting was left open, the wrong person got the link, the host allowed careless sharing, or the recording lived forever in cloud storage.
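Those recommendations read naturally as a checklist. A sketch of auditing meeting settings against them, with hypothetical field names (real platforms expose these controls under different names and menus):

```python
# Hypothetical settings audit mirroring the NIST-style controls above.
from dataclasses import dataclass

@dataclass
class MeetingSettings:
    password_required: bool
    waiting_room: bool            # restricts who can join
    locked_after_start: bool
    screen_share_host_only: bool
    recordings_encrypted: bool
    recording_retention_days: int  # delete provider-hosted recordings eventually

def audit(s: MeetingSettings) -> list[str]:
    """Return the list of control gaps for a given meeting configuration."""
    gaps = []
    if not s.password_required:
        gaps.append("require a strong meeting password")
    if not s.waiting_room:
        gaps.append("restrict who can join")
    if not s.locked_after_start:
        gaps.append("lock the meeting once everyone is present")
    if not s.screen_share_host_only:
        gaps.append("limit screen sharing")
    if not s.recordings_encrypted:
        gaps.append("encrypt recordings")
    if s.recording_retention_days > 90:  # 90-day cutoff is an arbitrary example
        gaps.append("delete provider-hosted recordings no longer needed")
    return gaps
```

Run against a typical default configuration, such an audit usually returns a non-empty list, which is the point: these failures are configuration failures, not cryptographic ones.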
So, am I safe?
The most accurate answer is this: usually safer than you think on the network, but less private than you may assume at the platform, device, and human levels.
If you are worried about strangers intercepting the traffic, modern video calls are usually built to defend against that. If you are worried about the platform provider, the answer depends on whether the service uses ordinary in-transit encryption, optional end-to-end encryption, or client-side encryption. If you are worried about your browser, your microphone, your screen, your background, or what other participants may record and share, then the risks expand well beyond the transport layer.
That is why “safe” is too blunt a word. A better question is: which part of the call do I trust, and which part do I not?
The practical answer is simple. Use platforms that clearly explain their encryption model. Turn on E2EE or client-side encryption for sensitive conversations. Keep meeting links private. Lock meetings. Limit screen sharing. Treat mute and virtual backgrounds as helpful tools, not absolute guarantees. And assume that anything another participant can see may also be copied.
That is not paranoia. It is the technology answer, stated honestly.