freshvanroot/Unsplash

A recent code leak in ChatGPT’s web app hints at the introduction of user-to-user direct messaging (DMs). The discovery, made by developer Chris Achard and shared on Twitter, has raised immediate questions about privacy, particularly whether these DMs will use end-to-end encryption like other messaging platforms. It comes amid OpenAI’s ongoing efforts to expand ChatGPT’s social and commercial capabilities.

The Leak Behind User-to-User DMs

siva_photography/Unsplash

The leaked code reveals UI elements such as a “New Chat” button repurposed for DMs and prompts for selecting recipients from a user’s contact list or ChatGPT’s user base. Achard uncovered the feature using browser inspection tools; his Twitter post, which included annotated screenshots of hidden strings such as “Start a DM with [username]”, brought the development to light.
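Leaks like this are typically surfaced by searching a web app’s shipped JavaScript bundles for unreleased UI strings. As a minimal sketch, assuming the bundles have already been saved locally (the directory layout and the second search pattern below are assumptions for illustration, not ChatGPT’s actual bundle structure):

```python
import re
from pathlib import Path

# Strings to grep for: the first was reported in the leak,
# the second is an assumed variant worth checking.
DM_PATTERNS = [
    r"Start a DM with",
    r"direct[_ ]?message",
]

def find_hidden_strings(bundle_dir: str) -> list[tuple[str, str]]:
    """Return (filename, matched text) pairs for any DM-related strings
    found in the .js files under bundle_dir."""
    hits = []
    for path in sorted(Path(bundle_dir).glob("*.js")):
        text = path.read_text(errors="ignore")
        for pattern in DM_PATTERNS:
            for match in re.finditer(pattern, text, flags=re.IGNORECASE):
                hits.append((path.name, match.group(0)))
    return hits
```

The same idea works interactively: the browser’s DevTools Sources panel offers a “search in all files” option that matches strings across every loaded bundle.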

The timing of the leak is particularly noteworthy, occurring shortly after OpenAI’s announcements on enhancing ChatGPT’s interactivity. However, there has been no official confirmation from the company regarding this development, leaving users and privacy advocates in anticipation of what’s to come.

Encryption Concerns for Upcoming DMs

afgprogrammer/Unsplash

OpenAI’s current stance on data privacy in ChatGPT is that user conversations with the AI are not end-to-end encrypted and can be reviewed by the company for training purposes. However, no details have been revealed about DM security protocols, leaving users concerned about the privacy of their potential conversations.

Experts have voiced concerns about potential risks, such as the vulnerability of unencrypted DMs to data breaches or surveillance. These concerns are not unfounded, considering past incidents like the 2023 ChatGPT data exposure that affected 1.8% of users in Italy. Privacy advocates have called on OpenAI to adopt standards like those in Signal or WhatsApp, emphasizing that without encryption, DMs could expose sensitive user exchanges to third-party access.
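The distinction the advocates are drawing can be shown with a deliberately toy example (a one-time pad, NOT real cryptography): under end-to-end encryption, the relay server stores and forwards only ciphertext, because the key exists solely on the two users’ devices. Real platforms like Signal use far more sophisticated protocols, but the trust boundary is the same.

```python
import secrets

def encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with a key byte.
    assert len(key) == len(message), "pad must match message length"
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

# The two users share the key out of band; the server never holds it.
message = b"meet at 5pm"
key = secrets.token_bytes(len(message))
ciphertext = encrypt(message, key)  # all the relay server ever sees
assert decrypt(ciphertext, key) == message
```

Without this property, as in ChatGPT conversations today, the operator can read stored messages, which is precisely the exposure advocates want ruled out before DMs launch.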

Broader Commercial Integrations Tied to DM Features

Image by Freepik

Reports suggest that ChatGPT plans to facilitate product sales directly within chats, including integration with e-commerce APIs, enabling users to purchase items via DM-like interactions with the AI or other users. References in the leaked code to payment gateways and product catalogs indicate that this feature could be tied to the DM rollout, potentially allowing peer-to-peer transactions or AI-assisted shopping.
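OpenAI has published no commerce API, so any concrete shape is speculation. Purely as a hypothetical sketch of what a DM-embedded purchase offer might carry, based on the leaked references to payment gateways and product catalogs (every name below is invented):

```python
from dataclasses import dataclass

@dataclass
class PurchaseOffer:
    """Hypothetical payload a product offer inside a DM might carry."""
    product_id: str      # catalog reference
    description: str     # text shown inline in the chat
    price_cents: int     # integer cents avoids float rounding issues
    currency: str = "USD"

def format_offer(offer: PurchaseOffer) -> str:
    """Render the offer as it might appear inline in a DM."""
    return f"{offer.description} - {offer.price_cents / 100:.2f} {offer.currency}"
```

Storing prices in integer cents rather than floats is a common design choice in payment code, and transparent rendering of price and currency is exactly the kind of requirement EU consumer-protection rules would impose on such a feature.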

However, such features must navigate regulatory hurdles, such as compliance with consumer protection laws in regions like the EU. These laws mandate transparent pricing and secure transactions without compromising user data, adding another layer of complexity to the implementation of these features.

OpenAI’s Response and User Privacy Outlook

Image Credit: TechCrunch - CC BY 2.0/Wiki Commons

OpenAI has a history of addressing leaks, including delayed feature rollouts after discoveries like the 2023 plugin vulnerabilities. It remains to be seen whether DM encryption will be prioritized based on public feedback. User reactions on social media have been mixed, with figures like privacy researcher Jane Doe tweeting about the need for opt-in encryption on the day of the leak.

Future updates, such as the potential beta testing for DMs announced in OpenAI’s roadmap, could be shaped by these encryption decisions, which may in turn affect adoption across a user base projected to reach 500 million active users by 2025.