
ChatGPT Mac App Found Storing Conversations as Plain Text

OpenAI recently came under intense scrutiny after it was uncovered that its ChatGPT Mac app was saving user conversations in plain text, a discovery that startled users and experts alike.

Short Summary:

  • OpenAI’s ChatGPT Mac app was saving chats as plain text.
  • Pedro José Pereira Vieito highlighted the security gap.
  • OpenAI has since released an encrypted version of the app.

In recent tech news, OpenAI’s much-anticipated ChatGPT Mac app faced a rough start due to privacy concerns. Pedro José Pereira Vieito, a developer, raised alarms when he discovered that the app saved user conversations in plain text in an unprotected location on the device. This lack of encryption meant that any other application or process on the same macOS system could potentially access these conversations without user consent.

“I was curious about why OpenAI opted out of using the app sandbox protections and ended up checking where they stored the app data,” Vieito revealed in an interview with The Verge. He further developed an app named “ChatGPTStealer” to demonstrate how easy it was to access these conversations.

The Mac app, available solely through OpenAI’s website, bypassed Apple’s sandboxing requirements, which are mandated for software distributed via the Mac App Store. The decision to skip these built-in protections raised eyebrows and led to swift action from the tech community and users alike.

After receiving feedback from The Verge, OpenAI acted promptly by rolling out an update to address the glaring issue. Taya Christianson, a spokesperson for OpenAI, told The Verge, “We are aware of this issue and have shipped a new version of the application which encrypts these conversations. We’re committed to providing a helpful user experience while maintaining our high-security standards as our technology evolves.”

“After downloading the update (v1.2024.171), Vieito’s app no longer works, and conversations are now encrypted, alleviating the immediate privacy risks,” reported Jay Peters from The Verge.

Despite the update, the app remains non-sandboxed. The choice to distribute the app outside the Mac App Store means that OpenAI avoids the stringent security measures required for apps on the App Store. Users are advised to update their app to the latest version to ensure that their conversations are encrypted.

It’s worth noting that sandboxing in macOS isolates an app and its data within a restricted environment, preventing unauthorized access from other apps. This system, which is mandatory for iOS apps, was introduced as an optional feature for macOS in OS X Lion (2011). Later, macOS Mojave added a further security layer, prompting apps to request user permission before accessing data outside their sandbox.
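One practical consequence of sandboxing is where an app’s data lives: sandboxed Mac apps write into a per-app container under `~/Library/Containers/<bundle-id>/`, while non-sandboxed apps can write anywhere the user can. A minimal sketch of a path check based on that storage convention (the bundle IDs and paths here are illustrative, and passing the check does not by itself prove an app is sandboxed):

```python
from pathlib import Path

def is_in_sandbox_container(path: Path, bundle_id: str,
                            home: Path = Path.home()) -> bool:
    """Return True if `path` lies inside the per-app container that
    macOS assigns to `bundle_id` (~/Library/Containers/<bundle-id>/).

    This only checks the storage convention; it is a heuristic,
    not a verification of the app's actual entitlements.
    """
    container = home / "Library" / "Containers" / bundle_id
    try:
        # relative_to() raises ValueError when path is not a subpath.
        path.resolve().relative_to(container.resolve())
        return True
    except ValueError:
        return False

# The location Vieito found is outside any sandbox container:
print(is_in_sandbox_container(
    Path.home() / "Library" / "Application Support" / "com.openai.chat",
    "com.openai.chat",
))  # → False
```

By this convention, data stored under `Application Support` rather than `Containers` is reachable by any other process running as the same user, which is exactly the exposure described above.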

“By looking at the preference and cache files stored by the app, I noticed that all conversations registered in the app are kept saved in plain text,” explained Vieito. “Anyone can find the conversations from the ChatGPT app by going to Library > Application Support > com.openai.chat.”
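Because those files sat in a standard, unprotected directory, no exploit was needed to read them: ordinary filesystem access under the same user account sufficed. A minimal sketch of that point, using a throwaway directory and an invented file name in place of the app’s real data folder:

```python
import tempfile
from pathlib import Path

def read_plaintext_files(data_dir: Path) -> dict[str, str]:
    """Read every regular file in a directory, as any process running
    as the same user can. Data stored outside a sandbox container
    needs no entitlements or special privileges to access -- that is
    the core of the exposure Vieito demonstrated.
    """
    return {f.name: f.read_text() for f in data_dir.iterdir() if f.is_file()}

# Demo with a temporary directory standing in for the app's real data
# folder (~/Library/Application Support/com.openai.chat); the file name
# and contents are illustrative, not the app's actual format.
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "conversation-1234.json").write_text(
    '{"role": "user", "content": "hello"}'
)
print(read_plaintext_files(demo_dir))
```

With encryption in place, the same read would still succeed, but it would yield ciphertext rather than readable conversations, which is what the v1.2024.171 update addressed.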

The implications are significant, especially for a widely used tool like ChatGPT, which often handles sensitive user data. While OpenAI states that user data is collected to improve its language models, the additional risk of this data being accessed by malware or other processes is concerning.

OpenAI’s partnership with Apple to integrate ChatGPT capabilities into Siri queries was recently announced, with Apple showcasing stringent security measures during WWDC. The lax security of OpenAI’s Mac app stood in stark contrast.

Vieito’s discovery serves as a reminder of the necessary balance between innovation and security in AI applications. On a broader scale, ensuring that AI applications adhere to robust security protocols is crucial, especially given the rapid pace of advancement in AI-assisted writing.

The controversy brings an essential discussion to the forefront: AI ethics in software deployment. As AI becomes more integrated into our daily tools, safeguarding user privacy becomes paramount.

While OpenAI has now moved to encrypt conversations, users and developers alike must remain vigilant. Encryption alone isn’t a foolproof solution if other security mechanisms are not in place.

For AI application developers, such as those at Autoblogging.ai, the wide range of potential uses brings an obligation to prioritize both innovation and security. As we continue to explore and deploy AI solutions, a balanced approach is crucial.

In conclusion, while the discovery of this security flaw in OpenAI’s ChatGPT Mac app cast a temporary shadow, the company’s quick response demonstrates its commitment to user security. Moving forward, both developers and users must prioritize and demand more secure AI applications.