Technology
Apple's New Siri Powered by Gemini: Unpacking the Invisible Catch
thestreet.com
January 21, 2026

AI-Generated Summary
Apple is integrating Google's Gemini models into its next-generation Siri, a move described as a multi-year collaboration. While Apple emphasizes data privacy through its Private Cloud Compute architecture, experts raise concerns about "behavioral sovereignty." The core issue is who controls Siri's decision-making logic, potentially inheriting biases and limitations from Google's models. Transparency regarding the third-party model's involvement is highlighted as crucial.
Apple and Google describe their Gemini deal as a multi‑year collaboration that will put Google’s models and cloud technology at the core of Apple’s next generation of foundation models, including a more personalized Siri arriving later this year.
In their joint statement, the companies stress that Apple Intelligence “will continue to run on Apple devices and Private Cloud Compute,” the architecture Apple pitches as its way to keep sensitive requests off generic public clouds.
That framing makes this sound like a plumbing change, not a shift in power.
TheStreet’s interview with Gal Nakash, chief product officer (CPO) of cybersecurity firm Reco, suggests something more structural is happening behind the scenes.
He argues that once Siri’s brain comes from Gemini, the real stakes move from where data sits to who controls the behavior of the model you talk to each day.
Apple’s version of the Siri upgrade
Apple is not pretending this is a minor tweak.
The company concluded after “careful evaluation” that Google’s technology offered “the most robust foundation” for Apple Foundation Models, according to the joint announcement covered by CNBC. Apple is framing the deal as a technical choice that accelerates its AI roadmap without abandoning its privacy posture, the Campus Technology outlet said.
The companies repeat the same line on privacy: Apple Intelligence, including the new Siri, will run on Apple devices or within Apple’s Private Cloud Compute, which is marketed as an Apple‑controlled environment rather than a generic Google data center.
That insistence on Private Cloud Compute is meant to calm users already nervous about how much data large language models can ingest and retain, Bitdefender noted in its coverage of the deal.
Apple’s message is simple. Gemini gives Siri better answers, while Apple’s walls keep your data safe. Nakash’s view is that those walls are only as strong as the weakest, least visible link.
“Private Cloud Compute is only as private as the weakest link”
When asked what would convince him that Gemini‑powered Siri is actually private, Nakash didn’t start with generic reassurances.
He listed concrete controls he would want to see inside Apple’s implementation.
“Private Cloud Compute is only as private as the weakest link,” he said, adding that if Google keeps any path to usage data “for model improvement or debugging, the privacy guarantee fundamentally breaks down.”
It will be critical to see how Apple actually implements the “walled” private data in the cloud and the access controls around it, Ciphero CEO, CTO, and co-founder Saoud Khalifah told TheStreet. He warned that models improve by collecting data in a loop and that reinforcement-learning pipelines are “where private information can leak” if not constrained.
The controls Nakash wants line up more with an enterprise security audit than a consumer feature checklist:
Cryptographic attestation that proves each Gemini inference really runs on Apple’s PCC, not silently routed to Google infrastructure (see the sketch below).
Model weight isolation, where Apple receives frozen Gemini weights it can inspect, instead of a live API endpoint Google can alter at will.
A zero‑knowledge architecture that gives Google no logs, prompts, or telemetry from real users.
Independent audits of the PCC environment, focused on whether prompts and responses ever leave Apple’s systems in practice.
Contractual penalties with real financial teeth for any unwanted access or leakage.
He also wants a transparent playbook for model updates, so it’s clear when Apple can tune behavior on its own and when changes require Google’s involvement.
That is a higher bar than the one laid out in public statements so far, which lean heavily on Apple’s existing privacy reputation rather than verifiable controls.
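To make the first control on that list concrete, here is a minimal sketch of what device-side verification of a PCC attestation receipt could look like. The receipt fields, key distribution, and signing scheme are assumptions for illustration only; neither Apple nor Google has published a schema for Gemini-inside-PCC responses.

import Foundation
import CryptoKit

// Hypothetical attestation receipt returned alongside a PCC response.
// Field names and the signing scheme are illustrative assumptions,
// not a published Apple or Google format.
struct AttestationReceipt {
    let responseDigest: Data                    // SHA-256 of the response payload
    let nonce: Data                             // client-supplied nonce, prevents replay
    let nodeKey: P256.Signing.PublicKey         // key claimed to belong to a PCC node
    let signature: P256.Signing.ECDSASignature  // signature over digest + nonce
}

// Returns true only if the response bytes match the receipt, the nonce is ours,
// the signing key is one pinned as an Apple PCC node key, and the signature
// verifies. Any failure means the inference cannot be shown to have run
// inside Apple's environment.
func isVerifiedPCCResponse(payload: Data,
                           receipt: AttestationReceipt,
                           clientNonce: Data,
                           pinnedNodeKeys: [P256.Signing.PublicKey]) -> Bool {
    guard Data(SHA256.hash(data: payload)) == receipt.responseDigest else { return false }
    guard clientNonce == receipt.nonce else { return false }
    guard pinnedNodeKeys.contains(where: { $0.rawRepresentation == receipt.nodeKey.rawRepresentation }) else { return false }
    return receipt.nodeKey.isValidSignature(receipt.signature, for: receipt.responseDigest + receipt.nonce)
}

The second item on the list, frozen model weights, would add a similar check in spirit: pin a digest of the weights Apple received and refuse inference if the running model’s digest changes without a documented update.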
Apple’s third-party model for Siri risks loss of “behavioral sovereignty”
Most consumer privacy debates focus on where data sits and who can read it. Nakash thinks the larger risk in this case sits one layer up. “The single biggest risk is loss of behavioral sovereignty,” he said when asked about Apple leaning on Gemini for Siri.
Even if Apple keeps Gemini workloads inside its own infrastructure and never pipes raw data back to Google, it is still delegating core decision‑making logic to an external system.
Nakash believes that creates a cascade of problems.
Apple cannot fully predict how Siri will behave in edge cases because the underlying reasoning comes from Gemini’s training, not Apple’s own stack.
Model biases, hallucinations, and refusal patterns are inherited from Google’s training choices and safety rules.
Apple’s ability to fine‑tune behavior for specific cultural or legal contexts is constrained by what the Gemini architecture allows.
If Gemini develops problematic behavior or security issues, Apple depends on Google’s release cycle to ship a fix.
“You can audit data flows, but you can’t audit the black‑box reasoning that determines user experience,” he said.
“[Apple doesn’t] control the biases of the model creators and, in result, how it thinks,” Khalifah said, arguing that this can produce “problematic experiences that do not align with Apple’s core values.”
That framing lines up with concerns raised in broader AI governance work, where researchers argue that model behavior can become a form of infrastructure risk in its own right. For users, it means the privacy story may hold while the personality, politics, and safety boundaries of Siri quietly shift.
What regulators should actually look at in the Apple-Google partnership
Regulators will inevitably worry about data flows and market power when they see Apple and Google tying up around consumer AI.
Google’s role in Siri could echo its lucrative default‑search placement on the iPhone, a relationship that drew Justice Department scrutiny in the U.S., CNET noted in its coverage.
Nakash would start somewhere more basic: disclosure.
“Regulators should focus first on transparency and disclosure – not because it solves everything, but because it’s foundational,” he said.
His checklist for basic transparency looks like this; a sketch of what such a disclosure might contain follows the list.
Clear disclosure that Siri uses a third‑party model, including which company and which version.
Plain‑language explanations of when Siri relies on Gemini versus Apple’s own models.
Accessible descriptions of what that split means for privacy and data handling in everyday use.
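As a rough illustration of what that disclosure could look like per request, here is a hypothetical record. Every field and name below is an assumption made for illustration; no such format has been announced by Apple or Google.

import Foundation

// Hypothetical per-request disclosure record; purely illustrative.
struct SiriModelDisclosure: Codable {
    enum Provider: String, Codable {
        case appleOnDevice      // handled entirely by Apple's on-device models
        case applePrivateCloud  // Apple's own models inside Private Cloud Compute
        case googleGemini       // a Gemini model running inside Private Cloud Compute
    }

    let provider: Provider         // which company's model produced the answer
    let modelVersion: String       // which version of that model
    let leftTheDevice: Bool        // whether the request was processed off-device
    let retainedForTraining: Bool  // whether any prompt or response was stored for model improvement
}

// Example: a request answered by Gemini inside PCC, with nothing retained.
let disclosure = SiriModelDisclosure(provider: .googleGemini,
                                     modelVersion: "illustrative-version-string",
                                     leftTheDevice: true,
                                     retainedForTraining: false)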
In Nakash’s view, data‑flow questions are partly addressed by Apple’s Private Cloud Compute architecture, at least on paper, and antitrust issues around market power sit within existing search‑default cases.
Model oversight still matters, but he argues it is impossible to regulate fairly without basic transparency about who is supplying which model and when.
“Without disclosure, users can’t make informed choices, regulators can’t audit compliance, and competitors can’t challenge anti‑competitive behavior,” he said. That is the governance gap the Gemini deal risks widening if it stays mostly invisible to end users.
Practical steps for privacy‑conscious iPhone users
For someone who wants smarter Siri features but hates the idea of a third‑party model sitting between them and Apple, Nakash offers a short, specific playbook.
First, he would check Siri’s data sources.
“Go to Settings > Siri & Search and review what data sources Siri can access,” he said, pointing to Messages, Mail, and Contacts as examples. If any of those feel too sensitive to risk, turn off Siri access and keep those apps out of Gemini’s context window.
Second, he would look for any option that limits cloud processing.
“If Apple provides an option to use only on‑device Siri, likely more limited but using Apple’s own models, switch to that mode,” he said, and watch for toggles tied to Siri Suggestions or Private Cloud Compute.
Third, he would make a habit of cleaning up Siri history.
The path he recommends is Settings > Siri & Search > Siri History, with regular deletions as a simple hedge if the architecture doesn’t end up being as airtight as promised.
“While Apple claims data doesn’t go to Google, limiting what’s stored reduces your exposure if the architecture isn’t as private as advertised,” he said.
Khalifah also recommends turning off any settings that let assistants gather extra personal information “because that will be used for ads and recommendations,” and using any available “incognito” or privacy modes when testing new AI features.
He said users should assume their data “can be retained for many years” and adjust their Siri habits accordingly.
