SunBrief#78: Musk admits Grok was trained on ChatGPT
AI spots deadly cancer years early, Claude gets enterprise security powers, and Manus launches always-on cloud AI workflows.

Welcome to the SunBrief
Today in SunBrief 🌞
You Can't Automate Good Judgment
Musk Says xAI Trained Grok Using OpenAI Models
AI Catches Deadly Cancer Years Early
Stock Updates
Pentagon Brings Big Tech AI to Classified Networks
AI Highlights of the Week
Too Important to Miss
You Can't Automate Good Judgment
AI promises speed and efficiency, but it’s leaving many leaders feeling more overwhelmed than ever.
The real problem isn’t technology.
It’s the pressure to do more with less — without losing what makes your leadership effective.
BELAY created the free resource 5 Traits AI Can’t Replace & Why They Matter More Than Ever to help leaders pinpoint where AI can help and where human judgment is still essential.
At BELAY, we help leaders accomplish more by matching them with top-tier, U.S.-based Executive Assistants who bring the discernment, foresight, and relational intelligence that AI can’t replicate.
That way, you can focus on vision. Not systems.
Musk Says xAI Trained Grok Using OpenAI Models
Testimony in Musk v. Altman trial confirms xAI used distillation techniques on OpenAI systems
Elon Musk testified in federal court that xAI partly used OpenAI models to train Grok through “distillation,” a method where one AI system learns by querying another model and using its outputs to improve its own performance.
Key Points:
Courtroom admission: Asked whether xAI used OpenAI models to train Grok, Musk said distillation is common across AI companies and answered, “Partly.”
Distillation controversy: OpenAI and Anthropic have recently criticized third-party distillation, especially by Chinese firms building cheaper open-weight models from frontier chatbot outputs.
Industry irony: Musk’s admission suggests U.S. labs may also rely on similar techniques, even as leading AI firms try to prevent others from copying their models through mass querying.
Competitive pressure: xAI launched in 2023, years after OpenAI, making distillation a shortcut to close the capability gap with larger, more established labs.
Legal gray area: Distillation may not be explicitly illegal, but it can violate model providers’ terms of service if done through unauthorized or systematic querying.
AI ranking remark: Musk also ranked current AI leaders as Anthropic first, followed by OpenAI, Google, and Chinese open-source models, while describing xAI as much smaller.
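Distillation can be sketched in miniature: a "student" model never sees the teacher's weights or training data, only the teacher's outputs on queries it sends. The toy below is a hypothetical, pure-Python illustration of that idea, not xAI's actual pipeline; the `teacher` function stands in for a frontier model's API, and the student is a one-layer model fit to the teacher's soft outputs.

```python
import math
import random

def teacher(x):
    # Stand-in for a frontier model's API: returns a soft probability.
    # (Hypothetical; real distillation queries an actual model endpoint.)
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

def student(x, w, b):
    return 1 / (1 + math.exp(-(w * x + b)))

# 1. Query the teacher to collect (input, soft output) pairs.
random.seed(0)
queries = [random.uniform(-3, 3) for _ in range(200)]
soft_labels = [teacher(x) for x in queries]

# 2. Fit the student to the teacher's outputs via SGD on squared error.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in zip(queries, soft_labels):
        p = student(x, w, b)
        grad = (p - y) * p * (1 - p)  # gradient of squared error w.r.t. the logit
        w -= lr * grad * x
        b -= lr * grad

# The student should approach the teacher's parameters (2.0, -1.0)
# without ever seeing them directly.
print(f"student w={w:.2f}, b={b:.2f}")
```

The key point the example makes concrete: all the student needs is query access, which is why labs try to police "mass querying" through terms of service rather than technical barriers.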
Why It Matters:
This testimony reveals how frontier AI labs may quietly learn from each other. If distillation is common, it challenges the idea that top models rely only on proprietary data and compute, while raising questions about fair competition and whether AI leaders can stop copying when similar tactics may be widespread.
Does Musk’s admission change how you view xAI’s progress with Grok?
AI Catches Deadly Cancer Years Early
REDMOD identifies subtle CT scan signals up to 3 years early, nearly doubling specialist detection rates
Mayo Clinic researchers validated an AI model called REDMOD that can detect early signs of pancreatic cancer on routine abdominal CT scans up to three years before clinical diagnosis, even when scans were originally read as normal.
Key Points:
Earlier cancer detection: REDMOD identified 73% of prediagnostic pancreatic cancers at a median of about 16 months before diagnosis.
Outperformed specialists: The AI nearly doubled the detection rate of specialists reviewing the same normal-appearing scans without AI help.
Stronger at earlier stages: For scans taken more than two years before diagnosis, the AI found nearly three times as many early cancers that would otherwise be missed.
Validated across settings: Researchers tested the model on nearly 2,000 CT scans from multiple institutions, imaging systems, and protocols, showing reliable performance beyond one dataset.
How it works: REDMOD analyzes hundreds of subtle imaging features related to tissue texture and structure, detecting biological changes before a visible tumor appears.
Next clinical step: Mayo Clinic is advancing the work through AI-PACED, a prospective study testing how AI-guided detection can be integrated into care for higher-risk patients.
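REDMOD's actual feature set is not public, but the "hundreds of subtle imaging features" it analyzes are in the spirit of radiomics-style texture statistics. The toy sketch below (entirely illustrative, with made-up data) computes two such measures, intensity variance and neighbor contrast, over a small 2D patch: heterogeneous tissue scores higher on both even when no discrete tumor is visible.

```python
# Toy radiomics-style texture statistics (hypothetical illustration;
# not REDMOD's actual features). A "patch" is a small 2D grid of
# CT pixel intensities.

def mean(values):
    return sum(values) / len(values)

def variance(patch):
    # Spread of intensities: heterogeneous tissue -> higher variance.
    pixels = [v for row in patch for v in row]
    m = mean(pixels)
    return mean([(v - m) ** 2 for v in pixels])

def neighbor_contrast(patch):
    # Mean squared difference between horizontally adjacent pixels,
    # a crude stand-in for co-occurrence "contrast" features.
    diffs = []
    for row in patch:
        for a, b in zip(row, row[1:]):
            diffs.append((a - b) ** 2)
    return mean(diffs)

# Made-up patches: visually "normal" smooth tissue vs. subtly rough tissue.
smooth = [[10, 10, 11], [10, 11, 10], [11, 10, 10]]
rough = [[10, 40, 12], [35, 11, 38], [13, 36, 10]]

print(variance(smooth), variance(rough))                    # rough is higher
print(neighbor_contrast(smooth), neighbor_contrast(rough))  # rough is higher
```

A real model aggregates many such signals across the whole organ and learns which combinations predict later disease, which is why it can flag scans that a human reader correctly calls tumor-free.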
Why It Matters:
Pancreatic cancer is often detected too late, when treatment options are limited. If proven in real-world care, REDMOD could help doctors spot risk earlier using scans patients already receive, giving them more time for monitoring, diagnosis, and potentially curative treatment.
Should hospitals use AI as a second reader for high-risk cancer scans?
Stock Updates

Pentagon Brings Big Tech AI to Classified Networks
Defense Department expands classified-network AI access after earlier deals with Google, SpaceX, and OpenAI
The U.S. Defense Department has signed new agreements with Nvidia, Microsoft, Amazon Web Services, and Reflection AI to deploy their AI technologies and models on classified military networks for lawful operational use.
Key Points:
New AI vendors added: The Pentagon expanded its AI vendor base with deals covering Nvidia, Microsoft, AWS, and Reflection AI.
Classified network deployment: The technologies will be deployed in Impact Level 6 and 7 environments, which are high-security systems used for national-security-critical data.
AI-first military push: The DOD says the deals support its goal of becoming an AI-first fighting force, improving data synthesis, situational awareness, and warfighter decision-making.
Vendor diversification: The move follows agreements with Google, SpaceX, and OpenAI, and comes after the Pentagon’s public dispute with Anthropic over usage restrictions.
Avoiding lock-in: The Pentagon said it wants an architecture that prevents AI vendor lock-in and gives the military long-term flexibility.
GenAI.mil adoption: More than 1.3 million DOD personnel have used the Pentagon’s secure generative AI platform, GenAI.mil, mainly for approved cloud-based tasks like research, drafting, and data analysis.
Why It Matters:
The Pentagon is building a multi-vendor AI stack for classified military work, showing AI is moving beyond office tasks into secure defense systems while avoiding reliance on one AI lab.
Should Big Tech AI models be deployed on classified military networks?
AI Highlights of the Week
Google DeepMind Introduces AI Co-Clinician for Healthcare
Google DeepMind announced an AI co-clinician research initiative to help doctors deliver better care under clinical supervision. The system can support clinicians with medical evidence, medication reasoning, and even live audio-video telemedicine tasks.
GPT-5.5 Shows Major Progress in Cybersecurity Tests
AISI says GPT-5.5 is one of the strongest models it has tested on advanced cyber tasks. The model completed a 32-step corporate network attack simulation end-to-end in 2 of 10 attempts.
Anthropic Opens Claude Security Beta for Enterprise Teams
Anthropic launched Claude Security in public beta for Claude Enterprise customers to scan code, validate findings, and help fix vulnerabilities. The tool runs on Claude Opus 4.7 and gives teams scheduled scans, confidence ratings, patch workflows, and audit exports.
Manus Launches Cloud Computer for Always-On AI Workflows
Manus introduced Cloud Computer, a dedicated cloud machine that can run bots, scripts, databases, and apps 24/7. It removes the need for server setup or coding skills, letting users build persistent automations that keep running even when their laptop is off.
Too Important to Miss
Last Week’s Poll Result
Will this trial damage public trust in OpenAI?
No, users won’t care → 48.65%
Yes, significantly → 29.73%
Somewhat → 21.62%

Do stronger cyber and bio safeguards make you more confident in powerful AI models?
Somewhat, but risks remain → 46.43%
Yes, safety matters as much as capability → 35.71%
No, powerful models are still concerning → 17.86%

Does agentic Copilot make Microsoft Office feel like a true AI workspace?
Somewhat, but still early → 40.54%
Yes, this is a major shift → 37.84%
No, it is still just assistance → 21.62%

Feedback
We’d love to hear from you! How did you feel about today's SunBrief? Your feedback helps us improve and deliver the best possible content.
Know someone who may be interested?
And that's a wrap on today’s SunBrief!