April 26-May 2, 2026
AI’s Impact on Higher Education: An education technology analyst argues that AI is forcing those in higher education to have long-overdue conversations about what learning is and what it’s for. Rather than creating problems like grade inflation and transactional education, AI is exposing them, raising questions about whether institutions and educators will use this moment to address deeper problems or simply react to the technology. Read more.
New Pentagon Deals: The Pentagon announced additional deals with major tech companies including Amazon, Microsoft, and Nvidia. These companies agreed to allow the Defense Department to use their technology for “any lawful use.” The deals are the latest in an ongoing story about how AI companies are navigating the ethical limits (or lack thereof) of AI for military use, and what it means that many have chosen to partner with the Pentagon. Read more.
April 19-25, 2026
Maine Data Center Update: Gov. Janet Mills vetoed a bill that would have made the state the first in the country to temporarily ban new data center construction, citing the economic benefits of a specific project in Jay. The veto came despite strong bipartisan support for the bill, highlighting the tension between local economic interests and broader concerns about AI’s environmental and energy impacts. Read more.
Meta’s Workforce: Meta announced it is cutting 10% of its workforce to redirect resources to AI, while simultaneously tracking employee keystrokes and mouse movements across hundreds of websites to train its models. Together, the two stories show that as companies invest heavily in AI, workers bear the costs both through job losses and the erosion of workplace privacy. Read more:
- "Meta to Cut 10% of Work Force in A.I. Push," The New York Times
- "Meta is tracking employee keystrokes on Google, LinkedIn, Wikipedia as part of AI training initiative," CNBC
April 12-18, 2026
Jagged Intelligence: AI systems can solve Math Olympiad problems yet struggle with simple common sense questions. Researchers use the term “jagged intelligence” to describe these uneven capabilities, arguing it’s more useful to understand where AI does well and where it fails rather than to compare it to human intelligence. This is particularly important when anticipating its impact on jobs and work. Read more.
Academic Integrity: Rather than relying on surveillance and AI detection, one university teaching and learning center director argues that the best defense against AI cheating is designing better courses. When assignments are boring, high-stakes, and offer little feedback, students turn to AI not out of laziness but because the system prioritizes efficiency over learning. Read more.
April 5-11, 2026
Gen Z on AI: A new Gallup survey found that while more than half of Gen Z uses AI, hopeful attitudes have dropped sharply and nearly a third report that AI makes them angry. Young adults are particularly concerned about AI’s impact on creativity and critical thinking. A common reason for hesitancy around AI was the threat to entry-level jobs, an especially relevant concern for graduating college students. Read more.
Medical Misinformation: Scientists published two fake preprint articles about a fictitious medical condition, bixonimania, to test whether AI chatbots would treat it as real. Major chatbots presented it as a legitimate medical concern, and the fake papers were even cited in peer-reviewed literature. The experiment demonstrates how AI can amplify misinformation, especially when it is packaged as academic research, with real consequences for medical guidance and the scholarly record. Read more.
Claude Mythos: Anthropic built a new AI model it considers too risky to release publicly due to its ability to find security vulnerabilities in widely used software. Rather than releasing it, Anthropic is making it available to a coalition of tech companies to patch vulnerabilities. The story highlights the risks of relying on companies to self-govern as AI becomes more powerful, and the urgent need for government regulation. Read more.
March 29-April 4, 2026
AI Regulation: As a Trump-aligned political group called Innovation Council Action plans to spend $100 million to block state-level AI regulations, Maine is poised to become the first state to ban new data center construction, pausing large projects until 2027 to assess environmental and energy impacts. Together, these stories reflect the growing tension between federal deregulation efforts and state-level pushback against the AI boom. Read more:
- "New Political Group to Push Trump's A.I. Agenda in Midterms," The New York Times
- "Maine Is About to Become the First State to Ban New Data Centers," The Wall Street Journal
Claude Code Leak: Anthropic inadvertently leaked the underlying instructions it uses to direct Claude Code, its popular AI coding tool, giving competitors and developers details to replicate its features. Although the leak did not expose any customer data, it raises questions about whether open-source developers will use this information to copy Anthropic, accelerating AI development. Read more.
OpenAI Buys Tech-Focused Show: OpenAI has acquired TBPN, a tech-focused streaming show, in what is openly described as a marketing move to shape public perception of AI. While the show claims it will remain editorially independent, it is important to consider what it means when the companies being covered start buying the outlets covering them. Read more.