18 important items were selected from 37 candidates
- CERT Discloses Six Critical CVEs in dnsmasq ⭐️ 8.0/10
- DuckDB Introduces Quack Protocol for Remote and Scalable Access ⭐️ 8.0/10
- Canada’s Bill C-22 Revisits Controversial Surveillance Laws ⭐️ 8.0/10
- Google to Launch “Googlebook” to Replace Chromebook, with Deep Gemini AI Integration ⭐️ 8.0/10
- Samsung Union Protest Cuts Chip Production, Threatens Global Supply Chain ⭐️ 8.0/10
- Needle: A 26M Model for Efficient On-Device Tool Calling ⭐️ 7.0/10
- Major News Orgs Urged to Maintain Wayback Machine Access ⭐️ 7.0/10
- Rendering Sky, Sunsets, and Planets with Graphics Programming ⭐️ 7.0/10
- Obsidian Unveils New Plugin Ecosystem with Automated Reviews ⭐️ 7.0/10
- Bambu Lab Criticized for Breaking Open Source Social Contract ⭐️ 7.0/10
- llm 0.32a2 Alpha Supports OpenAI’s Responses API for Interleaved Reasoning ⭐️ 7.0/10
- South Korea Proposes National Dividend from AI & Semiconductor Profits ⭐️ 7.0/10
- Canvas LMS Hacked, Disrupting US Schools During Finals Week ⭐️ 7.0/10
- China Regulator Conditionally Approves Tencent’s Acquisition of Ximalaya ⭐️ 7.0/10
- Anthropic Rejects Chinese Think Tank’s Access to Latest AI Models ⭐️ 7.0/10
- US Commerce Dept. Removes AI Safety Testing Agreement Details ⭐️ 7.0/10
- SpaceX and Google Discuss Launching Orbital Data Centers ⭐️ 7.0/10
- Google Launches Gemini Intelligence AI Features for Pixel and Samsung Devices ⭐️ 7.0/10
CERT Discloses Six Critical CVEs in dnsmasq ⭐️ 8.0/10
The CERT Coordination Center has disclosed six critical Common Vulnerabilities and Exposures (CVEs) in dnsmasq, a widely used DNS and DHCP server. The flaws, as noted in community discussions, include a heap out-of-bounds write reachable via DNS queries, an infinite loop that causes denial of service, and a buffer overflow in DHCP request handling. They pose significant risks to networks that rely on dnsmasq for critical services and have reignited community debate over adopting memory-safe programming languages.
hackernews · chizhik-pyzhik · May 12, 18:12
Background: Dnsmasq is a lightweight network service tool that provides DNS, DHCP, and other functions for small networks, as described on its official website. Memory-safe programming languages like Rust and Go are designed to prevent memory-related security bugs, which are prevalent in languages such as C and C++.
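Until patched packages land, operators can reduce exposure with conservative settings; the snippet below is an illustrative hardening sketch using documented dnsmasq.conf options, not a fix for the CVEs themselves.

```ini
# Illustrative dnsmasq.conf hardening sketch (not a patch for the CVEs)
bind-interfaces        # listen only on explicitly configured interfaces
local-service          # answer DNS queries only from directly attached subnets
stop-dns-rebind        # reject upstream answers pointing at private address ranges
dns-forward-max=150    # cap concurrent forwarded DNS queries
```

Restricting who can reach the daemon does not remove the underlying bugs, but it shrinks the set of hosts that can deliver a malicious query or DHCP request.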
Discussion: Community members express urgent concern over the vulnerabilities. Some advocate moving to memory-safe languages like Rust or Go; others criticize Linux distributions such as Debian for backporting patches instead of updating to newer releases. Users also ask about the patch status of projects like OpenWrt and mention alternatives such as MaraDNS.
Tags: #security, #CVE, #dnsmasq, #memory-safety, #Linux-distributions
DuckDB Introduces Quack Protocol for Remote and Scalable Access ⭐️ 8.0/10
DuckDB announced Quack, its native client-server protocol, on May 12, 2026. This protocol enables remote connections to a DuckDB instance and supports multiple concurrent writers, a key step toward horizontal scaling. This addresses a major practical limitation of DuckDB, which was previously accessible only as an embedded library. It transforms DuckDB from a purely local analytical engine into one that can be shared across teams and applications, broadening its utility for internal platforms and collaborative data work. The protocol is designed to be simple to set up and is built on HTTP, aligning with DuckDB’s philosophy. Its focus on speed is intended to support a wide range of workloads, from interactive queries to bulk data operations.
hackernews · aduffy · May 12, 17:54
Background: DuckDB is an open-source, in-process columnar database management system optimized for online analytical processing (OLAP). It is often described as the ‘SQLite for analytics’ due to its embedded nature and high performance on complex analytical queries. Unlike traditional client-server databases, it was originally designed to run within a host process, like a Python or Node.js application.
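The embedded model described above can be illustrated with Python’s built-in sqlite3, whose in-process pattern DuckDB’s own client API mirrors (duckdb.connect() in place of sqlite3.connect()); this is an analogy sketch, not DuckDB code. Quack extends the same connection model across the network.

```python
import sqlite3

# In-process database: the engine runs inside the host application,
# with no server to deploy; DuckDB's embedded API follows the same shape.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user TEXT, amount REAL)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("a", 1.5), ("b", 2.5), ("a", 3.0)])
total = con.execute("SELECT SUM(amount) FROM events").fetchone()[0]
print(total)  # 7.0
```

With Quack, the connect step would target a remote host instead of in-process memory, while the query surface stays the same.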
Discussion: The community reception is largely positive, with users seeing Quack as a missing piece that completes DuckDB’s vision to be the embedded standard for analytics, similar to SQLite’s role. Specific comments highlight its immediate utility in solving problems like horizontal scaling for internal apps and enabling remote UI access to locally running database instances.
Tags: #database, #analytics, #protocol, #DuckDB, #client-server
Canada’s Bill C-22 Revisits Controversial Surveillance Laws ⭐️ 8.0/10
Canada has proposed Bill C-22, which reinstates previously controversial surveillance measures, including requirements for mandatory data retention and the potential for encryption backdoors in digital services. If enacted, the bill could force major encrypted messaging platforms like Signal and WhatsApp to withdraw service from Canada, significantly impacting user privacy and the availability of secure communications for both individuals and businesses. A key point of contention is the bill’s definition of ‘systemic vulnerabilities’; a potential ‘escape hatch’ clause suggests companies might not be required to implement backdoors if it compromises security, though interpretations of this clause vary widely among legal experts and the tech community.
hackernews · Brajeshwar · May 12, 17:35
Background: An encryption backdoor is a deliberate weakness intentionally built into a system to allow third-party access, often for law enforcement, which experts argue fundamentally undermines overall security. Mandatory data retention laws require telecommunications and internet service providers to store users’ communication metadata for specified periods, a practice that raises significant privacy concerns. Similar legislative attempts, such as the EU’s ‘Chat Control’ proposal and past U.S. debates during the ‘Crypto Wars,’ have faced strong opposition from security researchers and civil liberties groups.
Discussion: The online discussion shows significant concern, with users predicting that major encrypted services will block Canadian users and urging citizens to contact their representatives. Some commentators view the repeated legislative attempts as a tactic of persistence, while others debate the legal nuances, specifically questioning whether the bill’s ‘systemic vulnerabilities’ clause effectively negates the backdoor requirement.
Tags: #privacy, #surveillance, #encryption, #legislation, #Canada
Google to Launch “Googlebook” to Replace Chromebook, with Deep Gemini AI Integration ⭐️ 8.0/10
Google plans to launch “Googlebook” devices to replace Chromebooks, deeply integrating Gemini AI and featuring new hardware, AI-powered functions, and a potential Aluminium OS.
telegram · zaihuapd · May 13, 00:02
Tags: #Google, #Chromebook, #Gemini AI, #Operating Systems, #AI Hardware
Samsung Union Protest Cuts Chip Production, Threatens Global Supply Chain ⭐️ 8.0/10
Samsung Electronics’ largest union reported that a mass walkout over wages caused a 58% drop in foundry chip output and an 18% decline in storage chip production during the Thursday night shift from 10 PM to 6 AM. The labor dispute could severely disrupt global semiconductor supply chains, affecting industries such as artificial intelligence, machine learning, and consumer electronics that rely on Samsung’s chips. The protest centers on demands to remove bonus caps and substantially raise base pay, and the union is threatening an 18-day strike starting May 21 if the company does not compromise, which would further exacerbate supply issues.
telegram · zaihuapd · May 13, 01:11
Background: A semiconductor foundry, such as TSMC or Samsung, is a manufacturing facility that fabricates chips based on designs from other companies. Storage chips, including DRAM and NAND, are key data-storage components in electronic devices, and Samsung is a major producer in both the foundry and memory sectors.
Tags: #Samsung Electronics, #semiconductor supply chain, #labor protest, #chip production
Needle: A 26M Model for Efficient On-Device Tool Calling ⭐️ 7.0/10
Cactus has open-sourced Needle, a 26-million-parameter model distilled from Gemini, specifically optimized for high-speed tool calling on consumer devices using a novel attention-only architecture that removes all feed-forward network (FFN) layers. This work demonstrates that complex agentic functionalities like tool calling can be achieved with extremely small, efficient models, making advanced AI features feasible on budget phones, wearables, and edge devices without relying on cloud APIs. The model achieves speeds of 6000 tokens/s for prefill and 1200 tokens/s for decoding on consumer hardware. It was pre-trained on 200B tokens and then fine-tuned on 2B tokens of synthesized function-calling data covering 15 tool categories.
hackernews · HenryNdubuaku · May 12, 18:03
Background: Tool calling allows a language model to invoke external functions or APIs to perform actions like checking the weather or sending messages, forming a core building block for “agentic AI”. Traditional transformer models consist of alternating attention and feed-forward network (FFN) layers. Model distillation is a technique to transfer knowledge from a large, powerful model (like Gemini) into a smaller, more efficient one. This work rethinks the standard architecture for a specific task.
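Tool calling, as described above, reduces to the model emitting a structured request that the host parses and dispatches to a registered function. The sketch below uses hypothetical tool names and a hypothetical JSON shape, not Needle’s actual format, to show the minimal loop.

```python
import json

# Hypothetical registry of callable tools; names and signatures are illustrative.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(model_output: str):
    """Parse a structured tool call emitted by the model and invoke it."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# A small on-device model would emit JSON of this shape:
print(dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}'))  # 5
```

The model’s job is only the first half (choosing the tool and arguments); keeping that half small is what makes a 26M-parameter specialist plausible.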
Discussion: The community showed strong interest, with users suggesting practical applications like embedding the model into command-line interfaces for natural language arguments. Some discussion focused on the model’s ability to handle more complex, ambiguous tool selection beyond simple queries, and a popular suggestion was to publish a live demo playground to showcase its capabilities. One commenter humorously noted the subtle size distinction, proposing to use ‘0.026B’ instead of ‘26M’.
Tags: #model-distillation, #tool-calling, #edge-ai, #efficiency, #open-source
Major News Orgs Urged to Maintain Wayback Machine Access ⭐️ 7.0/10
A petition is circulating that urges major news organizations, including The New York Times, The Atlantic, and USA Today, not to block the Internet Archive’s Wayback Machine from crawling and archiving their websites. Blocking major news outlets from the Wayback Machine creates significant gaps in the digital historical record, harming research, accountability, and the public’s access to past information, and it highlights the growing tension between commercial web practices and digital preservation. The core technical and ethical issue is that the Internet Archive (archive.org) traditionally respects the robots.txt protocol, a file that tells crawlers which parts of a site to avoid, while some for-profit entities may ignore such directives for their own archives.
hackernews · doener · May 12, 23:11
Background: The Wayback Machine, operated by the Internet Archive, is a vast digital library that takes periodic snapshots of public websites to create a browsable historical archive. The robots.txt file is a standard used by website administrators to communicate with web crawlers, specifying which parts of the site should not be accessed or indexed. Web archives like the Wayback Machine typically use the WARC (Web ARChive) file format, an ISO standard, to store these harvested web pages.
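The robots.txt mechanics can be seen with Python’s standard urllib.robotparser. The Wayback Machine’s crawler has historically identified itself as ia_archiver, so a rule like the one below is all it takes to exclude it (the domain and rules here are illustrative).

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt that blocks the archive crawler but no one else.
rules = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())
print(rp.can_fetch("ia_archiver", "https://example.com/2026/story"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/2026/story"))     # True
```

Because compliance is voluntary, a crawler that ignores the file archives everything anyway, which is exactly the asymmetry the discussion complains about.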
Discussion: Community discussions express frustration that the Internet Archive is penalized for ethical behavior (respecting robots.txt) while others may profit by ignoring it. Commenters propose technical and policy solutions, such as implementing a cryptographically verifiable archive system or establishing an ‘escrow’ model where content is stored but published after a delay (e.g., one year or 30 days).
Tags: #web archiving, #digital preservation, #internet ethics, #Wayback Machine, #Hacker News
Rendering Sky, Sunsets, and Planets with Graphics Programming ⭐️ 7.0/10
Maxime Heckel published a detailed blog post on rendering realistic skies, sunsets, and planets using atmospheric scattering and volumetric effects. The post gives graphics programmers practical techniques for creating atmospheric effects that are essential to immersive games and simulations. Community feedback includes a correction to the sunset model: the sky should not darken immediately after the sun sets, because light continues to scatter in the atmosphere through twilight.
hackernews · ibobev · May 12, 13:26
Background: Atmospheric scattering is the process where light interacts with particles in the atmosphere, causing phenomena like blue skies and red sunsets through wavelength-dependent scattering. Volumetric rendering techniques are used to display 3D data such as clouds and fog, often involving methods like ray marching or texture-based sampling.
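The wavelength dependence behind blue skies and red sunsets follows Rayleigh’s 1/λ⁴ law. This minimal sketch compares scattering strength at representative blue and red wavelengths, normalized to green light at 550 nm; the wavelength choices are illustrative.

```python
def rayleigh_strength(wavelength_nm: float, reference_nm: float = 550.0) -> float:
    """Relative Rayleigh scattering intensity, which scales as 1/wavelength^4."""
    return (reference_nm / wavelength_nm) ** 4

blue = rayleigh_strength(450.0)   # blue light
red = rayleigh_strength(650.0)    # red light
print(round(blue / red, 2))  # 4.35: blue scatters ~4x more than red
```

That ratio is why the sky overhead looks blue, and why sunlight traversing a long atmospheric path at sunset loses its blue component and appears red.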
Discussion: The community discussion is enthusiastic, with users sharing related resources like Sebastian Lague’s video on atmospheric rendering and providing technical corrections, such as the need to model twilight after sunset. Some also mentioned combining atmospheric scattering with volumetric clouds for enhanced effects and referenced historical research papers.
Tags: #graphics-programming, #rendering, #atmospheric-scattering, #computer-graphics, #tutorials
Obsidian Unveils New Plugin Ecosystem with Automated Reviews ⭐️ 7.0/10
Obsidian has launched a new community site and an automated review system that scans every plugin version for security and code quality, replacing the previous manual review process to address scaling bottlenecks. This development resolves critical scaling issues in the plugin ecosystem by streamlining submissions, reducing team burnout, and improving security oversight, which is vital for the health and growth of Obsidian’s community-driven platform. The automated review system checks every plugin update for vulnerabilities and code quality, but it does not implement a sandboxing or permission system, leaving plugins with full disk and network access, which some view as a persistent security risk.
hackernews · xz18r · May 12, 15:45
Background: Obsidian is a note-taking application that supports a rich plugin ecosystem for extending functionality. Previously, all plugin submissions required manual review by a small team, leading to significant delays and developer frustrations due to scaling challenges. Plugins in Obsidian run with full system access, which has raised security concerns if malicious code is introduced.
Discussion: Community reactions include support from the Obsidian CEO and developers who praise the scaling improvements, but concerns are raised about security, with some users arguing that automated checks may not reliably detect malicious plugins and calling for a proper sandboxing system.
Tags: #obsidian, #plugin-ecosystem, #software-scaling, #security, #community-management
Bambu Lab Criticized for Breaking Open Source Social Contract ⭐️ 7.0/10
Bambu Lab is taking legal action against developers of third-party clients such as OrcaSlicer, citing a threat to its network security and stability, and has implemented new restrictions that force devices to connect to its cloud servers, framing unauthorized client usage as a security vulnerability. The controversy strikes at the heart of the open-source ethos in the 3D printing community, potentially setting a precedent in which companies use legal threats to control the ecosystem around their hardware, and it threatens the collaborative spirit that has driven innovation and user empowerment in desktop 3D printing. Critics argue that the security justification is weak: filtering clients by ‘user agent string’ is not robust authentication, and infrastructure problems should not be solved by locking out users. The move is seen as a regression from Bambu Lab’s earlier, more open approach toward a closed, restrictive ecosystem.
hackernews · rubenbe · May 12, 14:54
Background: In open source philosophy, a ‘social contract’ refers to the implicit agreement between a project and its community that upholds principles of transparency, collaboration, and user freedom. Bambu Lab, a popular 3D printer manufacturer, initially gained support partly due to its use of open-source software components, but has gradually implemented more restrictive controls, leading to accusations of abusing the community’s trust. This debate echoes broader industry tensions between proprietary ‘walled garden’ models and the open-source ecosystem.
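The critics’ point about user-agent filtering can be seen directly: the header is client-supplied and trivially set to any value, so it identifies a client rather than authenticating it. A minimal sketch with Python’s standard library (the URL and header value are hypothetical):

```python
from urllib.request import Request

# Any client can claim any identity; nothing here is verified by the server.
req = Request("https://example.com/printer/api",
              headers={"User-Agent": "OfficialSlicer/2.0"})

# urllib stores header names capitalized internally, hence "User-agent".
print(req.get_header("User-agent"))  # OfficialSlicer/2.0
```

Real client authentication requires a secret the server can verify, such as signed tokens or mutual TLS, not a free-form string any program can copy.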
Discussion: The community discussion is highly critical of Bambu Lab’s actions, with many users defending the open-source developers and questioning the company’s technical and security justifications. Some commentators note that Bambu Lab has previously reversed restrictive policies after facing similar public backlash, suggesting that user pressure can effectively influence the company’s direction. A few voices introduce more speculative geopolitical angles regarding the company’s servers and the ongoing conflict in Ukraine.
Tags: #open source, #3D printing, #ethics, #community debate
llm 0.32a2 Alpha Supports OpenAI’s Responses API for Interleaved Reasoning ⭐️ 7.0/10
The llm tool’s alpha release 0.32a2 adds support for OpenAI’s new /v1/responses endpoint, replacing the older /v1/chat/completions for reasoning-capable models, and now displays summarized reasoning tokens in terminal output, which users can hide with the -R flag. The update is significant because the newer API enables ‘interleaved reasoning’ for GPT-5-class models, letting the AI reason between tool calls and supporting more sophisticated and reliable agentic workflows, and it keeps llm aligned with the latest OpenAI platform advancements. A key technical feature of /v1/responses is stateful interaction: a previous response’s ID can be passed as input to maintain conversation context without manually managing message history. Note that this is an alpha release (version 0.32a2) and may still contain bugs or undergo further changes.
rss · Simon Willison · May 12, 17:45
Background: The llm tool is a popular command-line utility created by Simon Willison for interacting with large language models from various providers. OpenAI’s traditional API for chat models was the /v1/chat/completions endpoint. The newer /v1/responses endpoint is designed for more advanced agentic workflows, supporting features like stateful interactions and integrated reasoning. ‘Interleaved reasoning’ refers to the model’s ability to perform a thinking or analysis step after receiving the result of a tool call, before deciding on the next action, which improves decision-making in multi-step tasks.
Tags: #llm, #OpenAI, #reasoning, #tool-calls, #AI-tools
South Korea Proposes National Dividend from AI & Semiconductor Profits ⭐️ 7.0/10
South Korean official Kim Yong-beom proposed establishing a national dividend system using excess profits from AI and semiconductor industries, citing the Norway oil fund as a model. This proposal highlights the growing debate on redistributing wealth from technological advancements to prevent inequality and could set a precedent for other technologically advanced nations. The proposal triggered a market panic in South Korea, causing the KOSPI index to drop over 5% intraday, before being clarified as targeting excess tax revenues rather than imposing a windfall tax on corporate profits.
telegram · zaihuapd · May 12, 04:42
Background: The Norway oil fund model, formally the Government Pension Fund Global, is a sovereign wealth fund designed to manage the nation’s oil and gas revenues for the benefit of current and future generations. The proposal assumes that AI and semiconductor industries are generating structural excess profits that are partly built on national industrial foundations.
Tags: #AI policy, #semiconductors, #economic redistribution, #South Korea, #market impact
Canvas LMS Hacked, Disrupting US Schools During Finals Week ⭐️ 7.0/10
Hackers from the ShinyHunters group breached the Canvas learning management system, deploying ransom messages and causing a service outage during the critical finals week for multiple US universities and school districts. The attack also resulted in a data leak containing usernames, email addresses, and student IDs. This incident is highly significant as it disrupts a core educational platform used by millions during a high-stakes academic period, directly impacting students’ ability to access materials and take exams. It also underscores serious cybersecurity vulnerabilities in widely-adopted edtech infrastructure. The hackers, ShinyHunters, claimed responsibility for two separate incidents targeting Instructure (Canvas’s parent company) this month, with the earlier May 1st incident involving a confirmed data breach. The outage forced institutions like James Madison University to postpone and reschedule final exams originally set for Friday.
telegram · zaihuapd · May 12, 09:16
Background: Canvas is a leading cloud-based learning management system (LMS) developed by Instructure, widely used by K-12 schools, universities, and corporations for managing courses, delivering content, and administering quizzes. ShinyHunters is a well-known cybercriminal group infamous for data breaches and ransomware attacks targeting various organizations across different sectors.
Tags: #cybersecurity, #education technology, #data breach, #Canvas, #hacking
China Regulator Conditionally Approves Tencent’s Acquisition of Ximalaya ⭐️ 7.0/10
China’s State Administration for Market Regulation conditionally approved Tencent’s acquisition of Ximalaya on May 11, imposing five restrictive commitments to prevent anti-competitive behavior and ensure market fairness. This decision preserves competition in China’s online audio streaming market, protecting consumers, content creators, and automotive partners, and sets a regulatory precedent for future tech acquisitions. The five conditions include prohibitions on raising prices, reducing free content, maintaining exclusive rights agreements, bundling platforms with automakers, and restricting creators’ multi-platform distribution.
telegram · zaihuapd · May 12, 09:55
Background: China’s State Administration for Market Regulation oversees mergers and acquisitions to ensure fair competition. Conditional approvals are common in antitrust cases, as seen in international mergers like Dow-DuPont and Bayer-Monsanto, where conditions are imposed to mitigate market concerns. This approval reflects China’s approach to regulating tech industry consolidations.
Tags: #antitrust, #tech acquisition, #China regulation, #audio streaming, #competition policy
Anthropic Rejects Chinese Think Tank’s Access to Latest AI Models ⭐️ 7.0/10
Anthropic declined a request from a Chinese think tank for access to its latest AI models during a conference in Singapore organized by the Carnegie Endowment for International Peace. The incident underscores geopolitical tensions over AI development and access, with US officials viewing such requests as a potential security risk in the global AI competition. The request was not an official one from the Chinese government, but it was sufficient to alert the US National Security Council, indicating heightened vigilance over AI security measures.
telegram · zaihuapd · May 12, 12:57
Background: Large language models (LLMs) are advanced AI systems trained on vast datasets to generate human-like text, with Anthropic and OpenAI leading US development in this field. AI safety and alignment are critical concerns, as models can exhibit flaws or engage in behaviors that challenge trustworthiness. The US and China are in a competitive race for AI supremacy, prompting governments to monitor access to prevent misuse and protect national security.
Tags: #AI policy, #geopolitics, #Anthropic, #China-US relations, #AI security
US Commerce Dept. Removes AI Safety Testing Agreement Details ⭐️ 7.0/10
The U.S. Commerce Department’s website quietly removed details of an agreement with Google, xAI, and Microsoft concerning safety testing of AI models before public deployment. The original page is now gone and redirects to the Center for AI Standards and Innovation (CAISI) site, with no official explanation provided. This removal raises concerns about transparency and accountability in AI governance, as it involves critical pre-deployment safety protocols for major tech companies. The unclear reasoning behind the deletion could signal shifting priorities, internal confusion, or a retraction of commitments made during a period of active AI safety policy development. The removed agreement pertained to allowing government scientists to test new AI models for safety vulnerabilities before they are released to the public. This action occurred under the Trump administration, and neither the Commerce Department nor the White House immediately commented on the matter.
telegram · zaihuapd · May 12, 13:38
Background: Pre-deployment safety testing is a key component of AI governance frameworks, aiming to have independent experts evaluate a model’s safety and security before its public release. In 2023, the U.S. established its own AI safety institute, which was later rebranded as the Center for AI Standards and Innovation (CAISI) by 2025. Concerns have grown that AI models might ‘game’ or cheat safety evaluations, making transparent and robust testing protocols a subject of international debate.
Tags: #AI safety, #government policy, #tech companies, #Reuters news, #AI governance
SpaceX and Google Discuss Launching Orbital Data Centers ⭐️ 7.0/10
Google is negotiating a rocket launch agreement with SpaceX to advance its ‘Project Suncatcher,’ which aims to deploy prototype data-center satellites in orbit by 2027; SpaceX is separately planning to provide massive compute resources to Anthropic as part of its IPO strategy. The collaboration represents a significant step toward space-based AI infrastructure, potentially disrupting the cloud computing industry by offering a sustainable, solar-powered alternative to terrestrial data centers that face escalating energy demands. Project Suncatcher envisions a distributed network of solar-powered satellites connected via free-space optical links but faces substantial engineering challenges; Google is partnering with Planet Labs on satellite development, and SpaceX’s parallel deal with Anthropic involves delivering over 220,000 Nvidia GPUs by late May.
telegram · zaihuapd · May 12, 16:28
Background: Global AI data center power consumption is projected to increase fivefold by 2030, making sustainable alternatives like orbital solar power increasingly attractive. ‘Project Suncatcher’ is Google’s ambitious initiative to place AI compute infrastructure in space, leveraging continuous solar energy and the vacuum of space for cooling, a concept often referred to as ‘space-based cloud computing.’ Planet Labs is a commercial Earth imaging company that operates a large constellation of small satellites, providing relevant expertise for satellite deployment.
Tags: #SpaceX, #Google, #Orbital Data Centers, #AI Infrastructure, #Space Technology
Google Launches Gemini Intelligence AI Features for Pixel and Samsung Devices ⭐️ 7.0/10
Google announced Gemini Intelligence, a suite of AI features for high-end Android devices, rolling out this summer to the latest Pixel and Samsung Galaxy phones before expanding to watches, cars, glasses, and laptops later in the year. The launch is a significant step in deeply integrating advanced, context-aware AI into the core mobile experience, potentially setting new standards for task automation and interaction on high-end smartphones and across the wider Android ecosystem. Key features include task automation based on screen context; ‘Rambler,’ an AI-backed voice input for Gboard that distills messy spoken thoughts into polished text; and ‘Create My Widget,’ which generates custom widgets from descriptions. The voice input feature emphasizes privacy by not storing audio recordings.
telegram · zaihuapd · May 13, 00:32
Background: Material Design is Google’s design language for creating user interfaces, with Material 3 (Material You) being its latest iteration focused on personalization. Gemini is Google’s family of large AI models, and this announcement shows how these models are being packaged into practical, on-device features for consumers. Gboard is Google’s widely used virtual keyboard application for Android.
Tags: #AI, #Android, #Google, #Mobile AI, #Software Updates