Tag: mint – technology

  • ASUS’ 2026 Zenbook and Vivobook laptops bring Intel Core Ultra Series 3 and Snapdragon X2 Elite “AI-ready” chips to India

    This article was generated by AI and cites original sources.

    ASUS has launched a new set of 2026 Zenbook and Vivobook laptops in India, positioning the lineup around “AI-ready” processors from both Intel and Qualcomm. The models range from the entry-level Vivobook 14 at ₹98,990 to the flagship dual-screen Zenbook DUO at ₹299,990, with pre-orders running until 20 April and sales starting April 21 through ASUS Exclusive Stores, the ASUS E-shop, Flipkart, Amazon, and authorized partners.

    What ASUS is shipping: two brands, multiple “AI-ready” platforms

    According to the launch details reported by mint, ASUS’ new machines are powered by the latest AI-ready processors, including Intel Core Ultra Series 3 and Qualcomm Snapdragon X2 Elite platforms. The lineup spans both mainstream Vivobook models and the premium Zenbook range.

    In the Vivobook lineup, ASUS uses Intel Core Ultra Series 3 processors across multiple tiers: the Vivobook 14 and Vivobook 16 are powered by Intel Core Ultra 5 Series 3, while the Vivobook S14 and Vivobook S16 move to Intel Core Ultra 7 chips. ASUS also highlights that the Vivobook S series includes OLED displays, up to 1TB PCIe 4.0 storage, and up to 49 TOPS of NPU performance, alongside an FHD IR AI camera with Windows Hello support and a physical privacy shutter.

    On the Zenbook side, ASUS mixes Intel and Snapdragon configurations. The Zenbook S14 and Zenbook DUO are Intel-based, while the Zenbook A14 and Zenbook A16 use Snapdragon X2 Elite and Snapdragon X2 Elite Extreme chips, respectively. ASUS’ reported NPU performance figures—such as up to 50 TOPS for Zenbook S14 and up to 80 TOPS for the Snapdragon-powered Zenbook A14/A16—underscore how the “AI-ready” positioning is expressed in hardware terms.

    Pricing, pre-orders, and launch offers

    The reported pricing structure shows ASUS segmenting the market across both brand lines and display classes. On the Vivobook side, mint lists the following starting prices: Vivobook 14 at ₹98,990, Vivobook 16 at ₹101,990, Vivobook S14 at ₹128,990, and Vivobook S16 at ₹131,990.

    In the premium Zenbook category, the Zenbook S14 is priced at ₹179,990, the Zenbook A14 at ₹185,990, the Zenbook A16 at ₹199,990, and the flagship dual-screen Zenbook DUO at ₹299,990.

    ASUS is also offering limited-period pre-order benefits worth up to ₹11,598. The reported offer includes a 2-year extended warranty and 3-year Accidental Damage Protection for ₹999. For Zenbook DUO customers, ASUS includes an ASUS Vigour Backpack as part of launch offers. Pre-orders “have gone live” and run until 20 April, with the new series going on sale starting April 21.

    Analysis: While the offer details focus on warranty and protection, the broader launch timeline suggests ASUS is aligning the product availability window across major channels—ASUS Exclusive Stores, the ASUS E-shop, Flipkart, Amazon, and authorized retail partners. For buyers and channel partners, that can affect inventory planning and promotional timing; for ASUS, it can also help standardize demand capture across price tiers.

    Key hardware specifications: displays, NPUs, and connectivity

    The specifications reported for the top models show ASUS leaning into both display capabilities and dedicated compute for on-device AI workloads, using NPU performance figures as a common thread.

    Zenbook S14 (UX5406AA): It features a 14-inch 3K OLED display with a 120Hz refresh rate and 1100 nits HDR peak brightness. ASUS reports a thickness of 1.1cm and a weight of 1.2kg (with reported dimensions in the table listed as 1.19 ~ 1.29 cm and 1.20 kg). It supports up to Intel Core Ultra 9 386H processors, with 50 TOPS NPU performance listed in the table. For battery and charging, mint specifies a 77 Wh battery and a 68W Type-C adapter. Connectivity includes Wi‑Fi 7, Bluetooth 6.0, 2x Thunderbolt 4, USB 3.2 Type-A, and HDMI 2.1. The camera is listed as an FHD 3DNR IR AI camera with ambient light sensor.

    Zenbook DUO (UX8407AA): The standout feature is the dual 14-inch 3K OLED touchscreens with a 144Hz variable refresh rate. ASUS lists the processor as Intel Core Ultra 7 Processor 355 with 49 TOPS NPU performance. The battery and charging are listed as 99 Wh and a 100W Type-C adapter. Connectivity is similar in class—Wi‑Fi 7, Bluetooth 5.4, 2x Thunderbolt 4, USB 3.2 Type-A, and HDMI 2.1. ASUS also lists the camera as an FHD 3DNR IR AI camera with ambient light sensor. The reported thickness range is 14.56–23.34mm, with an approximate weight of 1.35 kg (without keyboard).

    Snapdragon-powered Zenbook A14 and A16: mint reports that the Snapdragon-powered models use Snapdragon X2 Elite for A14 and Snapdragon X2 Elite Extreme for A16, delivering up to 80 TOPS of NPU performance. While the source excerpt includes the NPU claim, it does not provide the full display, battery, or connectivity tables for these exact models in the visible content.

    Analysis: ASUS’ spec sheet approach ties “AI-ready” branding to measurable hardware indicators—especially NPU performance (TOPS). Observers may watch how these NPU figures translate into software experiences, since the source focuses on hardware capabilities rather than specific AI applications or benchmarks.

    Why this matters for the laptop market

    ASUS’ 2026 India launch reflects a broader hardware shift: laptop vendors are framing new CPU platforms as “AI-ready,” and they are making NPU performance a central part of the pitch. In this case, ASUS is spanning Intel Core Ultra Series 3 and Qualcomm Snapdragon X2 Elite families inside the same overall product event, with both Zenbook and Vivobook models carrying AI-camera features (including FHD IR AI cameras with Windows Hello support) and privacy shutters on the Vivobook S series.

    For buyers, the reported pricing and pre-order timeline create a structured way to compare what each model class includes—OLED tiers, NPU performance ranges, and connectivity options such as Wi‑Fi 7 and Thunderbolt 4 on key Zenbook configurations. For the industry, the dual-platform strategy—Intel across multiple Zenbooks and Vivobooks, plus Snapdragon in Zenbook A models—could suggest that manufacturers are continuing to hedge across chip ecosystems while standardizing the “AI-ready” messaging through TOPS and on-device camera features.

    Analysis: The presence of both single-screen high-refresh OLED models (Zenbook S14 with 120Hz) and a dual-screen OLED configuration (Zenbook DUO with two 14-inch touchscreens and 144Hz variable refresh rate) indicates that “AI-ready” is being paired with multiple form factors. This could influence how software vendors design experiences for NPUs—potentially requiring compatibility across different display layouts and performance envelopes—though the source does not describe any specific software.

    Source: mint – technology

  • Google DeepMind Hires Philosopher Henry Shevlin to Focus on Machine Consciousness and Human-AI Relationships

    This article was generated by AI and cites original sources.

    Google DeepMind has appointed Henry Shevlin to a philosopher position focused on machine consciousness, human-AI relationships, and AGI readiness. The hire signals that leading AI labs are integrating academic expertise from philosophy and related fields into their research operations.

    The Appointment

    According to mint, DeepMind’s new hire is not an AI engineer or researcher. Instead, the lab has created a role explicitly titled as a philosopher position. Shevlin will work on topics including “machine consciousness,” “human-AI relationships,” and “AGI readiness.”

    In a post on X (formerly Twitter), Shevlin announced that he would be joining DeepMind in May. He also indicated he would continue his research and teaching at Cambridge on a part-time basis. This part-time arrangement suggests DeepMind is integrating the role into ongoing academic and industry work streams rather than relocating the entire research agenda around this position.

    Who Henry Shevlin Is

    Shevlin currently serves as Associate Director (Education) at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. According to mint, he has expertise across cognitive science, AI ethics, animal minds, and consciousness. He has published multiple papers in journals including the Journal of Consciousness Studies.

    Originally from rural England, Shevlin earned a BA in Classics and a BPhil in Philosophy from the University of Oxford. He later completed his PhD in philosophy at the CUNY Graduate Center between 2010 and 2016, and served as a lecturer at Baruch College during that period.

    Research Focus Areas

    DeepMind’s stated focus areas—machine consciousness, human-AI relationships, and AGI readiness—form a cluster of research themes. The mint article does not provide technical deliverables, evaluation methods, or specific integration points with DeepMind’s model development process.

    The choice of topics reflects a pattern in the AI industry: as systems become more capable, labs increasingly discuss not only performance but also interpretation, interaction, and readiness for more general capabilities. A philosopher role could help operationalize questions that are difficult to reduce to standard benchmarks.

    For example, “machine consciousness” is presented as a research area rather than a specific engineering feature or measurement. Similarly, “human-AI relationships” and “AGI readiness” are listed as focus topics without technical definition in the source material.

    Industry Precedent

    This hiring move reflects a broader trend in AI research. According to mint, this is “not the first time that an AI company has hired a philosopher.” Late last year, Anthropic hired Amanda Askell, a PhD philosopher and AI researcher, to work as an in-house philosopher on areas including AI alignment and fine-tuning.

    The Anthropic example suggests that philosopher roles in AI labs can be tied to technical work such as alignment and fine-tuning, rather than serving only public relations or ethics functions. For DeepMind’s appointment, the source material does not specify whether Shevlin’s work will connect to model training, alignment methods, or evaluation.

    What This Signals

    DeepMind’s appointment of Henry Shevlin indicates that “human-AI relationships” and “machine consciousness” are being treated as research topics worth staffing at a major AI lab. The practical impact—what changes in systems, processes, or evaluation—remains unspecified in the source material. However, the creation of a philosopher position suggests that DeepMind is investing in conceptual frameworks that could influence how teams reason about advanced AI capabilities and their interaction with people.

    Industry observers may watch whether the role produces publications, technical guidance, or internal frameworks that align the lab’s engineering work with the stated research focus areas.

    Source: mint – technology

  • Booking.com breach exposes reservation data and enables targeted phishing attacks

    This article was generated by AI and cites original sources.

    Booking.com confirmed that hackers breached its systems and accessed customers’ personal data, warning that “unauthorised third parties may have been able to access certain booking information associated with your reservation.” The company said it noticed “suspicious activity affecting a number of reservations” and took steps to contain the issue, including updating the PIN number for affected reservations. While Booking.com told The Guardian that “financial information was not accessed,” the incident highlights how reservation platforms can become targets for data theft and follow-on social engineering.

    What Booking.com says was accessed

    In its confirmation, Booking.com did not disclose the exact number of people affected, the regions impacted, or the timeframe of the breach. However, it did clarify that “financial information was not accessed,” according to reporting by mint. The company’s message to customers, as shared in notifications circulated on social media, focused on the scope of booking-related data that could have been exposed.

    Based on customer notifications discussed in the mint report (including a screenshot shared by a Reddit user), Booking.com said that unauthorised parties may have accessed “certain booking information associated with your reservation.” The company warned that hackers may have gained access to names, email addresses, phone numbers, and specific booking details. It also stated that attackers could view “anything that you may have shared with the accommodation.”

    That last point is significant from a data-security standpoint because it suggests the breach may not have been limited to a narrow set of database fields. Instead, the notification language indicates that data flows between Booking.com and accommodations—such as messages or other content shared in the context of a stay—may have been accessible to the attackers under the compromised access.

    Containment steps: PIN resets and direct guest notification

    Booking.com said it “recently noticed suspicious activity affecting a number of reservations and we immediately took action to contain the issue,” as quoted in the customer notification message shared on Reddit and reported by mint. Booking.com spokesperson Courtney Camp told TechCrunch (as referenced by mint) that the company noticed “suspicious activity involving unauthorised third parties being able to access some of our guests’ booking information.” She added that Booking.com “took action to contain the issue,” updated PIN numbers for affected reservations, and directly informed guests.

    Updating reservation PINs serves as a security control: it can disrupt attacker attempts to authenticate or apply changes tied to those reservations. The company’s approach reflects how reservation systems often rely on secondary verification beyond passwords—especially when customers manage bookings through confirmations, links, or reservation-specific credentials.

    At the same time, Booking.com’s decision not to disclose the breach window, impacted regions, or affected population size leaves outside observers with fewer technical details about how long the attackers may have had access and how widely the exposure may have spread across systems.

    Stolen booking data enables targeted phishing

    According to the mint report, a user who posted the notification screenshot said they received a targeted phishing message via WhatsApp two weeks earlier. The message reportedly included personal information and booking details that matched what the company later said could have been accessed.

    This suggests attackers may be using stolen reservation data to make social engineering more convincing—an approach that does not require direct access to payment systems to be harmful. Even if “financial information was not accessed,” attackers could still attempt to redirect payments, harvest additional credentials, or manipulate communications between travelers and accommodations.

    The mint report notes Booking.com’s guidance for staying safe: if users were affected, they should look for an official confirmation in their mailbox. For recent bookings, the report advises travelers to be “extremely wary of urgent payment requests from hoteliers” and to prefer payment only through Booking.com’s official portals. That advice aligns with a common pattern in incident responses for consumer platforms: when attackers can reference real booking details, urgency-based prompts can become a tactic to bypass normal verification steps.

    Prior breach and regulatory context

    Booking.com’s history provides context for the current incident. According to the mint report, Booking.com suffered a phishing attack in 2018 that compromised booking data of over 4,000 customers. In that earlier case, the platform reportedly had login credentials stolen from hotel employees in the UAE. Booking.com was later fined €475,000 by the Dutch Data Protection Authority for reporting the breach 22 days late, exceeding the 72-hour legal limit.

    While the mint summary does not provide technical details on how the 2018 attack operated beyond the credential theft mechanism, it underscores a recurring pattern: phishing remains an entry point into larger reservation ecosystems, and data exposure can extend beyond a single user account to include booking-associated records and partner interactions.

    Looking forward, observers may watch how Booking.com’s incident response is operationalized—particularly the speed and completeness of customer communications, the effectiveness of PIN resets in thwarting account-linked changes, and how the company validates whether shared content with accommodations was accessed. The lack of disclosed details about the breach timeframe and affected regions in the current reporting may also affect how quickly security researchers and affected users can assess impact.

    What this means for reservation platforms

    The confirmed breach, the specific categories of data mentioned in customer notifications, and the reported WhatsApp phishing tie-in point to a security challenge that extends beyond perimeter defense. Reservation systems handle identity attributes (names, emails, phone numbers), itinerary context (specific booking details), and potentially communication artifacts (“anything that you may have shared with the accommodation”). If attackers can access those records, they can increase the credibility of downstream scams even when direct payment systems are not compromised.

    Booking.com’s stated control—updating PIN numbers for affected reservations—shows how platform-specific authentication mechanisms can be used to contain harm after unauthorized access is discovered. Meanwhile, the company’s consumer-facing guidance to use official payment portals and to scrutinize urgent requests reflects the reality that attackers can exploit real booking context to drive fraudulent actions.

    Source: mint – technology

  • Motorola Edge 70 Pro teaser points to camera, battery, and durability focus—via Flipkart microsite

    This article was generated by AI and cites original sources.

    Motorola has begun teasing the Edge 70 Pro in India, with a dedicated microsite on Flipkart that indicates the phone will be sold through the e-commerce platform. The company has not provided a launch date yet, but the microsite URL includes “moto-coming-soon-apr26”, which suggests a potential reveal on 26 April. The teaser confirms three color options—Blue, Green, and White—and leaks point to a focus on low-light imaging, battery capacity, charging speed, and durability certifications.

    Flipkart microsite and the implied reveal window

    Motorola’s Edge 70 Pro tease is tied to a Flipkart microsite that went live after the company launched the Edge 70 Fusion. The microsite does not explicitly name the Edge 70 Pro, but it provides a brief glimpse of the phone along with its color variants, confirming the availability of Blue, Green, and White options in India.

    The microsite URL reads ‘moto-coming-soon-apr26’. While the exact announcement date has not been confirmed, the URL structure suggests that the upcoming phone could be revealed on 26 April. Microsite naming conventions of this type often function as internal schedule markers, though the timing remains unconfirmed.

    Design and imaging: curved display, triple cameras, and low-light capabilities

    Motorola’s official India handle has begun teasing a new phone with low-light capabilities. A leaked poster has surfaced on social media showing a curved screen—presumably pOLED—and a triple camera system on the back.

    On the camera side, previous leaks describe a Sony Lytia-powered primary camera system with support for ‘Super Zoom’. The phone is expected to feature a 50MP selfie shooter on the front with autofocus support. These specifications suggest Motorola is targeting both rear and front capture capabilities.

    For context, the Edge 70 Fusion predecessor features a 50MP main camera using a Sony LYTIA 700C sensor with OIS, a 50MP autofocus ultra-wide camera with macro mode, and a 10MP 3x telephoto camera with OIS that supports 50x Super Zoom. The Edge 70 Pro could follow a similar imaging strategy, though these remain expectations derived from leaks rather than confirmed specifications.

    Battery, charging, and durability: certifications and capacity upgrades

    Battery performance and charging speed are areas where the Edge 70 Pro is expected to improve. Previous leaks suggest the Edge 70 Pro could feature a 6,500mAh battery, up from 6,000mAh on its predecessor, with 90W fast charging support.

    Durability expectations include IP68 and IP69 ratings for water and dust resistance, along with MIL-STD-810H certification. These certifications reference standardized test frameworks rather than general claims of ruggedness. The Edge 70 Fusion carries the same IP68 + IP69 and MIL-STD-810H specifications, indicating that the Edge 70 Pro may maintain this baseline while improving internal components such as battery capacity and charging.

    Display and performance expectations

    The Edge 70 Fusion spec sheet provides context for potential Edge 70 Pro specifications. The predecessor features a 6.7-inch display with 1.5K resolution, 10-bit pOLED, a 120Hz refresh rate, and up to 4,500 nits peak brightness. It uses a Dimensity 8350 Extreme 4nm processor paired with a Mali-G615 MC6 GPU.

    Software includes Android 15 with 3 OS upgrades + 4 Years SMR. Memory and storage options are 8GB/12GB LPDDR5X RAM with 256GB UFS 4.0 storage. Connectivity features include 5G SA/NSA, dual 4G VoLTE, Wi-Fi 6E, Bluetooth 5.4, GPS, NFC, and dual SIM. The Edge 70 Pro specifications have not been confirmed, but the predecessor’s specs provide a baseline for understanding likely upgrades.

    Potential Edge 70 Pro+ variant

    Motorola could also be launching an Edge 70 Pro+ model this year. This possibility was earlier spotted on HDR10+ certifications alongside the Edge 70 Pro. Based on Motorola’s phased launch strategy for the Edge series, the Edge 70 Pro+ could have a separate launch from the standard Pro model. The HDR10+ certification signals attention to display-related performance features, though specifications for the Pro+ variant have not been disclosed.

    Source: mint – technology

  • Anthropic Discusses Mythos Model with Trump Administration Amid Pentagon Contract Dispute

    This article was generated by AI and cites original sources.

    Anthropic says it is in discussions with the Trump administration about its frontier AI model Mythos and future releases, even as the Pentagon has barred the company from doing business following a contract dispute over guardrails for military AI tool use. In remarks at the Semafor World Economy event in Washington, Anthropic co-founder Jack Clark said the company’s contracting disagreement should not overshadow its focus on national security, while indicating that the government needs visibility into Anthropic’s frontier systems.

    Mythos: Coding and Autonomous Capabilities

    The model at the center of the dispute is Anthropic’s frontier AI system, Mythos. Announced on April 7, Anthropic described it in a blog post as its “most capable yet for coding and agentic tasks,” emphasizing the model’s ability to act autonomously.

    This “agentic” capability is significant because it changes how an AI system can be deployed in software workflows. According to experts cited in the source, Mythos’s high-level coding abilities could enable a “potentially unprecedented ability” to identify cybersecurity vulnerabilities and devise ways to exploit them. The combination of autonomous agent behavior with strong coding performance points to a system that can move beyond answering questions to take actions resembling software engineering and security testing.

    The Pentagon’s concern appears tied to how such autonomy and coding power are constrained in military contexts. The source does not provide technical details about Mythos’s internal architecture, guardrail mechanisms, or evaluation methods, but connects the model’s “agentic tasks” framing to outcomes that security experts say it could produce.

    Pentagon Contract Dispute and Supply-Chain Risk Designation

    The Pentagon’s stance stems from a contract dispute between Anthropic and the U.S. military over guardrails—specifically, how the military could use AI tools. According to the source, the agency labeled Anthropic a supply-chain risk last month, resulting in the Pentagon cutting off business with the company. The Pentagon barred Anthropic’s use by the Pentagon and its contractors.

    The supply-chain risk designation is notable in technology procurement because it treats an AI vendor as a risk to operational inputs, not merely as an isolated model. While the source does not detail the Pentagon’s exact risk criteria, it indicates the government’s review is tied to deployment safety and control—particularly the guardrails governing what an AI system can do and under what conditions.

    The source notes that a Washington, D.C., federal appeals court last week declined to block the Pentagon’s national security blacklisting of Anthropic “for now,” described as a win for the Trump administration. This decision came after another appeals court had ruled the opposite in a separate legal challenge by Anthropic.

    Anthropic Co-founder: Government Discussions on Mythos and Future Models

    Against this backdrop, Anthropic co-founder Jack Clark said the company is discussing Mythos with the Trump administration. Speaking at the Semafor World Economy event in Washington, Clark acknowledged “a narrow contracting dispute” and said he did not want it “to get in the way” of national security priorities.

    Clark framed the company’s position as requiring government awareness of the technology. He stated: “Our position is the government has to know about this stuff … So absolutely, we’re talking to them about Mythos, and we’ll talk to them about the next models as well.

    The source notes that the nature and details of these talks were not immediately clear, including which agencies are involved. This lack of clarity leaves open questions about whether conversations focus on procurement terms, safety evaluation, operational deployment constraints, or broader policy alignment.

    Implications for AI Deployment and Cybersecurity

    Based on the source, several industry-relevant implications emerge, though the facts do not fully resolve all questions.

    Guardrails are becoming a central procurement requirement. The Pentagon’s decision to cut off business following a guardrails dispute suggests that model capability alone may not determine vendor eligibility. The ability to agree on constraints for autonomous behavior appears to be a gating factor. Future contracts may emphasize guardrails as a technical specification or as a governance mechanism for monitoring and controlling deployments.

    Autonomy combined with coding performance raises dual-use concerns. Experts cited in the source note that Mythos could identify cybersecurity vulnerabilities and devise ways to exploit them. This indicates that capabilities supporting defensive tooling—finding weaknesses, understanding code paths—can also support offensive activity. This may explain why the guardrails dispute could be particularly challenging when an AI system is designed to act autonomously in coding tasks.

    Government engagement may continue despite procurement pauses. Clark’s remarks indicate that Anthropic is engaging with the government about Mythos and future models, even after the Pentagon’s cutoff. The combination of ongoing discussions and the Pentagon’s blacklisting suggests a distinction between procurement decisions and information-sharing or evaluation discussions.

    Legal outcomes could influence technical and contractual design. The source notes conflicting appeals outcomes: one court declined to block the national security blacklisting “for now,” while another appeals court had ruled the opposite in a separate legal challenge. If litigation remains active, companies may adjust how they negotiate guardrails, define acceptable uses, and structure contracts to reduce supply-chain restrictions.

    For the AI industry, the central story involves not only Mythos’s “agentic tasks” positioning, but also how governments are treating autonomous coding models as sensitive systems requiring enforceable constraints. As Anthropic discusses Mythos and “the next models” with the Trump administration, the next technical and contractual steps—particularly around guardrails—may signal how frontier AI systems are integrated into high-stakes environments.

    Source: mint – technology

  • Redmi A7 Pro 5G launches in India at ₹12,499 with 6.9-inch 120Hz display and 6,300mAh battery

    This article was generated by AI and cites original sources.

    Xiaomi has launched the Redmi A7 Pro 5G in India, positioning a new “Pro” model in its A Series lineup within the sub-₹15,000 smartphone segment. According to mint, the phone starts at ₹12,499 for the 4GB RAM/64GB storage variant and features a 6.9-inch display with a 120Hz refresh rate, a 6,300mAh battery, and an octa-core Unisoc T7250 processor. It goes on sale from April 15 through Amazon India, Mi.com, and offline retail stores, with launch offers including a ₹1,000 introductory discount and up to three months of no-cost EMI.

    Display and design

    The Redmi A7 Pro 5G features a 6.9-inch display with a 120Hz refresh rate and peak brightness up to 800 nits. The device includes Wet Touch technology 2.0, which allows users to continue operating the phone when fingers are damp. For physical protection, the phone has an IP52 rating for splash and dust resistance. The device measures 8.15mm in thickness.

    Battery and charging

    The Redmi A7 Pro 5G includes a 6,300mAh battery that supports 15W charging via an in-box charger. The phone also includes 7.5W wired reverse charging, allowing it to serve as a power source for other devices.

    Performance and software

    The phone is powered by an octa-core Unisoc T7250 processor and runs Xiaomi HyperOS 3.0. It supports up to 8GB of virtual RAM expansion and includes a microSD card slot supporting up to 2TB of expandable storage. The device comes in two configurations: 4GB RAM/64GB and 4GB RAM/128GB.

    Camera and connectivity

    The Redmi A7 Pro 5G features a 32MP AI dual rear camera setup with HDR support. The camera app includes AI Sky for image enhancement and a Document Mode for digitizing receipts and notes. The device has an 8MP front-facing camera for selfies and video calls. Additional features include a side-mounted fingerprint sensor, a 3.5mm headphone jack, and a speaker with a 200% volume boost feature. The phone supports 5G connectivity.

    Pricing and availability

    The Redmi A7 Pro 5G is priced at ₹12,499 for the 4GB RAM/64GB variant and ₹13,499 for the 4GB RAM/128GB model. With the ₹1,000 introductory discount, the effective starting price is ₹11,499 for the 64GB model and ₹12,499 for the 128GB model. The phone is available in Black, Mist Blue, and Sunset Orange. It will be sold via Amazon India, Mi.com, and offline retail stores starting April 15. Additional purchase incentives include up to three months of no-cost EMI.

    Source: mint – technology

  • Apple tests four frame designs for display-free Siri smart glasses, targeting 2027 release

    This article was generated by AI and cites original sources.

    Apple is developing display-free smart glasses designed to compete with Meta’s Ray-Ban-style wearables, according to a report by Bloomberg’s Mark Gurman cited by mint. The report indicates Apple has internally codnamed the glasses “N50” and is testing at least four different frame designs—an approach that emphasizes how much of the product’s differentiation is expected to come from form factor, materials, and the software integration with Apple Intelligence.

    Four frame styles in testing, with a 2027 target release

    According to the report, Apple could unveil its smart glasses by the end of this year or in early next year, with the actual release date targeted for 2027. The glasses are described as display-free, aligning them with Meta’s Ray-Ban smart glasses rather than conventional headsets that rely on visible displays.

    Apple’s internal development includes a design process with multiple form factors. The report states Apple’s design team has created at least four different styles and plans to launch them in multiple color options. The frames are described as being made of acetate, a material noted as more durable and more premium than standard plastic used by most brands.

    While the report does not detail each of the four designs individually, the emphasis on multiple frame styles indicates that Apple is treating the wearable’s physical design as a key variable during development—likely to balance comfort, durability, and integration with the broader system.

    Siri-powered features: photos, calls, music, notifications, and voice interaction

    The report describes Apple’s smart glasses as addressing everyday user requirements. These functions reportedly include capturing photos and videos, syncing with an iPhone for editing and sharing, handling phone calls, listening to music, receiving notifications, and hands-free voice interaction.

    The voice assistant is reported to be an upcoming version of Siri, which could be revealed with iOS 27 in June. This timing is significant for how the glasses’ software experience could be structured: the glasses may depend on a newer Siri foundation delivered through the iPhone operating system rather than relying solely on on-device processing.

    The described workflow—capture on glasses, edit and share via iPhone—suggests a design where the wearable functions as a sensor-and-input device, while the phone serves as the primary compute and distribution hub.

    Computer vision and Apple Intelligence: contextual awareness features

    The smart glasses are described as part of a three-pronged AI wearables strategy that also includes new AirPods and a camera-equipped pendant. The report states each of these devices is designed to leverage computer vision to interpret the user’s surroundings and provide contextual awareness to Apple Intelligence.

    The report points to specific feature examples expected from this approach: improved turn-by-turn map directions and visual reminders. The emphasis on computer vision indicates that the glasses’ core differentiation may center on understanding what the user is looking at and translating that into assistance, rather than relying on a visible display.

    The stated reliance on Apple Intelligence suggests the glasses experience may be integrated with the broader Apple AI ecosystem, potentially shaping how quickly new capabilities arrive through iOS releases and Apple Intelligence updates.

    In-house design strategy and manufacturing approach

    The report contrasts Apple’s plan with other companies’ approaches to smart glasses design. Unlike Meta, which relies heavily on its partnership with EssilorLuxottica, Apple is said to be planning to handle the design of its smart glasses entirely in-house to offer higher-end build quality.

    This approach differs from Google and Samsung, which are using Warby Parker for frames. Apple’s in-house approach could affect how the company iterates on hardware form factors: changes to materials, hinge design, weight distribution, and accessory ecosystems may be controlled within Apple’s engineering cycle rather than coordinated through a third-party partner.

    From a strategy perspective, this could allow Apple to reduce constraints that come with external frame supply decisions—particularly relevant when testing multiple frame styles and targeting multiple color options. The in-house approach may also be important given the display-free design, where mechanical design and user interaction with audio and voice input become central to usability.

    Source: mint – technology

  • Amazon’s Project Houdini targets faster AI data centres by moving construction off-site

    This article was generated by AI and cites original sources.

    Amazon is reportedly developing an internal initiative called Project Houdini to speed up how it builds the data centres that support cloud and AI workloads. According to internal documents reported by Business Insider and summarized by mint, the effort focuses on shifting much of data-centre construction into factory settings—turning portions of the main server space into preassembled modules—so that Amazon Web Services (AWS) can bring new computing capacity online faster.

    The scale of the problem is clear in the numbers described in the report. Traditional on-site construction for a data hall is characterized as a largely “stick-built” process that can require 60,000 to 80,000 labour hours and take about 15 weeks before servers can even be installed. The initiative’s goal, as described in the leaked estimates, is to reduce the time from that baseline to two to three weeks after construction starting, while also eliminating up to 50,000 on-site electrician hours.

    What Project Houdini changes: from stick-built halls to factory modules

    The core technology shift in Project Houdini is not a new server or a new chip; it is a change in data-centre construction methodology. The report describes the “stick-built” approach for building a data hall as a sequence of on-site tasks—installing racks, running cabling, and wiring power systems—performed in order by workers. In that model, the main server space is built on-site, which increases both labour intensity and schedule risk.

    Project Houdini, by contrast, is said to move “various DC build scopes to a factory setting.” The described end state is that the most time-sensitive or labour-heavy portions of the data hall are built off-site in controlled environments, then delivered for final assembly. In the document described by mint, Amazon is exploring ways to “take various DC build scopes to a factory setting,” with the intent of accelerating “DC delivery.”

    One of the key mechanical concepts mentioned in the report is a modular approach using large preassembled sections of the data hall. These large sections are referred to as “skids.” Each module is described as roughly the size of a semi-trailer—about 45 feet long and weighing around 20,000 pounds—and is said to arrive on-site with multiple systems already installed. The report lists items that could be included on the skid: racks, power distribution, cabling, lighting, and fire and security systems.

    From a technology operations perspective, that bundling matters because it replaces a long on-site integration chain with a more standardized production-and-install sequence. The report also frames the factory approach as a way to standardize builds, reduce errors, and depend less on local labour markets—factors that are often tightly coupled to schedule variability in large-scale infrastructure projects.

    Schedule impact: compressing the path to installed servers

    In the report’s description of traditional construction, the timeline is dominated by the period before servers can be installed. The “stick-built” data-hall process is said to take roughly 15 weeks before servers can even be installed, and it can demand 60,000 to 80,000 labour hours. That implies that, even if servers and other components are available, the critical path can be the physical readiness of the hall.

    Project Houdini’s reported plan aims to shorten that critical path. The leaked internal estimates described by mint say that with the new approach, AWS could begin installing servers within two to three weeks of construction starting—down from around 15 weeks under traditional methods. The report also ties the schedule reduction to a labour shift: it estimates the approach could eliminate up to 50,000 on-site electrician hours.

    Amazon’s own public framing of the broader issue, as included in the report, is that it faces “capacity constraints that yield unserved demand.” In its recent annual shareholder letter, CEO Andy Jassy is quoted as describing those constraints. While the report does not attribute that quote specifically to Project Houdini, it places the construction acceleration in the context of AWS needing to expand computing capacity faster.

    As analysis, observers may view Project Houdini as an attempt to convert construction throughput into more immediate capacity availability. If the bottleneck is the time required to prepare halls for server installation, then reducing that time could help AWS respond to demand more quickly—assuming the supply chain for modules, transport, and on-site completion can scale at the same pace.

    Why off-site fabrication is a technical lever for data centres

    The report describes Project Houdini as relying on controlled factory environments. That emphasis points to a recurring theme in large infrastructure engineering: when work that is normally performed on-site is moved into a factory, the process can become more repeatable. According to the summary in mint, Amazon expects the factory approach to help by standardizing builds and reducing errors, while also reducing reliance on local labour markets.

    Even with those advantages, the approach changes the technology stack of the construction process. Instead of coordinating many sequential on-site activities—rack installation, cabling runs, power wiring, and other systems—Amazon would need a manufacturing process that can reliably produce skids with integrated systems. The report’s description that each skid could include racks, power distribution, cabling, lighting, and fire and security systems suggests a higher level of pre-integration than is typical in purely on-site builds.

    Because the report is based on leaked internal documents, it does not provide engineering details such as tolerances, testing procedures, or how connections between skids are handled after delivery. Still, the described module scope indicates a move toward treating parts of a data hall as a packaged subsystem rather than a set of individually assembled components.

    From an industry standpoint, this is also a signal about how cloud providers may treat infrastructure as a production problem. The report notes that Amazon alone is spending around $20 billion on capital expenditure, much of it linked to AWS data centres, and that building these facilities remains slow and complex. Project Houdini is framed as an attempt to address that complexity by changing where and how work happens.

    What to watch next for AWS and data-centre engineering

    The information in mint centers on reported internal documents and estimates. That means the most concrete items are the described construction methodology and the reported timeline and labour reductions: 15 weeks and 60,000 to 80,000 labour hours in the traditional process, versus two to three weeks and the potential elimination of up to 50,000 on-site electrician hours under Project Houdini’s approach.

    As analysis, the industry implications are likely to cluster around execution and scaling. If AWS can reduce the time to begin installing servers, it could reduce the delay between capital deployment and usable capacity—directly relevant to the “capacity constraints” described by Andy Jassy. At the same time, the modular strategy would require consistent factory output and on-site integration that can preserve the gains from off-site standardization.

    For tech enthusiasts tracking AI infrastructure, the story matters because it targets the physical layer that often sets the pace for AI compute expansion. The report suggests that, alongside server and networking advances, data-centre construction logistics may become a competitive factor—especially when demand for capacity is described as unserved.

    Source: mint – technology

  • Rockstar Games Confirms Data Breach via Third-Party Provider; ShinyHunters Demands Ransom

    This article was generated by AI and cites original sources.

    Rockstar Games confirmed it suffered a data breach tied to a third-party provider. The ransomware group ShinyHunters has demanded payment by April 14, 2026, or threatened to leak stolen data. In a statement shared with Kotaku, Rockstar said the incident involved “a limited amount of non-material company information” and that it “has no impact” on the company or its players. The case highlights how modern game-development environments—often built on external cloud and monitoring tools—can expand the attack surface beyond a single organization.

    Breach routed through third-party cloud service

    According to the report, Rockstar linked the incident to a third-party data breach, describing it as an intrusion “in connection with a third-party data breach.” The company confirmed that “a limited amount of non-material company information was accessed” and stated that the incident “has no impact on our organisation or our players.” This distinction matters technically because it separates what was accessed from what operational systems were affected. Even when player-facing services are not impacted, stolen corporate data can create downstream risks for incident response, legal exposure, and future targeted attacks.

    The ransomware group’s messaging ties the entry point to a specific service. ShinyHunters posted a message stating that “Rockstar Games, your Snowflake instances were compromised thanks to Anodot.com.” The group demanded payment and referenced a deadline of “14 Apr 2026,” along with threats of “several annoying (digital) problems.”

    Operationally, the mention of “Snowflake instances” and “Anodot.com” points toward a common enterprise pattern: data and analytics platforms, including cloud data warehouses, are monitored and cost-managed through third-party tooling. If credentials, access paths, or misconfigurations exist in that chain, attackers may reach data stores without breaching internal developer networks directly.

    Ransom demand and unclear scope

    ShinyHunters has demanded a ransom by April 14 and threatened to publish stolen data if Rockstar does not pay. The group’s post urged Rockstar to “reach out” before the deadline, stating “Make the right decision, don’t be the next headline.”

    However, the technical scope remains uncertain. It is not yet clear what kind of data ShinyHunters has access to, though reports suggest the hack may have targeted corporate data rather than player information. That distinction aligns with Rockstar’s statement about “non-material company information,” but the specific records involved remain unclear.

    According to The Verge, possible leaked data could include financial records, marketing data, or contracts with companies such as Sony and Microsoft. Even if player systems are unaffected, documents related to finance, marketing, and contracts can be used for follow-on attacks such as targeted social engineering, vendor impersonation, or further compromise attempts.

    Third-party and data warehouse access patterns

    This incident is not presented as a direct breach of Rockstar’s player infrastructure. Instead, the reported path runs through a third-party provider used for “cloud cost monitoring and analytics software service,” identified as Anodot. The group’s claim that “Snowflake instances were compromised” suggests that the attacker may have targeted the data layer—where analytics, reporting, and operational insights often consolidate information from multiple systems.

    From a security architecture perspective, this combination—external monitoring and analytics tooling plus a cloud data platform—can create multiple technical risk points: integration permissions, credential lifecycles, logging visibility, and the way access to data warehouses is brokered. The available reports do not provide details about which controls failed or how access was obtained, but they establish that the breach involved a third-party connection and a cloud analytics environment.

    Rockstar’s statement that the incident has “no impact” on the organization or players may reduce immediate operational disruption, but it does not remove the broader technology implications. If data access was limited to “non-material company information,” the immediate business impact may be smaller. However, the presence of a ransomware threat and the possibility of leaked corporate files indicate that the attacker obtained enough access to monetize or pressure the victim. In the industry, this can shape how teams evaluate third-party risk, monitor data warehouse access, and handle incident response when the initial foothold is outside the primary corporate boundary.

    Rockstar’s prior security incidents

    This is not the first time Rockstar has faced a cybersecurity incident. In 2022, Rockstar suffered a major security breach carried out by an 18-year-old member of the hacking collective LAPSUS$. That attacker reportedly gained access to Rockstar’s Slack service, resulting in over 90 early development videos of GTA 6 leaking online. The hackers also reportedly stole source code for GTA 5 and GTA 6 and attempted to blackmail Rockstar for its return.

    The contrast between 2022’s Slack-mediated access and the current incident’s third-party cloud monitoring and Snowflake involvement underscores a recurring theme in enterprise security: attackers can shift methods while targeting valuable assets. In both cases, the likely value is tied to development and corporate data. The persistence of extortion—leak threats paired with a ransom deadline—also suggests that ransomware groups may seek both direct payment and leverage through public disclosure.

    ShinyHunters has previously been linked to ransomware attacks on major companies including Google, Gucci, Balenciaga, Alexander McQueen, Louis Vuitton, IKEA, Adidas, McDonald’s, KFC, and Walgreens. The available reports do not provide technical details for those other incidents, but the list situates ShinyHunters as a group associated with repeat targeting across sectors.

    Source: mint – technology

  • Anthropic’s Claude for Word brings document-aware AI to Microsoft Word workflows—beta for Team and Enterprise

    This article was generated by AI and cites original sources.

    Anthropic has launched Claude for Word, a beta add-in that brings Claude AI directly into Microsoft Word document workflows. As described in a Microsoft Marketplace listing and reported by mint, the tool is available only to Team and Enterprise subscribers and is designed to help users draft, edit, and revise documents from a Word sidebar—while preserving formatting and enabling Word-native review flows such as tracked changes.

    For organizations already evaluating AI assistants, the technical question is less about whether AI can write text and more about how it integrates with existing document structures: citations that jump to specific sections, semantic navigation across provisions, and editing that remains compatible with Word’s formatting and revision model. Claude for Word’s feature set points to a workflow-first approach to AI assistance rather than a standalone chatbot.

    What Claude for Word does inside Microsoft Word

    According to Anthropic’s description in a Microsoft Marketplace listing, Claude for Word “reads complex multi-section documents, works through comment threads, and edits clauses while preserving your formatting, numbering, and styles.” The add-in lets users perform those actions without leaving Word by working from the sidebar.

    mint reports that Claude for Word can draft, edit, and revise documents directly from that sidebar. One of the key integration details is that the assistant is intended to preserve the user’s formatting. In Word terms, this matters because document editing is often tightly coupled to styles, numbering schemes, and layout conventions—especially in legal and finance work.

    The tool also supports multiple interaction modes that map to common professional tasks:

    • Ask questions about documents, including summarizing commercial terms or locating specific clauses.
    • Iterative editing, where a user selects a passage and instructs Claude to revise it.
    • Tracked changes via a “suggested edits mode,” so edits appear in Word’s native review pane.
    • Comment-driven editing by reading comment threads, editing anchored text, and replying to the thread with explanations.

    These features suggest a design goal: keep the AI’s output aligned with the same mechanisms users already rely on for collaboration and review in Word, rather than forcing a separate export-and-repaste process.

    Document-aware Q&A and semantic navigation

    Claude for Word includes a Q&A workflow that mint describes as producing answers with clickable citations. The citations are intended to navigate directly to the referenced section, which is a notable difference from generic chat responses that may not provide direct traceability to source text.

    mint also highlights semantic navigation. In this mode, users can find provisions by theme using prompts such as “Find every provision touching data retention” and “Where does this agreement address termination?” The presence of theme-based prompts implies that the assistant is expected to interpret document structure and meaning well enough to retrieve relevant clauses, not just search for surface keywords.

    For teams that work with contracts, policies, or other multi-section documents, this kind of navigation could reduce time spent manually scanning long files. However, the source also frames Claude for Word as beta, so observers may watch for how consistently citations and clause retrieval work across different document types and formatting conventions.

    Editing that preserves structure, plus Word-native review

    Beyond Q&A, Claude for Word is built around editing flows that attempt to respect document structure. Anthropic says the assistant can perform iterative editing by selecting a passage and issuing instructions. The example prompt provided in the source—“tighten this paragraph and drop the passive voice”—illustrates how users can target a specific area while asking for stylistic or grammatical changes.

    mint reports that Anthropic’s approach is to have Claude edit only the given section while keeping surrounding styles, formatting, and numbering unchanged. In professional documents, this kind of “localized edits” behavior is important because global formatting changes can create downstream issues for later revisions, numbering, and consistency.

    The add-in also integrates with Word’s review mechanics. In “suggested edits mode,” Claude’s edits appear as tracked revisions: the original text is shown as a deletion and the new text as an insertion. This is designed to let users accept or reject each change in Word’s native review pane, preserving the familiar human-in-the-loop editing pattern.

    Separately, Claude for Word supports comment-driven editing. mint says it can read comment threads, understand the anchored text, and then systematically work through open comments—editing the passage and replying to the thread with an explanation of changes. In practice, this could help align AI assistance with team review processes where comments are the coordination unit.

    Cross-app context, beta limits, and security warnings

    Claude for Word is not isolated to Word. mint reports cross-app functionality in which Claude for Word shares context with Excel and PowerPoint add-ins. The source gives examples: asking the AI to pull numbers from an Excel model into a Word memo, or summarising a document into presentation slides without manual copy-pasting.

    That cross-app context matters because document work frequently depends on data already structured in spreadsheets and existing slide decks. While the source does not provide performance metrics, the stated capability indicates an intent to reduce friction between tools in a Microsoft-centric workflow.

    At the same time, Anthropic’s beta positioning comes with constraints. mint says Claude for Word is not recommended for final client deliverables, litigation filings, or documents containing highly sensitive data without proper human verification. These limits reflect a cautious approach to AI-assisted document production when stakes are high.

    The source also warns about “prompt injection attack risks.” Anthropic advises users to only use the AI tool with trusted documents, since files from external sources could contain hidden malicious instructions designed to trick the AI into modifying critical content or extracting sensitive information. This is a concrete reminder that integrating AI into document editing pipelines changes the threat model: the document itself can act as an input vector.

    For users setting up the add-in, mint outlines a straightforward installation path. Individual users can navigate to the Claude for Word listing on the Microsoft Marketplace, click “Get it now”, then open Microsoft Word and activate the add-in (Tools > Add-ins on Mac or Home > Add-ins on Windows). Users then sign in with their Claude account.

    Overall, Claude for Word’s feature set—citations with navigation, theme-based clause retrieval, section-level editing that preserves formatting, and tracked changes—suggests an effort to make AI assistance fit inside established Word workflows. The beta status and security guidance also indicate that practical deployment will likely depend on organizational review processes and document trust boundaries.

    Source: mint – technology