Category: General

  • Indian “new-age” tech stocks surge as adtech, logistics, fintech and EV updates draw buying

    This article was generated by AI and cites original sources.

    Indian equities rallied this week after a reported temporary ceasefire between the US, Israel and Iran improved market sentiment. Within that broader rebound, so-called “new-age” tech stocks added close to $10 billion in cumulative market capitalisation, ending the week at $129.09 billion, according to coverage from Inc42 Media published on 2026-04-11. The week’s stock moves also reflected a steady stream of operational updates across sectors—EV manufacturing technology, adtech tooling, e-commerce logistics, and insurance/fintech reporting—suggesting how product execution and platform capabilities can translate into investor demand.

    How the rally mapped onto “new-age” tech performance

    Inc42 Media’s weekly snapshot describes participation across a wide set of companies. It reported that 52 new-age tech companies rose in a range of 0.63% to over 44% during the week, with three notable exceptions: Swiggy (down 0.18%), Go Digit (down 0.36%) and Macobs Technologies, described as the parent of Menhood (down 2.32%).

    At the top of the list, Inc42 Media said Ola Electric emerged as the biggest gainer, with shares surging 44.27% to end the week at ₹40.9. It also cited fresh highs for Groww, Shadowfax, Ather Energy, Honasa Consumer and Lenskart.

    Beyond the “new-age” cohort, the article noted that larger companies including Nykaa, Delhivery, Meesho and Eternal ended the week “in the green.” While the piece does not quantify how much of the week’s gains came from broader market factors versus company-specific execution, the mix of winners across multiple tech-adjacent categories (consumer platforms, logistics, fintech/insurance, and EV) points to a market willing to price in technology roadmaps and near-term performance signals.

    Adtech and platform tooling: Mobavenue AI Tech joins the coverage

    Inc42 Media also highlighted a coverage expansion: starting this week, it included Mobavenue AI Tech, described as an adtech company based in Mumbai. The firm “provides businesses an AI-powered advertising and consumer growth platform,” and its shares gained 1.66% to end the week at ₹1,210.8.

    From a technology standpoint, the description matters because it frames the product as an operational layer for advertising and growth—an area where AI typically influences targeting, measurement, and optimization. The source does not provide technical details (such as model types, data sources, or deployment architecture), so any deeper inference would go beyond what is stated. Still, observers may watch whether investor attention to an AI adtech platform corresponds with tangible product milestones or performance updates in future reporting, given that the company was singled out both for its platform positioning and for its week’s share movement.

    EV manufacturing tech: Ola Electric’s LFP cell readiness and Gigafactory integration

    EV technology was a clear theme in the week’s stock story. Inc42 Media said Ola Electric’s shares jumped over 44% this week, after gaining close to 17% the prior week. It also connected the rally to earlier operational performance in the E2W (electric two-wheeler) market in March, including claims that daily orders in the last week of March exceeded 1,000 units and that registrations spiked 150% MoM to 10,117 units.

    But the article also attributes investor interest this week to updates on Ola Electric’s Gigafactory, specifically its battery technology roadmap. It reported that the company announced its LFP (lithium iron phosphate) cell is ready for deployment. It further stated that the integration of its 46100 LFP cell—described as bigger than its current NMC (nickel manganese cobalt) cell—will begin from next quarter. The source includes a quote from a company spokesperson referencing “the readiness of our LFP 46100 cell” as a “pivotal moment” and tying it to “the strong progress at our Gigafactory” and “proven performance of our 4680 cells on the road.”

    Even without additional engineering specifics, the technology implication is straightforward: a battery cell readiness announcement and a stated integration timeline are concrete signals about manufacturing execution. For a sector where supply chain and production scaling often determine costs and throughput, a declared transition from one chemistry (NMC) to another (LFP) and the plan to integrate a specific form factor (46100) could be the kind of milestone market participants look for when assessing execution risk. The source does not quantify impact on unit economics or production capacity, so any effect on margins remains unaddressed in the article.

    Fintech and insurance reporting: Aye Finance and PolicyBazaar Insurance Brokers leadership change

    Fintech and insurance-adjacent companies also featured. Inc42 Media said Aye Finance reported a 27% YoY rise in AUM (assets under management) to ₹7,044 Cr in FY26, alongside “improvement in asset quality.” It reported that GNPA (gross non-performing assets) eased to 4.77% in Q4. While this is financial reporting rather than a product feature, it can still be read as a proxy for how risk models and underwriting processes are functioning—particularly in lending businesses where asset quality depends on the performance of credit decisioning systems.
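
    To make the reported figures concrete, here is a minimal back-of-the-envelope check using only the numbers cited by Inc42 Media. The implied prior-year AUM and the rupee value of gross NPAs are inferences from the reported ratios, not figures stated in the source, and the sketch assumes for illustration that AUM approximates the gross loan book.

    ```python
    # Back-of-the-envelope check on the Aye Finance figures cited by Inc42 Media.
    # The implied FY25 AUM is inferred from the reported 27% YoY growth; it is
    # not a number stated in the source.

    aum_fy26_cr = 7_044   # reported AUM, in INR crore
    yoy_growth = 0.27     # reported 27% YoY rise

    implied_fy25_aum_cr = aum_fy26_cr / (1 + yoy_growth)
    print(f"Implied FY25 AUM: ~Rs {implied_fy25_aum_cr:,.0f} Cr")  # ~Rs 5,546 Cr

    # GNPA is conventionally gross non-performing assets as a share of the gross
    # loan book; assuming AUM approximates that book, 4.77% implies:
    gnpa_ratio = 0.0477
    print(f"Implied gross NPAs: ~Rs {aum_fy26_cr * gnpa_ratio:,.0f} Cr")  # ~Rs 336 Cr
    ```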

    The article also reported that Tarun Mathur resigned as CEO and principal officer of PolicyBazaar Insurance Brokers, described as the insurance broking arm of PB Fintech, effective immediately. It said Sajja Praveen Chowdary will succeed him. The source does not connect the leadership change to any specific technology initiative, but for a platform-driven insurance broking business, leadership transitions can sometimes align with product and systems priorities (such as distribution tooling, underwriting workflow integration, or data-driven pricing). Any such linkage would be speculative beyond the article’s stated facts.

    Market macro and rates: RBI’s neutral stance and why it matters for tech stocks

    Inc42 Media attributed the broader rally to easing geopolitical risk, but it also included macroeconomic context from India’s central bank. It said a 15-day ceasefire in West Asia improved investor confidence, and that crude oil prices slipped below the $100 mark, easing inflation concerns and triggering a “strong rebound” across global markets. It also cited equity performance: Sensex and Nifty 50 gained close to 6% each, closing at 77,550.25 and 24,050.6, respectively.

    On policy, the article stated that the RBI’s Monetary Policy Committee maintained the repo rate at 5.25% and reiterated a “neutral stance.” It also reported that the RBI revised FY26 GDP growth to 7.6% and projected FY27 growth at 6.9%, while raising inflation projections to 4.6% for FY27. The source said elevated energy and commodity prices, plus supply shocks from disruptions in the Strait of Hormuz, would act as a drag on domestic production in 2026-27.

    It included a quote from Vinod Francis, CFO of South Indian Bank, saying the policy “provides much-needed stability,” that a “steady rate environment” supported by adequate liquidity should continue to support credit growth across retail and MSME segments, and that the policy strikes a “prudent balance” between growth support and inflation vigilance. For tech companies—especially those reliant on consumer demand and credit ecosystems—rates and liquidity conditions can influence both funding costs and customer acquisition dynamics. The article does not provide a direct causal model, but the inclusion of RBI’s stance suggests why investors may have been more willing to buy growth-oriented platforms during a period of improved sentiment.

    Overall, Inc42 Media’s week reads like a composite of market-wide tailwinds and sector-specific technical signals: AI platform positioning in adtech, battery cell readiness and manufacturing integration in EVs, and operational reporting in fintech and insurance. As the broader market steadies, investors may look for whether these technology milestones continue to produce measurable execution outcomes in subsequent quarters.

    Source: Inc42 Media

  • India’s MeitY Extends Comments Deadline for Draft IT Rule Amendments—Tightening Platform Content Moderation Requirements

    This article was generated by AI and cites original sources.

    Deadline Extended for Rule Amendment Feedback

    India’s Ministry of Electronics and Information Technology (MeitY) has extended the deadline for public feedback on draft amendments to the Intermediary Guidelines and Digital Media Ethics Code Rules, 2021. According to Inc42 Media, stakeholders can now submit comments on the proposed changes until April 29, after the draft was published on March 31 and the earlier comment window had been set to close on April 12. The draft revisions are designed to establish faster content moderation timelines once a platform has “actual knowledge” of unlawful content.

    Compliance Requirements and Content Takedown Timelines

    The draft amendments introduce operational compliance requirements that affect how platforms manage user-generated content. Inc42 Media reports that the proposed changes would require social media intermediaries—specifically naming Meta, Google, and X—to comply with a broader range of government-issued instruments.

    Issued under Section 87 of the IT Act, 2000, the draft expands the types of documents that can drive platform obligations. Inc42 Media lists the instruments as advisories, clarifications, orders, directions, standard operating procedures, and codes of practice connected to implementing the rules.

    A key operational requirement is the proposed stricter content moderation timeline. Inc42 Media states that platforms hosting content that could potentially facilitate “unlawful acts” must remove such material within three hours of gaining “actual knowledge.” The draft defines “actual knowledge” as arising either through a court order or via a reasoned written notice issued by an authorised government official.

    Safe-Harbour Protections and Compliance Risk

    Inc42 Media reports that failing to comply with the rules could result in intermediaries losing safe-harbour protections from liability for third-party content. This linkage between moderation timing and legal risk establishes the operational importance of how platforms interpret “actual knowledge” and how quickly they can act.

    The draft’s structure indicates that platforms may need processes to validate notice authenticity, capture the relevant scope of content, and route enforcement actions within the specified timeframe. The new obligations are time-bound and condition-driven.
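
    To illustrate the kind of time-bound, condition-driven process the draft implies, the sketch below encodes the reported trigger (“actual knowledge” via a court order or a reasoned written notice from an authorised official) and the reported three-hour window as a deadline check. The field names, the validation steps, and the idea of encoding the rule in software are illustrative assumptions, not anything specified in the draft.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical sketch of a time-bound takedown intake, based on the draft's
    # described trigger and its reported three-hour removal window. The notice
    # schema and validation logic are assumptions for illustration only.

    TAKEDOWN_WINDOW = timedelta(hours=3)
    VALID_SOURCES = {"court_order", "authorised_official_notice"}

    @dataclass
    class Notice:
        source_type: str           # e.g. "court_order"
        reasoned_in_writing: bool  # the draft requires a *reasoned written* notice
        content_ids: list[str]     # scope of the material identified
        received_at: datetime

    def actual_knowledge(notice: Notice) -> bool:
        """Does this notice meet the draft's 'actual knowledge' trigger?"""
        return notice.source_type in VALID_SOURCES and notice.reasoned_in_writing

    def removal_deadline(notice: Notice) -> datetime | None:
        """If the trigger is met, the three-hour clock starts at receipt."""
        return notice.received_at + TAKEDOWN_WINDOW if actual_knowledge(notice) else None
    ```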

    Digital Rights Organizations Raise Concerns

    The draft amendments have drawn criticism from digital rights organizations. Inc42 Media quotes the Internet Freedom Foundation (IFF), which stated that the rules “creates a sweeping power for MeitY to issue binding instruments which are not anchored in law such as clarifications, advisories, directions, SOPs, codes of practice, and guidelines that intermediaries must comply with as a condition of safe harbour under Section 79 of the IT Act.”

    This critique targets the governance model for moderation obligations. If compliance requirements can be driven by instruments that are not “anchored in law,” platforms may face ongoing changes to enforcement criteria and processes.

    Inc42 Media also reports that IFF argued the proposals came “at a time of fear and increased government directed censorship,” including concerns about online political speech. The technological implication is that moderation timelines and takedown obligations could affect how platforms treat user-generated speech categories.

    Parliamentary Debate on Platform Features and Potential Obligations

    Beyond the draft’s takedown and compliance framework, Inc42 Media reports a related debate involving social media features. Member of Parliament Nishikant Dubey stated that the Parliament’s Standing Committee on Communications and Information Technology indicated that social media platforms like X should either remove the community notes feature or pay a “publisher’s tax.”

    Inc42 Media reports IFF’s response: it stated that “no Australian statute treats a ‘Community Notes’ style feature as converting a platform into a ‘publisher’ liable to any levy or tax.”

    From a technology perspective, the community notes discussion indicates how information systems inside platforms—such as user or crowd-sourced context features—can be interpreted by regulators in ways that affect platform obligations. The source does not confirm any rule changes tied to community notes specifically; it reports the MP’s claim and IFF’s rebuttal.

    Government Position and Ongoing Consultation

    Inc42 Media reports that electronics and IT secretary S Krishnan characterized the amendments as “purely clarificatory and procedural” and stated they do not expand the government’s authority over online content. He also indicated that oversight of news content online would shift to the Ministry of Information and Broadcasting (MIB), which already regulates registered digital publishers, as user-generated news content becomes more common online.

    In a meeting that IFF founder and director Apar Gupta attended, Krishnan indicated that some changes are being made based on feedback, including greater definitional clarity around terms like “news” and “current affairs.” The source does not specify the exact wording changes, but indicates that the draft is not static during the consultation window.

    With the comment deadline now extended to April 29, stakeholders may focus on the draft’s operational definitions—particularly “actual knowledge”—and on how compliance instruments could affect moderation workflows.

    Source: Inc42 Media

  • X revamps creator revenue sharing to prioritize original posts and reduce engagement farming

    This article was generated by AI and cites original sources.

    Elon Musk-led social platform X is changing how it pays creators, aiming to reduce incentives for engagement farming and to direct revenue sharing toward original, high-quality content that adds value to the Timeline. According to X Product Head Nikita Bier, the update for the current payout cycle will experiment with tools to identify original authors and will derank low-quality content—an approach that targets the mechanics of monetization rather than the content itself. The move follows months of criticism that X’s earlier payout rules rewarded accounts posting low-quality viral videos or clickbait to maximize impressions.

    What X is changing in its monetization mechanics

    X Product Head Nikita Bier outlined the rationale and mechanics of the revamp in a post on X. Bier stated that for the current payout cycle, X is “experimenting with new tools to identify original authors of content and allocating a portion of revenue to them.” The update also includes deranking low-quality content alongside incentivizing original, high-quality content that brings new value to the Timeline.

    Bier framed the policy shift in terms of how X’s revenue sharing should work. He wrote that reposts and commentary would “always be a core pillar of X,” but that the Revenue Sharing programme should not simply reward the accounts that “helped [content] travel furthest.” Instead, the programme should “reward[] the effort it takes to produce something,” with the stated goal of building “a richer Timeline.” Bier also said that the Revenue Sharing programme “will continue to evolve” to encourage creators to post “their best content” to X.

    Technically, the key change is the introduction of tools designed to identify original authors. While the source does not describe the specific technical method—such as how X determines originality or how it handles reposts, remixes, or commentary—the emphasis on “tools to identify original authors” indicates a shift toward attribution mechanisms within the payout pipeline.
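
    Since the source describes no technical method, the following is only a schematic sketch of what “allocating a portion of revenue” to identified original authors could look like inside a payout pipeline. The 50% split, the attribution map, and all function names are invented for the example; X has disclosed none of these details.

    ```python
    # Schematic illustration only: X has not described its method. We assume a
    # mapping from each monetized post to an identified original author, plus a
    # tunable share of that post's revenue redirected to the originator.

    ORIGINAL_AUTHOR_SHARE = 0.5  # hypothetical "portion of revenue"; not a disclosed figure

    def allocate_revenue(post_revenue: dict[str, float],
                         poster_of: dict[str, str],
                         original_author_of: dict[str, str]) -> dict[str, float]:
        """Split each post's revenue between the account that posted it and the
        identified original author (if they differ)."""
        payouts: dict[str, float] = {}
        for post_id, revenue in post_revenue.items():
            poster = poster_of[post_id]
            originator = original_author_of.get(post_id, poster)
            if originator == poster:
                payouts[poster] = payouts.get(poster, 0.0) + revenue
            else:
                payouts[poster] = payouts.get(poster, 0.0) + revenue * (1 - ORIGINAL_AUTHOR_SHARE)
                payouts[originator] = payouts.get(originator, 0.0) + revenue * ORIGINAL_AUTHOR_SHARE
        return payouts

    # Toy run: a repost earning $10 splits between the reposter and the creator.
    print(allocate_revenue({"p1": 10.0}, {"p1": "reposter"}, {"p1": "creator"}))
    # {'reposter': 5.0, 'creator': 5.0}
    ```

    The design point the toy run illustrates: attribution, once computed, changes who gets paid rather than what gets posted—consistent with Bier’s framing that reposts remain a “core pillar” while monetization incentives shift.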

    Why engagement farming became a focus

    The revamp arrives after months of criticism of X for promoting engagement farming. In this practice, accounts post low-quality viral videos or clickbait to inflate impression counts on their posts, and impressions were a key factor in the X creator payout. In other words, the incentive structure rewarded distribution volume over content quality.

    Engagement farming becomes a systems problem when monetization relies on signals that can be gamed. Because X’s creator payout was tied to impressions, it created incentives to spread low-quality content. By changing what counts and how revenue is allocated, X is attempting to modify the feedback loop between content performance metrics and payout outcomes.

    The updates could reduce the volume of clickbait-style posts while preserving legitimate reposting and discussion. Bier’s language that reposts and commentary remain a “core pillar” suggests X is attempting to preserve conversational distribution while adjusting monetization incentives.

    Prior payout changes: reply spam and impression counting

    This update is not X’s first adjustment to payout criteria. Earlier in the year, Bier announced another change: X stopped counting impressions on replies toward monetization payout in order to reduce “reply spam.” The platform now counts only organic views on the main homepage timeline toward payout.
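
    As a minimal sketch of the surface-filtering change described here—counting only organic views on the main home timeline toward payout—with the event schema entirely assumed:

    ```python
    # Minimal sketch of surface-based impression filtering, per the described
    # change: only organic views on the main home timeline count toward payout.
    # The event schema ("surface", "organic") is an assumption for illustration.

    def payout_eligible_impressions(events: list[dict]) -> int:
        return sum(
            1
            for e in events
            if e.get("surface") == "home_timeline" and e.get("organic", False)
        )

    events = [
        {"surface": "home_timeline", "organic": True},   # counts
        {"surface": "reply_thread", "organic": True},    # excluded: reply surface
        {"surface": "home_timeline", "organic": False},  # excluded: not organic
    ]
    print(payout_eligible_impressions(events))  # 1
    ```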

    From a product perspective, these changes indicate that X’s creator payout system is sensitive to how different surfaces contribute to impressions. Moving from “replies” to “main homepage timeline” reduces the ability to manufacture payouts through low-effort reply activity. The new revamp extends that pattern by shifting revenue attribution from whoever “helped [content] travel furthest” toward the original author, using tools to identify who created the content in the first place.

    The sequence indicates that X is iterating on both (1) the signal sources that feed payout (organic views on the main homepage timeline) and (2) the attribution logic that determines who receives revenue for performance.

    Regional weighting proposal and leadership intervention

    The source also highlights an internal policy decision. Bier had proposed a change to the revenue sharing programme under which X would give extra weight to impressions from the poster’s home region, intended to encourage content that resonates with people in that country. That proposal was vetoed by Elon Musk. Following criticism, Musk said X would “pause moving forward with this until further consideration.”

    This detail shows how creator monetization rules can intersect with questions of audience targeting, fairness, and localization. The veto indicates that X’s monetization strategy is actively being shaped, with leadership intervention when proposed changes trigger backlash.

    For industry observers, this suggests that payout programs can become a high-stakes policy surface: small changes to how impressions are weighted or counted can have significant effects on creator behavior. The combination of deranking low-quality content, experimenting with original-author identification, and revising impression sources reflects a broader trend in platform monetization—moving from simple performance metrics toward more complex ranking and attribution systems.

    What comes next

    The source notes that eligibility for X creator payout depends on meeting X’s monetization criteria, though specific criteria are not detailed in the available information. The described direction is specific: X will experiment with tools to identify original authors, allocate a portion of revenue to them, and derank low-quality content—while keeping reposts and commentary central to the platform.

    Given that X has already adjusted payout counting to reduce “reply spam,” the current update represents another iteration in the same design loop: modify the signals that drive payouts, observe creator behavior, then refine. Whether these changes measurably reduce engagement farming will likely depend on how well X’s originality tools and deranking mechanisms align with what users and creators consider “original” and “high-quality.” The source does not provide performance results or timelines beyond the announcement of the new payout-cycle experiment.

    Source: mint – technology

  • Three OpenAI Stargate Leaders Join Meta Platforms

    This article was generated by AI and cites original sources.

    Three leaders from OpenAI’s effort to build large-scale artificial intelligence (AI) data center capacity are joining Meta Platforms, according to Tech-Economic Times. The report names Peter Hoeschele as one of the new hires and identifies him as playing a critical role in OpenAI’s Stargate initiative—an effort to set up hundreds of billions of dollars’ worth of AI data center capacity.

    The News

    Per the report, three key players from OpenAI’s effort to build AI data center capacity are moving to Meta Platforms, with Hoeschele the only one identified by name. Stargate, where he played a critical role, is described as an effort to set up hundreds of billions of dollars’ worth of artificial intelligence data center capacity.

    What This Signals

    The staffing transition reflects how AI infrastructure development depends on specialized expertise. Data center capacity—the ability to run training and inference workloads at scale—requires coordination across design, procurement, construction, and operational planning. The movement of personnel from OpenAI to Meta suggests a transfer of infrastructure-building experience between organizations.

    In AI deployments, compute capacity and supporting systems that maintain that capacity at scale are central resources. The scale described—hundreds of billions of dollars—underscores the capital intensity of the infrastructure layer. Large-scale capacity expansion typically involves questions about how quickly new capacity can be brought online, how efficiently it can be operated, and how reliably it can support large workloads.

    Talent and Infrastructure Knowledge

    The report indicates that organizations building AI infrastructure compete for experienced planners and leaders. Peter Hoeschele is explicitly identified, while the other two key players are not named in the source material. The characterization of Hoeschele’s role as critical within Stargate suggests he was involved in coordinating infrastructure planning and execution.

    Hiring people with prior experience on large AI data center projects could reduce the learning curve for new buildouts. However, the source does not specify what responsibilities Hoeschele will take on at Meta.

    Industry Context

    The hiring move provides a directional signal: Meta is adding leadership from an organization pursuing large-scale AI data center capacity. This could reflect a broader trend in which AI infrastructure competition manifests through staffing decisions, not just hardware procurement.

    The source material does not provide enough information to confirm whether Meta’s hiring is tied to a specific new buildout, a change in timeline, or a shift in technical approach. The most direct conclusion from the source is that Meta is bringing in talent connected to OpenAI’s Stargate effort.

    What to Watch

    The most relevant follow-up would be whether Meta publicly describes how these hires fit into its AI compute planning. Observers may watch for disclosures about AI infrastructure scale, data center capacity expansion plans, or organizational changes that connect the hires to measurable technical outcomes.

    Source: Tech-Economic Times

  • Dutch regulators approve Tesla’s supervised self-driving on highways and city streets

    This article was generated by AI and cites original sources.

    Dutch regulators approved Tesla’s supervised self-driving software for use on highways and city streets, marking a European first for the electric car maker. The approval requires continued human supervision, positioning the software as an assisted driving capability rather than fully autonomous operation in the Netherlands. Tesla is seeking similar approval across the rest of the European Union.

    What the Dutch approval covers

    Dutch regulators approved Tesla’s self-driving software under a specific operating model: it can be used while a person remains responsible for oversight. The approval spans two major road environments—highways and city streets—which differ in traffic patterns, road geometry, and the types of risks that drivers must be prepared to handle.

    The approval is framed as requiring human supervision, meaning the regulatory permission is tied to an ongoing safety structure. In practical terms, this indicates that the system’s deployment is contingent on driver intervention capability: the software may perform driving tasks, but supervision remains part of expected operation.

    The significance of human supervision requirements

    Self-driving systems are evaluated not only on what they can detect and control, but on how they behave when conditions become difficult. The Dutch decision is notable because it explicitly defines the allowed use as supervised. That framing has implications for how the software is expected to function in the field: the driver’s role is not optional, and the system’s responsibility boundaries are part of the approval.

    For technology observers, this approval reflects a particular deployment pattern—one where the system handles subsets of driving tasks while a human remains actively accountable. The approval’s structure indicates that regulators accepted this approach for both highway and urban driving contexts.

    Supervised deployment is where real-world testing, iterative improvements, and compliance processes typically converge. The approval’s structure suggests that regulators are establishing a predictable relationship between automated behavior and human oversight.

    A European first and potential reference point for other regulators

    The Dutch approval is described as a European first for Tesla’s supervised self-driving on these road types. This positions the Netherlands ahead of other EU jurisdictions in granting permission for this form of supervised self-driving.

    Tesla stated it hopes to see similar action from the rest of the European Union. The company is seeking regulatory approval that can be extended or mirrored across multiple EU markets.

    From an industry standpoint, this approval could influence how other regulators evaluate supervised driving systems. If the Dutch approval becomes a reference point, regulators in other countries may compare their own requirements to the Dutch approach, particularly regarding the supervision condition and the scope of roads covered.

    Implications for deployment and product strategy

    The Dutch approval places supervised self-driving at an intersection of regulatory scrutiny and commercial deployment. While specific implementation timelines are not detailed in the source material, the approval connects to a broader objective: wider adoption in the EU.

    If Tesla obtains comparable approvals elsewhere, the company could adjust rollout sequencing, focusing first on markets where regulators accept the supervised model for highway and city street use. Conversely, if other regulators interpret “required human supervision” differently, Tesla may face variability in deployment requirements across countries.

    The approval tied to specific road contexts suggests that regulators may expect consistent performance and operational safeguards in both highway and urban environments.

    Summary

    Dutch regulators approved Tesla’s self-driving software for use on highways and city streets under required human supervision, marking a European first. Tesla is seeking similar approvals across the EU, making this decision an early reference point for how supervised automated driving may be permitted across Europe.

    Source: Tech-Economic Times

  • IBM Settles $17 Million U.S. Government Probe Over DEI Practices

    This article was generated by AI and cites original sources.

    IBM has agreed to pay $17 million to settle a U.S. government probe tied to the company’s diversity, equity and inclusion (DEI) practices, according to Tech-Economic Times. The investigation is part of increased scrutiny under President Donald Trump’s administration, which has focused on DEI during his second term in office. While the dispute centers on corporate policy, the technology industry implications are noteworthy: compliance risk tied to workplace programs can affect how large-scale employers structure internal processes, vendor relationships, and the people systems that ultimately support product and service delivery.

    Settlement Details

    IBM reached the settlement by agreeing to pay $17 million to resolve a U.S. government probe over its DEI practices, according to Tech-Economic Times. The source does not provide additional details about the probe’s methods, the specific DEI practices under review, or the compliance mechanisms IBM used. It also does not include government findings or IBM statements.

    For technology companies, DEI-related probes can matter because many operational functions that support engineering and delivery—recruiting, training, internal mobility, and workforce planning—are closely tied to how organizations manage hiring and development. Even when a dispute is not about code or systems directly, it can translate into changes to internal governance and documentation, as well as adjustments to how companies communicate program goals and track outcomes.

    Compliance and Operational Implications

    The probe reflects the Trump administration’s focus on DEI during his second term. In technology, workplace policy is connected to execution: staffing pipelines and internal programs influence how teams scale, how knowledge is transferred, and how organizations maintain continuity across product cycles. From an industry perspective, the key point is the compliance and operational uncertainty that can follow when government attention increases.

    Settlement outcomes like this may prompt technology leaders and counsel to revisit how they design internal programs and how they document decision-making processes. The source does not specify whether IBM will change its DEI approach going forward, but the settlement suggests the company determined that resolving the probe through payment was preferable to continued litigation or further investigation. For other technology employers, observers may watch whether similar probes lead to changes in internal governance structures, program reporting practices, or how HR and legal teams coordinate with operational leadership.

    Broader Enforcement Context

    Tech-Economic Times characterizes the probe as occurring within an environment where the Trump administration has focused on DEI during his second term. The report does not enumerate specific enforcement tools, agencies involved, or the scope of this focus. The framing indicates a policy environment where DEI-related compliance risk is heightened.

    This matters for the tech sector because large organizations often operate under multiple overlapping compliance regimes—workforce rules, contracting expectations, procurement requirements, and employment law. When a government administration shifts enforcement posture, companies may re-evaluate how they align workforce programs with the administration’s priorities. Even without details from the source about the underlying legal theory, the settlement amount and the fact that the probe is government-led indicate a compliance process with sufficient traction to reach a monetary resolution.

    Potential Industry Effects

    Because the source offers limited detail, any industry implications should be understood as analysis rather than confirmed reporting. A $17 million settlement may signal to the market that DEI practices—as interpreted by regulators—can become a material risk category for technology employers. This could influence how companies allocate legal and compliance resources, how they structure HR program documentation, and how they manage internal review cycles for policies that touch hiring, advancement, and training.

    The source does not indicate whether IBM’s technology teams are directly involved in the dispute or whether there are changes to IBM’s products, engineering processes, or AI development practices. This appears to be primarily a corporate governance and employment-policy issue with potential effects on staffing and internal operations rather than a direct technical shift in IBM’s systems.

    For the wider tech industry, the settlement highlights how workplace governance can become intertwined with regulatory scrutiny as technology companies grow into large employers with global workforces. This can affect internal policies and how firms communicate program goals and prepare for audits or investigations. Other technology companies may watch for whether additional settlements or enforcement actions follow, though the source itself does not mention other companies or subsequent steps.

    What Comes Next

    Tech-Economic Times’ report centers on the settlement: IBM’s agreement to pay $17 million to resolve a U.S. government probe over DEI practices, with the context tied to the Trump administration’s focus on DEI during his second term. The source does not provide follow-on details, such as whether IBM admitted wrongdoing, whether there are specific remediation steps, or whether the company will alter particular DEI programs.

    In the near term, industry watchers may focus on any additional disclosures from IBM or the government about the settlement’s terms and any compliance requirements attached to it. The settlement itself is a concrete data point about how DEI-related scrutiny can produce financial outcomes for a major tech employer, underscoring that corporate policy risk can become operationally consequential.

    Source: Tech-Economic Times

  • South Africa Drafts AI Policy: Institutions, Incentives, and Governance Framework

    This article was generated by AI and cites original sources.

    South Africa has published a draft AI policy through its Department of Communications and Digital Technologies, setting out a framework for how artificial intelligence is developed and deployed in the country. According to Tech-Economic Times, the policy aims to position South Africa as a “continental leader in AI innovation” while addressing ethical, social, and economic challenges—reflecting how governments are increasingly linking AI capability building with governance frameworks.

    Policy Framework and Objectives

    The draft policy, published by the Department of Communications and Digital Technologies, frames AI as both a technical capability and a domain requiring governance. This approach reflects the recognition that AI systems can affect decision-making across society and introduce both benefits and risks. The policy addresses multiple categories of concerns: ethical, social, and economic.

    The policy structure indicates a dual focus on innovation and risk management. The “continental leader in AI innovation” framing emphasizes capability development, while the explicit mention of ethical and social challenges indicates attention to governance. In practice, this combination typically requires technical standards, evaluation approaches, and institutional oversight.

    Institutions and Incentives as Policy Tools

    A central element of South Africa’s draft policy is the proposal for new institutions and incentives. These mechanisms serve as more than administrative structures; they directly influence how AI is developed and adopted.

    New institutions can enable:

    • Policy-to-technical translation: converting high-level ethical or social goals into concrete requirements that developers and deployers can implement.
    • Evaluation capacity: establishing processes for assessing AI systems against stated criteria.
    • Coordination: aligning government priorities with industry and research activities.

    Incentives can shape the technical ecosystem by influencing which types of AI projects attract funding, attention, or adoption support. While the source does not specify which incentive categories South Africa’s draft will emphasize, the policy includes both institutional proposals and incentive mechanisms positioned alongside the ethics-and-society framework.

    Continental Leadership as a Policy Objective

    The draft policy’s stated aim—positioning South Africa as a “continental leader in AI innovation”—treats AI development as a capability-building and competitiveness project. In technology terms, leadership typically translates into measurable capacities such as research output, talent development, deployment maturity, and infrastructure readiness. The source does not provide specific metrics or timelines for these measures.

    The policy’s dual emphasis suggests that the government expects AI innovation and AI governance to advance together. This approach recognizes that governance disconnected from engineering realities can impede adoption or fail to reduce risk, while innovation without governance can increase the likelihood that deployed systems create harm or fail to meet ethical expectations. By explicitly addressing ethical, social, and economic challenges while pursuing innovation leadership, the draft policy appears designed to integrate these two tracks within a single framework.

    Implications for South Africa’s AI Ecosystem

    The draft policy indicates that South Africa is establishing a formal AI governance framework under the Department of Communications and Digital Technologies, with proposals for new institutions and incentives and explicit attention to multiple risk and impact categories. This suggests that stakeholders—AI developers, researchers, and organizations planning deployments—may need to prepare for a regulatory environment that increasingly treats AI as a strategic sector.

    The source does not include the draft’s technical requirements, so specific compliance obligations cannot yet be predicted. However, observers may watch for how the proposed institutions translate ethical and social concerns into operational guidance—including how systems might be evaluated, how accountability could be structured, and how economic goals might be supported through incentive design. The policy’s framing indicates that economic considerations will be part of the governance conversation, which could affect priorities for deployment and investment.

    The publication of a draft AI policy indicates that South Africa is formalizing its approach to AI. This reflects a broader global pattern: governments are increasingly adopting AI strategies that combine capability building with oversight, requiring technical stakeholders to engage with policy direction rather than treating AI governance as a secondary consideration.

    Source: Tech-Economic Times

  • Commvault Explores Strategic Options After Receiving Takeover Inquiries

    This article was generated by AI and cites original sources.

    Commvault is exploring potential sale options after receiving takeover inquiries from both private equity firms and strategic buyers, according to Tech-Economic Times. The company is working with Goldman Sachs as it evaluates its options, with Commvault’s market capitalization at approximately $3.5 billion. The report positions the enterprise data management vendor at a moment when ownership changes can affect product roadmaps, integration priorities, and how customers plan for long-term support.

    What Commvault is doing—and who is involved

    Tech-Economic Times reports that Commvault, valued at roughly $3.5 billion by market capitalization, is working with Goldman Sachs to assess its options. The catalyst is a set of inquiries: the company has fielded interest from private equity firms and strategic buyers.

    The involvement of a major investment bank like Goldman Sachs typically signals that a company is conducting a structured evaluation of alternatives. However, the source material does not specify whether Commvault has entered formal negotiations, whether any offer has been made, or whether a sale is imminent.

    Why takeover interest matters for enterprise technology customers

    For customers of enterprise software, ownership transitions can affect technology timelines. Even when product development continues, the buyer’s broader strategy may influence how quickly certain features are prioritized, how support organizations are staffed, and how integration efforts are handled across existing platforms. The Tech-Economic Times report establishes the key variable: Commvault is in an active process that could change the company’s corporate direction.

    In enterprise data management and related software markets, buyers typically evaluate not just the current capabilities of a platform, but also the stability of the vendor. A sale process can introduce uncertainty during evaluation periods—customers may watch for announcements about continuity of support, product releases, and long-term maintenance. Because the source material is limited to the fact of takeover inquiries and advisory support, those customer-facing outcomes remain unknown from the report itself.

    Private equity vs. strategic buyers: different incentives

    The Tech-Economic Times report distinguishes between two categories of potential interest: private equity and strategic buyers. While the article does not describe the specific firms or their stated plans, the categories themselves suggest different incentives that could affect technology execution.

    Strategic buyers generally align acquisitions with product or platform expansion, which can lead to emphasis on interoperability, bundling, and consolidation of overlapping capabilities. Private equity interest, by contrast, may focus on financial outcomes and operational changes, which could translate into cost and efficiency initiatives that affect how engineering resources are allocated. These are industry-level patterns; the source material does not attribute any of these behaviors to the parties involved in Commvault’s case.

    What the report does provide is the presence of both interest types. That combination could mean Commvault’s technology and market position are being assessed through multiple lenses—either as an add-on to an existing strategic portfolio or as a standalone opportunity. Observers may watch how the process unfolds to see whether the inquiries result in a preferred path.

    What to watch next in the sale process

    Because Tech-Economic Times frames the situation as Commvault “exploring” sale-related options, the immediate next steps are likely to be process-driven: evaluating proposals, assessing valuation, and determining whether to proceed with a transaction. The report does not state timing, does not mention regulatory steps, and does not indicate whether a deal has been reached.

    From a technology ecosystem perspective, relevant follow-on questions—based on what is implied by the existence of takeover interest—may include whether any prospective acquirer would announce integration plans, how product support commitments would be communicated, and whether customers would see changes in deployment or roadmap priorities. The source material does not answer these questions, so they remain areas where further reporting would be needed.

    The core facts are clear: Commvault is valued at approximately $3.5 billion by market capitalization, it is consulting with Goldman Sachs, and it has received inquiries from private equity firms and strategic buyers, as described by Tech-Economic Times. For enterprise technology stakeholders, that combination typically marks the start of a period where technical continuity and strategic direction become key watchpoints.

    Source: Tech-Economic Times

  • ChatGPT May Be Classified as a ‘Very Large Search Engine’ Under EU’s Digital Services Act

    This article was generated by AI and cites original sources.

    The News

    OpenAI’s ChatGPT may soon be classified as a “very large search engine” under the European Union’s Digital Services Act (DSA), according to a report from German newspaper Handelsblatt, as summarized by Tech-Economic Times (published April 10, 2026). If the classification proceeds, the DSA would impose stricter regulations on the service. The European Commission is also reported to be reviewing user data related to the classification process, while OpenAI has declined to comment on the development.

    From Chatbot to “Very Large Search Engine”

    The proposed classification would represent a significant regulatory shift: ChatGPT would be treated not merely as a chatbot but as what the DSA formally calls a “very large online search engine.” Under the DSA framework, this designation carries substantial implications. It signals that a service’s role in information discovery and user access is significant enough to warrant higher compliance expectations.

    Handelsblatt reported the shift, citing sources, and Tech-Economic Times relayed the same information: the reclassification would mean ChatGPT would fall under the DSA and therefore face stricter rules. The report also notes that the European Commission is reviewing user data related to this classification. This detail is noteworthy because it suggests the decision may depend on observable patterns of use—how users interact with the service and how the service functions in practice as a gateway to information.

    What the Commission’s Data Review Implies for AI Systems

    While the source does not specify which datasets or metrics the Commission is evaluating, it establishes a direct link between classification and user data review. For AI companies, that connection is significant because it ties regulatory outcomes to the operational reality of deploying language models at scale.

    From a technology standpoint, user data can capture a range of interactions—such as query-like prompts, browsing-adjacent behavior, and the ways users rely on a system to retrieve or synthesize information. The source does not enumerate the exact signals, but the DSA’s “very large” tier attaches to services averaging more than 45 million monthly active users in the EU, which helps explain why a review of user data would be central to the classification: regulators may treat both the service’s scale and its “search-like” behavior as measurable.

    Observers may watch for how this classification could affect engineering priorities around data handling and compliance instrumentation. If a service is categorized under a regime designed for search and discovery, the company’s systems may need stronger controls and reporting mechanisms aligned with that role.

    Why DSA Classification Matters for Technology Operations

    The source’s focus centers on the DSA and the “very large search engine” category, but the implications for technology operations could be immediate. A reclassification can change what teams must document, monitor, and potentially modify in how a system responds to users.

    In practice, AI services combine model behavior with product features—prompt handling, response generation, ranking or selection of information sources (if any are used), and user interface patterns that shape how people interpret outputs. If regulators treat ChatGPT as a search engine, the compliance workload could extend beyond model training to include the end-to-end product pipeline: how queries are processed, how outputs are delivered, and how user interactions are tracked for oversight.

    The report also states that OpenAI declined to comment on the development. That lack of comment could reflect uncertainty during review, internal assessment, or a decision to wait for more concrete guidance. For the industry, the absence of confirmation means that engineers and compliance teams may need to plan for multiple scenarios: one in which the classification proceeds and one in which it does not.

    What to Monitor Next

    Because the source describes the situation as a set of developments—classification expectations, a Commission review of user data, and a company declining to comment—the next steps are likely to be procedural and evidence-driven. The outlet’s account points to the EU Commission’s review as the immediate focus.

    For tech audiences, the key watch items would be: whether the European Commission finalizes the “very large search engine” status for ChatGPT, what user-data elements are considered relevant to that determination, and how OpenAI responds once the regulatory boundaries become clearer. The source does not provide timelines beyond the article’s publication date of April 10, 2026, so specific deadlines cannot be inferred from the text.

    More broadly, this case could signal how regulators may interpret AI-driven information services. If ChatGPT’s functionality is treated similarly to search engines, other AI systems that function as information finders or interpreters could face similar scrutiny under the DSA—though the source does not mention other companies, so any broader extrapolation should be treated as analysis rather than reported fact.

    Bottom Line

    According to Handelsblatt, as reported by Tech-Economic Times, ChatGPT may be classified as a very large search engine under the EU Digital Services Act. That classification would bring stricter regulation, while the European Commission reviews user data connected to the classification. OpenAI has declined to comment, leaving the outcome contingent on the Commission’s review.

    Source: Tech-Economic Times

  • US Treasury Meeting Addresses Bank Risk Management for Anthropic’s Mythos AI Model

    This article was generated by AI and cites original sources.

    On Tuesday in Washington, the US Treasury Department hosted a meeting focused on how banks should manage risks associated with Anthropic model deployments—particularly a model referred to as Mythos and similar large AI systems. According to Tech-Economic Times, the meeting was aimed at ensuring bank executives understand potential threats and are taking steps to defend their systems.

    The discussion also highlighted a controlled access approach: access to Mythos will be limited to about 40 technology companies, including Microsoft and Google. Anthropic has been in ongoing talks with the US government about the model’s capabilities, the startup has said—talks that form part of an emerging policy and security framework for how frontier AI is deployed in critical infrastructure contexts like finance.

    Treasury Department Convenes Bank Leaders on AI Model Risk

    The meeting’s stated purpose, as described by Tech-Economic Times, was to ensure banks are aware of potential risks posed by Mythos and similar models and that they are taking steps to protect their systems. The focus centers on defense and awareness rather than on model performance or consumer-facing features.

    While the source does not detail specific technical failure modes being discussed, the emphasis on “potential risks” suggests that bank threat models may include issues that arise when external AI capabilities are integrated into workflows, accessed via APIs, or used to support decision-making. For banks, this can translate into concerns about system integrity, data handling, and the reliability of outputs in operational environments—areas where access controls and governance mechanisms matter.

    Limited Mythos Access: Approximately 40 Technology Companies

    A concrete element from the source is the planned scope of availability. Access to Mythos will be limited to about 40 technology companies, with Microsoft and Google named among those expected to have access.

    From a technology governance perspective, limiting access to a defined set of companies can serve to control exposure while models are evaluated, integrated, and monitored. The source does not specify the mechanism—such as contractual controls, technical gating, or monitoring requirements—but the “limited to about 40” figure provides a measurable boundary for deployment scope at this stage.
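
    Purely as an illustration of what technical gating might look like—the source specifies no mechanism, and every identifier below is hypothetical—a controlled-access check could resemble an API-side allowlist with decision logging:

    ```python
    # Hypothetical illustration of technical gating for a limited-access model.
    # The source does not describe the actual mechanism; the org IDs, the
    # allowlist, and the audit log shown here are invented for the sketch.

    APPROVED_ORGS = {"org_microsoft", "org_google"}  # per the source, ~40 companies in reality

    def authorize_model_request(org_id: str, model: str) -> bool:
        """Gate requests for the limited-access model to the approved cohort,
        logging every decision for later review."""
        allowed = model != "mythos" or org_id in APPROVED_ORGS
        print(f"audit: org={org_id} model={model} allowed={allowed}")
        return allowed

    authorize_model_request("org_microsoft", "mythos")  # allowed
    authorize_model_request("org_unknown", "mythos")    # denied
    ```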

    For the industry, this access model could influence how quickly downstream products are built. If only a defined group of firms can obtain Mythos, early experimentation, tooling, and integration efforts may concentrate around that cohort. Industry observers may track how those companies translate access into internal systems and how they structure safeguards, particularly given that the Treasury meeting indicates banks are already being prompted to consider these models as a risk category.

    Anthropic’s Government Discussions on Model Capabilities

    The source indicates that Anthropic has been in ongoing talks with the US government about the model’s capabilities. Although the article does not detail those capabilities or the outcomes of the talks, it positions Mythos within a broader pattern: advanced AI models are being reviewed in relation to how they could affect systems that require resilience.

    This matters because “capabilities” can encompass multiple technical dimensions—such as what the model can do, how it behaves under different inputs, and how it interacts with data and tools. The Treasury meeting’s bank-focused risk framing suggests that government discussions may be linked to operational security concerns when such models are connected to high-stakes environments.

    Implications for AI Deployment in Financial Institutions

    The Treasury meeting’s focus on ensuring banks take action to defend their systems suggests that the concern centers on whether Mythos’s presence changes the threat landscape for financial institutions. While the source does not provide additional technical specifics, several industry-relevant considerations follow from the setup:

    1) Risk management may need to extend to external model access. If Mythos is available to a limited set of technology companies, banks that rely on vendors, partners, or integrations connected to those companies could face indirect exposure. The Treasury meeting’s focus suggests that banks should consider these dependencies in their defensive planning.

    2) AI governance could become part of infrastructure security. The meeting’s placement at the Treasury Department signals that AI model risk is being treated as relevant to financial system stability and operational readiness. This could prompt banks to formalize policies around AI usage, including how outputs are validated and how systems are monitored.

    3) Early integration may be paired with oversight. The source’s mention of ongoing government talks about capabilities suggests that deployment may come with scrutiny. While the exact form of oversight is not specified, the combination of limited access and government engagement points to a controlled rollout approach.

    These observations are necessarily cautious: the source does not provide technical details on Mythos risks or the specific steps banks are taking. However, the fact that bank leaders were warned—per the article’s framing—indicates that AI models are moving from experimental tools toward components that financial institutions must treat as part of their security posture.

    Significance for AI Deployment Tracking

    For technology audiences tracking frontier AI deployment, the core storyline involves the intersection of model availability, government engagement, and financial sector risk management. The source ties Mythos to a defined access footprint (approximately 40 technology companies, including Microsoft and Google) and ties Anthropic to ongoing US government discussions about capabilities. Together, these elements suggest that AI model governance is being operationalized through both access controls and institutional preparedness.

    As banks adjust their defenses, a key question for the industry—based on what is described here—may be how systems that sit outside banks but feed into them through technology partners are secured. The Treasury meeting indicates that risk extends beyond the model provider to how models are used within the broader technology stack.

    Source: Tech-Economic Times