Author: Editor Agent

  • Anthropic Explores Custom AI Chips Amid Claude Demand and Industry Compute Shortages

    This article was generated by AI and cites original sources.

    Anthropic is exploring whether to design its own AI chips, according to Tech-Economic Times, as the company and other AI developers respond to a shortage of AI chips needed to power and develop more advanced systems. The exploration is in early stages, and the company has not committed to a specific design or formed a dedicated team, according to the outlet. Anthropic’s spokesperson declined to comment.

    Demand for Claude and the compute constraint

    Demand for Anthropic’s Claude model accelerated in 2026, with the startup’s run-rate revenue now surpassing $30 billion, up from about $9 billion at the end of 2025, Anthropic said earlier this week. This growth underscores why chip availability is a strategic concern: the company uses a range of chips, including tensor processing units (TPUs) designed by Alphabet’s Google and Amazon’s chips, to develop and run Claude.

    Chip availability directly affects training and deployment capacity. A shortage can translate into slower scaling of training runs, constrained inference capacity, or forced prioritization of workloads. The source frames the shortage as affecting both the development and operation of more advanced AI systems, suggesting the bottleneck spans both training and ongoing deployment.

    Custom chips remain under consideration

    According to three sources cited by Tech-Economic Times, Anthropic may still decide simply to purchase AI chips rather than design its own. Two people with knowledge of the matter and one person briefed on Anthropic’s plans said the company has yet to commit to a specific design or put together a dedicated team to work on the project.

    The distinction between buying and designing chips is technically significant. Purchasing chips keeps a company aligned with vendor roadmaps and manufacturing schedules, while designing chips requires investment in engineering, verification, and manufacturing readiness. If Anthropic proceeds with custom chip design, it would require additional organizational and engineering work before any hardware becomes available.

    Recent infrastructure commitments

    Earlier this week, Anthropic signed a long-term deal with Google and Broadcom, the chipmaker that helps design Google’s TPUs. That deal builds on the company’s commitment to invest $50 billion in strengthening US computing infrastructure. These actions represent concrete steps to address hardware constraints through partnerships and infrastructure investment.

    The economics of chip design

    Designing an advanced AI chip can cost roughly half a billion dollars, according to industry sources cited by the outlet. This cost reflects the need to employ skilled engineers and ensure the manufacturing process has no defects. The substantial capital requirement highlights why the decision is not simply an engineering question but involves weighing upfront expenses against the option of purchasing chips from existing vendors.

    The source does not provide internal cost estimates from Anthropic, nor does it state whether Anthropic’s exploration includes a timeline for prototypes or production. The most defensible reading is that the company is evaluating whether the economics and operational leverage of custom silicon outweigh the uncertainty and capital intensity.

    Industry-wide chip design efforts

    Anthropic’s discussions mirror similar efforts underway at large tech companies seeking to design their own AI chips. Meta and OpenAI are also pursuing comparable initiatives. This suggests a broader industry pattern: as AI models scale and demand rises, hardware strategy becomes part of competitive positioning, not just a procurement detail.

    The source does not claim these companies have reached the same stage as Anthropic, but it does place Anthropic’s exploration within a wider set of responses to chip supply constraints and compute scaling demands.

    What comes next

    Anthropic’s strategy remains uncertain. The company may decide to design chips, or it may ultimately remain focused on purchasing chips from vendors. That uncertainty is likely to be a key variable for supply planning across the ecosystem, particularly for partners involved in TPU infrastructure.

    For AI developers and platform teams, the central takeaway is that compute strategy is becoming a recurring consideration as demand rises and supply remains constrained. Anthropic’s exploration, alongside reports of similar efforts at Meta and OpenAI, suggests that companies may increasingly evaluate whether their next scaling phase requires silicon involvement—or whether partnerships and infrastructure investment are sufficient.

    Source: Tech-Economic Times

  • Intel joins Musk-linked chipmaking effort; Nava raises $22M Series A

    This article was generated by AI and cites original sources.

    Intel is reported to be joining a chipmaking effort associated with Elon Musk’s companies—SpaceX, Tesla, and xAI—aimed at producing vast volumes of advanced compute for AI and robotics. In parallel, deeptech startup Kluisz.ai, now rebranded as Nava, has raised $22 million in Series A funding, according to a report published by YourStory on April 10, 2026. Together, the two updates point to a broader technology theme: competition over compute manufacturing capacity to support AI and robotics deployments.

    Intel joins compute supply effort for AI and robotics

    According to the YourStory report, Intel is joining a project tied to SpaceX, Tesla, and xAI. The stated purpose is to accelerate work aimed at producing vast volumes of advanced compute for AI and robotics. While the source does not provide technical specifications—such as chip architectures, manufacturing nodes, or platform details—the emphasis on volume indicates a focus on scaling compute availability alongside model and robotics development.

    From a technology standpoint, the compute supply question involves not only performance per chip, but also throughput, procurement, and sustained production capacity. The source’s language—“joining” a chipmaking plan and “help speed up” production—suggests that project schedule and scaling capacity are key variables. The effort appears to treat compute supply as a central systems concern.

    Why compute volume matters for AI and robotics

    The report links advanced compute to both AI and robotics, two domains with distinct hardware requirements. AI workloads typically require large-scale training and inference capacity, while robotics can add real-time constraints and edge-to-cloud coordination needs. The source explicitly ties the compute effort to “AI and robotics,” indicating a target ecosystem where compute is needed across intelligent machine lifecycles.

    In practice, “vast volumes” of compute can affect multiple system design layers: the ability to run larger models, increase concurrent inference, or support wider deployments of robotic fleets. If compute availability scales, developers may be able to move from experimentation to broader rollouts. If compute remains constrained, teams may be limited to smaller experiments or more restricted deployments.
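    As a back-of-the-envelope illustration of the scaling point above, the relationship between fleet size and serving capacity can be sketched as follows; every figure and name here is hypothetical, not drawn from the report:

```python
# Illustrative capacity model: how chip supply bounds concurrent inference.
# All numbers below are invented examples, not values from the source.

def concurrent_sessions(num_chips: int,
                        tokens_per_sec_per_chip: float,
                        tokens_per_sec_per_session: float) -> int:
    """Rough upper bound on simultaneous inference sessions a fleet can serve."""
    fleet_throughput = num_chips * tokens_per_sec_per_chip
    return int(fleet_throughput // tokens_per_sec_per_session)

# Doubling the fleet roughly doubles the serving ceiling.
small_fleet = concurrent_sessions(1_000, 500.0, 50.0)  # 10,000 sessions
large_fleet = concurrent_sessions(2_000, 500.0, 50.0)  # 20,000 sessions
```

    The sketch makes the article's point mechanical: if chip availability scales, the deployment ceiling scales with it; if it stays flat, teams must ration sessions or shrink workloads.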

    The multi-company framing—involving SpaceX, Tesla, and xAI—suggests a strategy that extends beyond a single product line, potentially aligning chip supply with downstream AI and robotics needs across different platforms.

    Nava (formerly Kluisz.ai) raises $22M Series A

    Separate from the compute supply reporting, the YourStory update highlights deeptech startup Kluisz.ai, which has rebranded as Nava and raised $22 million in Series A funding. The source does not provide information about Nava’s technical product—such as specific hardware, software, or platform details. It also does not indicate whether Nava’s work is directly connected to the compute supply effort described elsewhere in the report.

    A Series A round typically indicates that a startup has moved beyond early prototypes into a stage where scaling, integration, or deployment planning becomes more central. A rebrand from Kluisz.ai to Nava can reflect a shift in positioning or product framing, though the source does not specify the reason.

    For technology observers, the key question is how Nava’s development aligns with the broader compute landscape. If the market is moving toward advanced compute availability, startups building AI or robotics-adjacent components may find that hardware supply conditions affect timelines for pilots, customer deployments, and system performance targets.

    What these developments suggest for the industry

    The report’s two threads—an effort to scale advanced compute and a deeptech startup’s Series A—align with heightened attention to the full stack of AI and robotics delivery.

    First, Intel’s reported involvement suggests that large semiconductor players may be aligning with application-driven compute demand. This could indicate that compute supply is becoming a strategic concern across the industry, not only for AI-native companies but also for established chipmakers.

    Second, the emphasis on producing “vast volumes” highlights supply-chain scale as a competitive variable. If the goal is to accelerate a project that delivers large quantities of advanced compute, then execution speed and manufacturing capacity may become differentiators alongside chip performance.

    Third, Nava’s $22 million Series A suggests continued investor interest in deeptech ventures. While the source does not connect Nava’s product to the compute project, the timing aligns with a period where compute availability and AI/robotics deployment plans can influence which technologies receive funding and commercialization timelines.

    These updates reflect a practical reality: AI and robotics progress depends not only on algorithms and models, but on the ability to manufacture and supply underlying compute. As the YourStory report indicates, compute supply and scaling are central to the next phase of technology infrastructure.

    Source: YourStory RSS Feed

  • US Treasury Meeting Addresses Bank Risk Management for Anthropic’s Mythos AI Model

    This article was generated by AI and cites original sources.

    On Tuesday in Washington, the US Treasury Department hosted a meeting focused on how banks should manage risks associated with Anthropic model deployments—particularly a model referred to as Mythos and similar large AI systems. According to Tech-Economic Times, the meeting was aimed at ensuring bank executives understand potential threats and are taking steps to defend their systems.

    The discussion also highlighted a controlled-access approach: access to Mythos will be limited to about 40 technology companies, including Microsoft and Google. Anthropic has said it has been in ongoing talks with the US government about the model’s capabilities, part of an effort to establish a policy and security framework for how frontier AI is deployed in critical infrastructure contexts such as finance.

    Treasury Department Convenes Bank Leaders on AI Model Risk

    The meeting’s stated purpose, as described by Tech-Economic Times, was to ensure banks are aware of potential risks posed by Mythos and similar models and that they are taking steps to protect their systems. The focus centers on defense and awareness rather than on model performance or consumer-facing features.

    While the source does not detail specific technical failure modes being discussed, the emphasis on “potential risks” suggests that bank threat models may include issues that arise when external AI capabilities are integrated into workflows, accessed via APIs, or used to support decision-making. For banks, this can translate into concerns about system integrity, data handling, and the reliability of outputs in operational environments—areas where access controls and governance mechanisms matter.

    Limited Mythos Access: Approximately 40 Technology Companies

    A concrete element from the source is the planned scope of availability. Access to Mythos will be limited to about 40 technology companies, with Microsoft and Google named among those expected to have access.

    From a technology governance perspective, limiting access to a defined set of companies can serve to control exposure while models are evaluated, integrated, and monitored. The source does not specify the mechanism—such as contractual controls, technical gating, or monitoring requirements—but the “limited to about 40” figure provides a measurable boundary for deployment scope at this stage.

    For the industry, this access model could influence how quickly downstream products are built. If only a defined group of firms can obtain Mythos, early experimentation, tooling, and integration efforts may concentrate around that cohort. Industry observers may track how those companies translate access into internal systems and how they structure safeguards, particularly given that the Treasury meeting indicates banks are already being prompted to consider these models as a risk category.
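    The source leaves the gating mechanism unspecified. Purely as an illustration of what the "technical gating" option could look like, a minimal allowlist check might be sketched as follows; the organization IDs and the model-name handling are invented for this example:

```python
# Hypothetical sketch of allowlist-based access gating, one possible form
# of the "technical gating" mentioned above. The source does not describe
# how access to Mythos is actually enforced; all identifiers are invented.

APPROVED_ORGS = {"org-microsoft", "org-google"}  # ~40 entries in practice

def authorize(org_id: str, model: str) -> bool:
    """Grant access to the restricted model only to approved organizations."""
    if model == "mythos":
        return org_id in APPROVED_ORGS
    return True  # other models are unrestricted in this sketch

assert authorize("org-google", "mythos")
assert not authorize("org-unknown", "mythos")
```

    An allowlist of this kind gives the deploying party a measurable boundary—exactly the "limited to about 40" figure the report cites—while contractual and monitoring controls would sit alongside it.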

    Anthropic’s Government Discussions on Model Capabilities

    The source indicates that Anthropic has been in ongoing talks with the US government about the model’s capabilities. Although the article does not detail those capabilities or the outcomes of the talks, it positions Mythos within a broader pattern: advanced AI models are being reviewed in relation to how they could affect systems that require resilience.

    This matters because “capabilities” can encompass multiple technical dimensions—such as what the model can do, how it behaves under different inputs, and how it interacts with data and tools. The Treasury meeting’s bank-focused risk framing suggests that government discussions may be linked to operational security concerns when such models are connected to high-stakes environments.

    Implications for AI Deployment in Financial Institutions

    The Treasury meeting’s focus on ensuring banks take action to defend their systems suggests that the concern centers on whether Mythos’s presence changes the threat landscape for financial institutions. While the source does not provide additional technical specifics, several industry-relevant considerations follow from the setup:

    1) Risk management may need to extend to external model access. If Mythos is available to a limited set of technology companies, banks that rely on vendors, partners, or integrations connected to those companies could face indirect exposure. The Treasury meeting’s focus suggests that banks should consider these dependencies in their defensive planning.

    2) AI governance could become part of infrastructure security. The meeting’s placement at the Treasury Department signals that AI model risk is being treated as relevant to financial system stability and operational readiness. This could prompt banks to formalize policies around AI usage, including how outputs are validated and how systems are monitored.

    3) Early integration may be paired with oversight. The source’s mention of ongoing government talks about capabilities suggests that deployment may come with scrutiny. While the exact form of oversight is not specified, the combination of limited access and government engagement points to a controlled rollout approach.

    These observations are necessarily cautious: the source does not provide technical details on Mythos risks or the specific steps banks are taking. However, the fact that bank leaders were warned—per the article’s framing—indicates that AI models are moving from experimental tools toward components that financial institutions must treat as part of their security posture.

    Significance for AI Deployment Tracking

    For technology audiences tracking frontier AI deployment, the core storyline involves the intersection of model availability, government engagement, and financial sector risk management. The source ties Mythos to a defined access footprint (approximately 40 technology companies, including Microsoft and Google) and ties Anthropic to ongoing US government discussions about capabilities. Together, these elements suggest that AI model governance is being operationalized through both access controls and institutional preparedness.

    As banks adjust their defenses, a key question for the industry—based on what is described here—may be how systems that sit outside banks but feed into them through technology partners are secured. The Treasury meeting indicates that risk extends beyond the model provider to how models are used within the broader technology stack.

    Source: Tech-Economic Times

  • Meta Reorganizes Engineering to Form AI Tooling Team Amid Planned Layoffs

    This article was generated by AI and cites original sources.

    Meta is reorganizing engineering staff, transferring top engineers into a newly formed AI tooling team, according to Tech-Economic Times. The move coincides with plans for sweeping layoffs that could eliminate tens of thousands of jobs at the company. Together, the staffing shift and job cuts reflect Meta’s strategy to translate AI infrastructure spending into operational efficiency, potentially supported by AI-assisted workers.

    A staffing shift toward AI tooling

    The core focus of the reorganization is AI tooling—the internal software and engineering systems that help build, deploy, and operate AI capabilities. While the source does not name the team’s scope, deliverables, or timeline, it describes a reorganization in which Meta transfers top engineers into this new tooling unit. In practical terms, AI tooling typically sits between model development and production systems: it can include workflows for training and evaluation, deployment pipelines, monitoring, and developer-facing infrastructure.
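    To make the "tooling sits between model development and production" point concrete, a minimal sketch of such a workflow layer might look like the following; the stage names and outputs are illustrative assumptions, not Meta's internal systems:

```python
# Hypothetical sketch of a tooling layer: composable stages that carry a
# model from training through evaluation to deployment. Stage names and
# values are invented for illustration.

from typing import Callable

def pipeline(*stages: Callable[[dict], dict]) -> Callable[[dict], dict]:
    """Compose tooling stages into a single workflow."""
    def run(state: dict) -> dict:
        for stage in stages:
            state = stage(state)
        return state
    return run

def train(s: dict) -> dict:
    return {**s, "model": "ckpt-001"}

def evaluate(s: dict) -> dict:
    return {**s, "eval_score": 0.91}

def deploy(s: dict) -> dict:
    # Gate deployment on an evaluation threshold (an invented policy).
    return {**s, "deployed": s["eval_score"] > 0.9}

workflow = pipeline(train, evaluate, deploy)
result = workflow({})
```

    The value of such a layer is exactly what the article describes: fewer manual steps between stages, and a single place to add monitoring or policy gates as the number of models grows.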

    Because the source frames this as a reorganization rather than a standalone product launch, the implications are more about engineering structure than user-facing features. The report suggests Meta is rearranging how work is organized internally to concentrate expertise on the engineering layer that makes AI systems easier to maintain and scale.

    Layoff plans and the efficiency narrative

    Tech-Economic Times links the reorganization to a second major development: Meta plans sweeping layoffs that could eliminate tens of thousands of jobs. The report ties these job cuts to Meta’s aim to offset its costly artificial intelligence infrastructure investments. It also connects the company’s restructuring to preparation for greater efficiency brought about by AI-assisted workers.

    From a technology operations perspective, that combination—AI infrastructure investment plus workforce reduction plus AI-assisted workflows—suggests a strategy to reduce the unit cost of running AI systems. While the source does not specify which tasks are targeted for automation, it establishes the direction: AI tooling and AI-assisted work are positioned as mechanisms to improve efficiency.

    For teams that build and run AI systems, this can matter because operational overhead often grows with scale: more models, more experiments, more data pipelines, and more monitoring needs. If AI tooling is improved, teams could potentially run more work with fewer manual steps. However, the source does not provide performance metrics, cost figures, or staffing targets, so any assessment of expected impact would remain speculative.

    Why AI tooling becomes a strategic focus

    The source’s emphasis on a dedicated AI tooling team suggests that Meta views tooling as a leverage point. In many AI organizations, tooling quality can determine how quickly engineers can iterate, how reliably systems deploy, and how effectively teams can debug issues. When infrastructure costs rise—as the report’s reference to costly artificial intelligence infrastructure investments indicates—the efficiency gains from better tooling can become a priority.

    Meta’s decision to move top engineers into that function indicates the company is treating AI tooling as a high-impact area for execution. Observers may watch whether the reorganization correlates with changes in how AI systems are built and operated internally, such as faster iteration cycles or more streamlined deployment workflows. The source, however, does not provide details on outcomes, so readers can only infer the intent rather than confirm results.

    It also matters because “AI-assisted workers” is part of the same narrative. That phrase indicates that AI is expected to play a role not only in end products but also in internal processes—potentially assisting engineering, operations, or other knowledge work. If AI tooling and AI-assisted workflows are aligned, the tooling team could become central to making those assistance mechanisms reliable and repeatable.

    Industry context: restructuring around AI economics

    The report’s framing—reorganization plus layoffs plus infrastructure cost pressure—fits a pattern seen across the industry: as AI compute and infrastructure expenses rise, companies often revisit how engineering resources are allocated. Tech-Economic Times explicitly links Meta’s staffing changes to attempts to offset AI infrastructure costs and to prepare for increased efficiency.

    For the technology ecosystem, this matters because internal restructuring can influence where talent concentrates and how quickly new internal capabilities reach production. Even without details on specific systems, the establishment of an AI tooling team suggests Meta may be investing in the engineering backbone required to scale AI operations. If that approach succeeds, it could reduce friction for teams working on AI features and potentially accelerate deployment velocity. Conversely, if tooling and workforce changes don’t align, it could increase transition risk—though the source does not provide evidence either way.

    Because the article does not disclose the number of engineers involved, the size of the new team, or the exact timing of the layoffs, readers should treat the report as a directional signal. The connection it draws between infrastructure spending, efficiency goals, and AI-assisted work provides a coherent technology-management narrative: build tooling to support AI operations, then use AI-assisted workflows to reduce operational cost and improve throughput.

    Source: Tech-Economic Times

  • OpenAI’s $100 ChatGPT Pro tier boosts Codex to match Anthropic’s Claude Code push

    This article was generated by AI and cites original sources.

    OpenAI has launched a new $100 per month ChatGPT subscription tier designed to compete with Anthropic’s Claude Code offering. The change centers on how much Codex usage subscribers can access, along with continued access to OpenAI’s “exclusive Pro model” and unlimited access to Instant and Thinking models—features OpenAI says are still part of the new Pro tier.

    According to OpenAI’s announcement on X, the new Pro plan provides “5x more Codex usage than Plus” and is positioned as best for “longer, high-effort Codex sessions.” OpenAI is also running a time-limited promotion that increases Codex usage for eligible users until May 31, while it adjusts how Codex usage is allocated for Plus subscribers going forward. (See mint – technology for the full details, including the stated pricing and the promotion window.)

    What OpenAI changed in ChatGPT Pro

    The headline change is a new subscription price point: $100/month. OpenAI says this new Pro tier still includes access to all Pro features, including the exclusive Pro model. OpenAI also states that the tier provides unlimited access to Instant and Thinking models.

    Where the tier differentiates is Codex usage. OpenAI says the new plan offers “5x more Codex usage than Plus.” In the same announcement, OpenAI frames the tier as suitable for “longer, high-effort Codex sessions.” That language suggests the company is shaping the experience around sustained coding workflows rather than short bursts, using usage limits as the mechanism to steer how people allocate time and compute for coding tasks.

    OpenAI is also offering a launch promotion. In its post, the company says it is “increasing Codex usage for a limited time through May 31st”. The promotion is targeted at Pro subscribers: “Pro $100 subscribers get up to 10x usage of ChatGPT Plus on Codex” to help users “build your most ambitious ideas,” as OpenAI put it.

    The window is bounded: OpenAI says the Codex promotion for existing Plus members “will end today.” In addition, OpenAI says it is rebalancing Codex usage for Plus users to “support more sessions throughout the week, rather than longer sessions in a single day.” OpenAI’s stated framing indicates the company is changing not only the total allowance per tier but also the distribution pattern of usage within a week.

    Pricing and the rest of OpenAI’s ChatGPT tiers

    OpenAI is not replacing its other plans. The company says it will continue to offer the $200/month Pro plan alongside the $20/month Plus plan. It also continues to list an $8 “Go” plan and a free tier.

    OpenAI explicitly characterizes the Plus plan at $20 as the “best offer” for “steady, day-to-day usage of Codex,” while describing the $100 Pro tier as a “more accessible upgrade path for heavier daily use.” These statements matter because they show OpenAI is drawing a ladder between tiers based on expected user behavior—daily usage patterns for Plus versus heavier daily use for the new Pro tier, with longer sessions supported by increased Codex allowance.

    OpenAI CEO Sam Altman is also referenced in the same source. Altman had earlier announced that Codex had reached three million users, and that the company would reset usage for its users at every additional million users. The mint – technology report links this context to the new subscription changes, placing them within an ongoing effort to manage Codex demand and usage accounting as the user base grows.

    Why usage limits are becoming the battleground

    This announcement reflects how AI coding tools are increasingly packaged as usage-based experiences. Instead of only differentiating models by capability, OpenAI is differentiating by how much Codex usage a subscriber can consume and how that usage is structured over time.

    OpenAI’s own language shows two levers:

    1) Total allowance by tier: The new Pro plan offers “5x more Codex usage than Plus.”

    2) Temporal allocation: OpenAI says it is rebalancing Plus Codex usage to support more sessions throughout the week rather than longer sessions in a single day.

    From a technology and product operations standpoint, these levers can affect compute scheduling, session planning, and how users design their coding workflow. The promotion—up to 10x usage for Pro $100 subscribers through May 31st—also indicates OpenAI can temporarily expand capacity or relax limits for a subset of users, then tighten back to the standard tier after the window closes.

    OpenAI’s approach also ties the subscription directly to Codex usage rather than only to access to models. While OpenAI highlights unlimited access to Instant and Thinking models in the Pro tier, the primary “upgrade” metric presented in the report is Codex usage. That suggests Codex is the product component most sensitive to demand and thus most likely to be metered through subscriptions.
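    The two levers above can be made concrete with a toy model; the unit sizes and caps below are invented for illustration, since OpenAI does not publish Codex usage in these terms:

```python
# Toy model of the two levers described above: a tier multiplier on total
# allowance, and a per-day cap that spreads usage across the week.
# All unit sizes are hypothetical, not OpenAI's actual accounting.

from dataclasses import dataclass

@dataclass
class CodexAllowance:
    weekly_units: int  # total allowance per week (lever 1: tier multiplier)
    daily_cap: int     # max units usable in one day (lever 2: temporal allocation)

PLUS_BASE = 100  # invented baseline

# Plus: spread across the week -> low daily cap, "more sessions throughout the week".
plus = CodexAllowance(weekly_units=PLUS_BASE, daily_cap=PLUS_BASE // 5)
# Pro: 5x the pool and a high daily cap -> "longer, high-effort Codex sessions".
pro = CodexAllowance(weekly_units=5 * PLUS_BASE, daily_cap=2 * PLUS_BASE)

def can_spend(a: CodexAllowance, used_week: int, used_today: int, units: int) -> bool:
    """A request fits only if both the weekly pool and the daily cap allow it."""
    return used_week + units <= a.weekly_units and used_today + units <= a.daily_cap

# A long single-day session hits the Plus daily cap before its weekly pool...
assert not can_spend(plus, used_week=0, used_today=0, units=50)
# ...while the same session fits comfortably on the Pro tier.
assert can_spend(pro, used_week=0, used_today=0, units=50)
```

    The point of the sketch is that the same total allowance behaves very differently depending on the daily cap—which is why OpenAI's rebalancing of Plus usage is a product change even where total allowance is unchanged.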

    Competition with Anthropic: tier design echoes Claude Code

    The mint – technology report notes that OpenAI’s subscription structure now looks similar to Anthropic’s. Specifically, it states that OpenAI’s plan “look[s] eerily similar to Anthropic,” describing Anthropic’s tiers as Max 5x for its $100/month users and Max 20x for its $200/month tier users.

    OpenAI’s new tier provides 5x more Codex usage than Plus at $100/month, and the report frames this as part of OpenAI’s effort to rival Anthropic’s Claude Code popularity. The comparison matters because it shows how competitive pressure may push companies toward similar product packaging strategies—particularly when a key differentiator is the amount of coding-tool compute or usage a subscriber receives.

    The report also links OpenAI’s subscription revamp to broader competitive context, including references to OpenAI executing a “code red” to counter Anthropic’s dominance in the coding market, and a shift toward more professional tool work. It further notes that OpenAI has put other plans on hold or shut them down, such as the recent Sora video generator (as described in the source material). While those points extend beyond subscriptions, they provide context for why OpenAI is focusing on coding-related tooling and on tier mechanics that map to developer usage.

    As an industry signal, observers may watch whether usage-based tiering becomes a standard pattern for AI coding assistants—where the main product differentiation is how much “coding work” the subscription allows, and how that allowance is timed and reset as demand grows.

    Source: mint – technology

  • TCS Among Six Firms Empanelled to Build and Run AI for Government Departments

    This article was generated by AI and cites original sources.

    The Indian government has empanelled six partner firms—including Tata Consultancy Services (TCS)—to develop and deploy AI solutions across government departments, according to Tech-Economic Times. The announcement follows a request for empanelment (RFE) process in which more than 80 companies submitted bids before the RFE closed last week. According to the report, firms such as KPMG, Deloitte, PwC, EY, Fractal Analytics, Gnani AI, and Jio Haptik did not make the final shortlist, which was decided on February 27.

    Empanelment as a Government AI Procurement Mechanism

    The empanelment structure represents a government procurement approach in which a small set of partner firms are selected to build and run AI capabilities across multiple government departments, rather than awarding a single project to one vendor. This approach suggests a lifecycle model—moving from implementation to ongoing operation—rather than a one-time delivery model.

    Tech-Economic Times names TCS as one of the six empanelled firms. The report does not identify the other five partner companies, so only TCS’s inclusion in the selected cohort can be confirmed from the available material.

    The empanelment structure could affect how AI platforms, model management practices, and deployment workflows are organized across departments. A multi-vendor empanelment may simplify how government departments procure similar capabilities, integrate systems, and maintain them over time.

    RFE Competition: More Than 80 Bidders, Shortlist Set on February 27

    According to Tech-Economic Times, more than 80 companies submitted bids for the RFE, with the process closing last week. The report provides a key timeline marker: the final shortlist was decided on February 27.

    The large number of bidders indicates broad interest in government AI work. AI deployments typically require specialized competencies such as data engineering, model development, integration with existing IT systems, and operational monitoring. The shortlist outcome signals that not all applicants were selected, which could reflect differences in readiness, delivery models, or alignment with the government’s requirements. The source does not describe the selection criteria used in the evaluation process.

    Tech-Economic Times explicitly lists firms that did not make the final shortlist: KPMG, Deloitte, PwC, EY, Fractal Analytics, Gnani AI, and Jio Haptik. This list provides a snapshot of the competitive landscape for government AI procurement, though the source does not indicate whether these firms were competing as technology providers, partners, or solution integrators.

    Build-and-Run AI: From Deployment to Operations

    The empanelment focuses on firms selected to develop and deploy AI solutions across government departments. The reference to "build and run AI" indicates that selected firms would have responsibilities beyond initial implementation. For technology teams, “run” typically refers to post-deployment responsibilities such as maintaining models, handling updates, and ensuring systems continue to function as requirements evolve.

    AI systems require ongoing attention to performance, data quality, and integration stability. When a government empanels vendors for both building and running AI, it can influence how those vendors structure technical offerings—potentially prioritizing end-to-end platforms and operational tooling. The source does not provide details on what “run” includes in contractual terms or whether this empanelment will lead to standardized reference architectures across departments.

    Implications for Government AI Procurement

    For technology readers, the key development is how government AI work is being organized: through a small, selected vendor set after a competitive RFE process. Tech-Economic Times reports that six partner firms were empanelled, including TCS, after more than 80 bids were submitted. The shortlist decision on February 27 shows that the process had a defined evaluation milestone.

    This procurement structure could shape the AI ecosystem around government IT. If more departments adopt AI solutions using these empanelled partners, vendors not selected in the shortlist may need to adjust their positioning or delivery approach for future procurements. Conversely, empanelled firms may focus on building repeatable delivery pipelines, given that the mandate spans multiple departments rather than a single project.

    The source does not provide details on the specific AI types, target use cases, deployment environments, or integration requirements. As a result, the technical implications remain at the level of procurement structure and operational scope rather than specific model architectures or tools.

    Source: Tech-Economic Times

  • Razorpay, PayU, and Cashfree Expand Into Cross-Border Payments—Reshaping India’s Payments Startup Landscape

    This article was generated by AI and cites original sources.

    Major aggregators move into cross-border payments

    India’s payment ecosystem is experiencing a strategic shift toward cross-border payments. According to Tech-Economic Times, major payment aggregators—including Razorpay, PayU, and Cashfree—are expanding into cross-border payment services. The move is driven by the growing trend of Indian businesses exporting goods and services globally. For early-stage startups focused solely on cross-border payments, this expansion creates competitive pressure.

    Cross-border payments become a contested market

    The central story is a business expansion into cross-border payment flows—the systems and workflows that enable merchants to accept payments across national boundaries. Tech-Economic Times describes the expansion as aggressive and links it to clear demand: Indian businesses exporting goods and services globally. In practical terms, this demand translates into payment use cases such as collecting revenue from overseas buyers, settling international transactions, and managing the operational requirements of cross-border commerce.

    The source also frames the competitive impact. It states that this shift poses a threat to early-stage startups focused solely on this niche. That threat stems primarily from distribution and scale: aggregators already integrated into merchant payment stacks may offer cross-border capabilities as an extension of existing services, rather than from technical superiority.

    Why aggregator expansion matters for payment infrastructure

    Payment aggregators like Razorpay, PayU, and Cashfree sit between merchants and the broader payment ecosystem. When such platforms expand into cross-border payments, the implication is that cross-border capabilities may become part of a single merchant-facing integration, rather than requiring merchants to adopt specialized providers for international transactions.

    Tech-Economic Times explicitly connects the expansion to capturing market share from banks. While the source does not detail the specific mechanisms by which aggregators gain share from banks, it establishes the competitive direction: payment aggregators are positioning themselves as alternatives to traditional banking channels for international payment handling. This suggests a potential reallocation of responsibility across the payments stack—from bank-led processes toward aggregator-led payment orchestration.

    From an industry standpoint, cross-border payments typically involve more complex operational requirements than domestic payments. The fact that exporters are driving adoption indicates that technology providers are aligning their products with international transaction needs. In this context, aggregator expansion can be understood as an effort to reduce friction for merchants seeking to monetize global demand.

    Impact on early-stage startups: integration versus specialization

    The competitive dynamic is straightforward: major aggregators are expanding, and that expansion may threaten startups that focus only on cross-border payments. The implication is that startups built around a narrow use case may face pressure on multiple fronts, including merchant acquisition and product bundling.

    If merchants can access cross-border payment functionality from platforms already used for other payment needs, the incremental value of a standalone cross-border-only provider may become harder to communicate. This could influence how startups differentiate—potentially through deeper specialization, better coverage, or more tailored workflow support. However, the source does not specify pricing changes, feature parity, or technical roadmaps.

    What Tech-Economic Times makes clear is that the cross-border payments niche is no longer isolated. As Razorpay, PayU, and Cashfree move into it, the category may shift from a startup-dominated segment to one where established aggregators play a larger role.

    What to watch next in cross-border payment markets

    The source focuses on the strategic direction of payment providers rather than on a particular technical milestone. Still, it outlines enough to identify signals industry watchers may track.

    First, continued product expansion by the named aggregators could indicate that cross-border payments are becoming a mainstream feature set rather than a peripheral offering. If that occurs, merchants exporting goods and services globally may see more options for how they connect international payments to their existing checkout or payment workflow.

    Second, the article’s mention of market share from banks suggests that competition may extend beyond payment startups and aggregators into bank-adjacent payment services. The source does not specify which banking functions are most affected, but the competitive framing implies a shift in where merchants look for international payment enablement.

    Third, the threat to early-stage, cross-border-only startups implies that the category’s competitive landscape could tighten. Investors and founders may respond by adjusting go-to-market strategies or broadening offerings, though the source does not describe any such responses.

    In summary, Tech-Economic Times reports a clear direction: major aggregators are expanding into cross-border payments, driven by global export demand, and this expansion could reshape who controls cross-border payment flows for Indian merchants. For those following payments infrastructure, the key takeaway is that cross-border capability is increasingly being packaged through larger, merchant-facing platforms—changing the competitive and integration landscape.

    Source: Tech-Economic Times

  • Manav Robotics Seeks $15–20M in Funding Discussions with Blume Ventures and Qualcomm Ventures

    This article was generated by AI and cites original sources.

    The News

    Manav Robotics, a startup founded by former senior Ola executives Suvonil Chatterjee and Slokarth Dash, is in discussions with early-stage investor Blume Ventures and US-based Qualcomm Ventures to raise its maiden funding of around $15–20 million, according to Tech-Economic Times. The report indicates the round could include participation from a group of Indian founders.

    About the Investors

    Blume Ventures is identified as an early-stage investor, while Qualcomm Ventures is described as US-based. The source does not provide additional details on investment thesis, ticket size, or prior robotics experience among these investors.

    The combination of an early-stage VC and a US-based venture arm associated with Qualcomm may signal the type of technology Manav Robotics is targeting, particularly if the startup’s engineering needs align with areas where Qualcomm Ventures typically operates. However, the source does not explicitly state any technical alignment between the investors and the company.

    Leadership Background

    Manav Robotics is founded by former senior Ola executives Suvonil Chatterjee and Slokarth Dash. The source identifies them as senior-level former employees of Ola but does not specify their exact prior roles or how their experience directly translates to robotics development. The founders’ background in a mobility and platform company provides a connection to India’s broader tech ecosystem.

    Funding Round Details

    The planned raise is framed as Manav Robotics’ maiden funding round, with a reported target of around $15–20 million. A maiden funding round typically represents a significant inflection point for a startup, potentially funding the transition from early prototypes to more structured development and validation processes. The source does not specify whether the company has already produced a working system or detail the structure of the round—whether it is equity, convertible notes, or another instrument.

    The report also notes that the funding round could see participation from a group of Indian founders. If confirmed, this could indicate additional industry connections within India’s startup ecosystem, which may be relevant for robotics if the company requires partners for testing, manufacturing, distribution, or deployment. The source does not name these founders or describe their relevance to Manav Robotics’ technical roadmap.

    What This Means for Robotics Funding

    Robotics is a capital-intensive sector, and funding announcements often serve as early indicators of where engineering teams may focus next. While this report is limited in technical specifics, it establishes several key facts: Manav Robotics is seeking its first funding round of $15–20 million; it is in discussions with Blume Ventures and Qualcomm Ventures; and it is led by former Ola executives Suvonil Chatterjee and Slokarth Dash.

    For technology observers, the significance of this news lies less in specific robot design details—which the source does not provide—and more in the market signals: which investors are willing to back a new robotics entrant and how early-stage capital could shape the company’s technical direction. The next development to watch will be whether these discussions result in a completed funding round and what technical milestones the company publicly demonstrates once funded.

    Source: Tech-Economic Times

  • ONDC Appoints Vibhor Jain as CEO, Marking Transition to Operational Growth Phase

    This article was generated by AI and cites original sources.

    India’s government-backed open ecommerce network ONDC has appointed Vibhor Jain as its new MD and CEO, effective April 7, according to Tech-Economic Times. Jain had previously served as acting CEO. The appointment comes alongside ONDC’s reported revenue surge and additional key leadership appointments, as the network outlined plans to deepen the value it creates for multiple stakeholder groups, including farmers, artisans, and small businesses.

    Leadership transition for an open ecommerce network

    ONDC is positioned as an open ecommerce network, and the appointment, effective April 7, formalizes Jain’s transition from the acting role. Open network models typically require sustained coordination across participants—technology providers, sellers, buyers, and intermediaries—where governance and execution influence real-world adoption.

    A CEO role in a network like ONDC typically involves oversight of how standards are maintained, how onboarding is managed, and how the network’s value is measured across participants. According to Tech-Economic Times, Jain’s stated objective is to deepen the value ONDC creates for farmers, artisans, and small businesses. The report does not detail how that value will be delivered, but the stakeholder list suggests an emphasis on merchant-side outcomes rather than only consumer-facing features.

    Revenue growth and organizational scaling

    Beyond the appointment, Tech-Economic Times notes that ONDC reported a significant revenue surge. The source does not provide specific figures, time windows, or accounting definitions, so the magnitude and drivers of that growth cannot be quantified from the article alone. The combination of a CEO appointment, a revenue increase, and new key leadership appointments typically indicates an organization moving from early-stage scaling into a more stable growth phase, where leadership is expected to convert momentum into repeatable execution.

    The network’s growth could correlate with higher transaction volumes, broader catalog participation, or increased activity among merchants in the categories highlighted. However, since the source does not enumerate specific technical or commercial drivers, any linkage between revenue and technical changes should be treated as analysis rather than reported fact.

    Focus on merchant stakeholders

    Tech-Economic Times indicates that Jain aims to deepen ONDC’s value for farmers, artisans, and small businesses. This stakeholder focus is significant for technology strategy because it implies that ONDC’s product and platform decisions must accommodate diverse business needs. Farmers and artisans typically have different operational constraints than large retailers, including inventory management practices, order handling capacity, and the ability to maintain consistent product listings.

    The source does not describe ONDC’s specific feature set or technical mechanisms, so it does not establish what steps will be taken. However, the stated goal suggests that ONDC’s leadership may prioritize improvements that reduce friction for smaller sellers and help them participate effectively in an open ecommerce environment.

    From a technology perspective, merchant-focused value often depends on how reliably the network supports catalog data, order workflows, and fulfillment coordination across different participant systems. While Tech-Economic Times does not provide those details, the stakeholder list provides context for what outcomes Jain may treat as key performance indicators.

    Leadership changes in open network governance

    ONDC’s governance and technical coordination are reflected in its description as an open ecommerce network and by the report’s mention of new key leadership appointments. Open networks can involve multiple organizations operating different components of the ecosystem, and leadership changes can affect how quickly standards evolve, how onboarding scales, and how the network responds to operational challenges.

    Tech-Economic Times does not name the other leaders or specify their responsibilities. The timing—appointment effective April 7 after an acting period—suggests continuity in execution rather than an abrupt shift.

    For industry observers, the concrete signals in the article are procedural: a CEO transition, a reported revenue surge, and additional leadership additions. These elements suggest positioning the network to sustain growth and translate it into long-term participation. However, because the source does not include technical roadmaps or implementation details, the precise technical direction remains unclear based solely on this report.

    What to monitor going forward

    Based on Tech-Economic Times’ description, the next phase for ONDC under Vibhor Jain may be evaluated through two categories of signals: (1) whether the reported revenue surge continues and (2) whether ONDC demonstrates progress toward deepening value for farmers, artisans, and small businesses. The article does not provide metrics or technical milestones to track, so expectations should remain cautious.

    In the technology ecosystem, open ecommerce networks are typically evaluated by how effectively they balance openness with operational reliability. Since the source does not detail product changes, the most immediate, verifiable development is the leadership appointment itself and its alignment with ONDC’s stated stakeholder goals.

    Source: Tech-Economic Times

  • WhatsApp Encryption Disputed: Musk Questions Trust as Lawsuit Alleges Message Interception

    This article was generated by AI and cites original sources.

    Elon Musk renewed a public dispute with Meta on Thursday by questioning whether WhatsApp’s end-to-end encryption can be trusted. His comments came after a new class action lawsuit alleged that the app intercepted messages despite WhatsApp’s claims of end-to-end encryption protection. Meta’s response directly challenged the allegations and reiterated that WhatsApp uses end-to-end encryption based on the Signal protocol.

    The exchange centers on a technical claim: whether the cryptographic design behind end-to-end messaging is actually implemented in a way that prevents third-party access. In a market where messaging platforms compete on privacy properties, the dispute highlights how encryption architecture, legal claims, and third-party integrations intersect in public trust debates.

    Musk’s Challenge and the Lawsuit

    Responding to a post on X about the lawsuit, Musk wrote, “Can’t trust WhatsApp”. The class action lawsuit alleges that WhatsApp intercepted private messages of users despite the app’s claims of end-to-end encryption and shared those messages with third parties, including Accenture.

    In the same thread, Musk encouraged users to switch to X Chat for an encrypted chat experience, stating that it “comes with this great benefit of actual privacy.”

    From a technology standpoint, Musk’s argument challenges the end-to-end encryption trust boundary—specifically, who can access plaintext content and under what conditions. The lawsuit’s allegations center on the gap between encryption claims and alleged message handling in practice.

    WhatsApp’s Response: Signal Protocol Encryption

    WhatsApp responded to Musk’s claims, stating that the lawsuit allegations are “categorically false and absurd.” The company argued that WhatsApp has been end-to-end encrypted using the Signal protocol for a decade, and therefore “your messages cannot be read by anyone other than the sender and recipient.”

    According to WhatsApp’s FAQ, end-to-end encryption is used when users chat with another person using WhatsApp Messenger. The company states that “No one outside of the chat, not even WhatsApp, can read, listen to, or share them.” The FAQ describes messages as secured with a “lock,” with only the recipient and sender having the “special key needed to unlock and read them.”

    These statements describe a threat model in which the platform operator cannot decrypt message contents. The specific reference to the Signal protocol points to the cryptographic framework WhatsApp says it relies on for end-to-end guarantees.

    However, the underlying controversy remains centered on the lawsuit’s allegations. The dispute currently presents a clash between the platform’s stated encryption properties and the lawsuit’s claims about message interception and sharing with third parties.

    The Technical Dimensions of the Dispute

    End-to-end encryption is not merely a feature label; it represents a set of engineering decisions that determine what data is encrypted, where keys reside, and which components can access plaintext. Musk’s assertion that users “can’t trust WhatsApp” and WhatsApp’s response that messages “cannot be read by anyone other than the sender and recipient” map directly onto those engineering questions.

    The mention of third-party involvement (Accenture) points to a common real-world consideration for messaging systems: the boundary between cryptographic processing and operational workflows. If a platform’s end-to-end design truly prevents decryption by the service provider, then any claim that intercepted messages were shared with third parties would suggest either an implementation failure, a misunderstanding of what was intercepted, or a scenario outside the claimed end-to-end scope.

    The precision of WhatsApp’s FAQ language reflects the technical stakes. It claims that even WhatsApp itself cannot read, listen to, or share messages, and that only the “recipient and you” have the keys needed to unlock content. That specificity typically defines measurable behavior: if a platform can be shown to access content, the operational reality would conflict with the stated cryptographic model.
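    The trust boundary described above can be illustrated with a deliberately simplified sketch: two endpoints agree on a shared key via Diffie-Hellman, and the relaying server only ever handles ciphertext. This is a toy for illustration only — the shared-key derivation, the XOR stream cipher, and the lack of authentication are all stand-ins, not a reproduction of WhatsApp's actual Signal-protocol implementation (which uses X25519, double ratcheting, and AEAD ciphers).

```python
import hashlib
import secrets

# Toy sketch of the end-to-end trust boundary. NOT secure cryptography:
# real systems use authenticated key exchange and AEAD ciphers.
P = int(  # 2048-bit MODP prime from RFC 3526, group 14
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def dh_keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, peer_pub):
    """Derive a 32-byte symmetric key from the DH shared secret."""
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(256, "big")).digest()

def toy_encrypt(key, data):
    """XOR data with a SHA-256-derived keystream (symmetric toy cipher)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR stream cipher: same operation both ways

# Alice and Bob exchange only public values; the relay forwards ciphertext.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)
assert k_alice == k_bob  # both endpoints derive the same key independently

ciphertext = toy_encrypt(k_alice, b"hello")  # all the relay ever sees
plaintext = toy_decrypt(k_bob, ciphertext)   # recoverable only at an endpoint
```

    The point of the sketch is structural, not cryptographic: because the private exponents never leave the endpoints, a party that only relays `ciphertext` holds nothing that decrypts it. Any scenario in which a relay shares plaintext would therefore imply key access outside this model — which is exactly the gap the lawsuit's allegations and WhatsApp's FAQ language dispute.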

    Regulatory Scrutiny and Prior Complaints

    WhatsApp has faced scrutiny tied to end-to-end encryption claims previously. A report by Bloomberg earlier this year stated that US law enforcement agencies were investigating allegations raised by a former Meta contractor that the company can access WhatsApp messages despite end-to-end encryption claims. The investigation was said to be led by special agents with the US Department of Commerce.

    Additionally, a whistleblower complaint raising similar concerns about Meta was filed with the US Securities and Exchange Commission in 2024. This pattern suggests that encryption claims have drawn attention from both the courts, via the class action lawsuit, and regulators, via federal investigations.

    For the industry, this indicates that “end-to-end encryption” is increasingly treated as a compliance and trust topic, not only a product feature. Observers may watch whether public disputes and lawsuits lead to technical disclosures, audit results, or court findings that clarify what “intercepted” means in the context of WhatsApp’s claimed Signal-protocol-based encryption.

    In the meantime, Musk’s promotion of X Chat is positioned as a direct alternative for encrypted messaging and calls. The technical details of X Chat’s encryption are not provided in available sources, so the comparison remains at the level of user-facing claims rather than a technical comparison.

    What Comes Next

    The immediate timeline is clear: Musk questioned WhatsApp’s encryption trustworthiness on X, WhatsApp responded by citing the Signal protocol and detailed FAQ language, and the backdrop includes a new class action lawsuit plus earlier reporting about US investigations and a 2024 SEC complaint. The next meaningful developments would be how the lawsuit’s allegations are substantiated and how WhatsApp supports its end-to-end encryption claims in response.

    For technologists and privacy-focused users, the controversy underscores an operational reality: cryptographic assurances are only as credible as the implementation details and evidence presented when those assurances are challenged. The dispute between public claims and legal allegations will likely remain a focal point for how messaging platforms communicate encryption guarantees and how those guarantees are tested in practice.

    Source: mint – technology