Brazil’s electoral court has banned AI chatbots from offering voting tips ahead of the upcoming presidential election, citing disinformation risks. However, tests described by Tech-Economic Times show that leading chatbots—including ChatGPT, Grok, and Gemini—can still produce candidate rankings, raising questions about how election-related information can be shaped even when explicit “voting advice” is restricted. (See Tech-Economic Times for the underlying report.)
What Brazil’s electoral court targeted
According to Tech-Economic Times, Brazil’s electoral court issued a ban on AI chatbots providing voting tips for the upcoming presidential election. The stated rationale is to prevent disinformation, particularly in a political environment described as highly polarized.
From a technology standpoint, the key issue is not whether chatbots can answer questions at all, but how their responses are framed and what kinds of outputs they generate. A “voting tips” restriction is typically aimed at steering user behavior—something that can be done through recommendations, persuasion, or guidance on how to vote. The court’s approach, as summarized by Tech-Economic Times, is therefore focused on output categories that could be used to influence voter decisions.
Why candidate ranking can still matter
Even with the ban in place, Tech-Economic Times reports that tests found leading chatbots continue to rank candidates. The article specifically names ChatGPT, Grok, and Gemini as examples of models that can still produce comparative results among candidates.
This distinction matters for how election integrity is handled in AI systems. Candidate ranking can function as a form of influence even if it is not labeled as “voting tips.” In practice, ranking outputs can be interpreted as guidance, especially when users treat the chatbot’s ordering as a proxy for credibility or suitability. While Tech-Economic Times does not provide the exact prompt formats or the precise ranking behavior, the report’s emphasis on “continue to rank candidates” suggests that the models’ underlying capabilities—summarizing, comparing, and generating structured outputs—remain available.
Tech-Economic Times also frames the concern as one of biased or incorrect information influencing voters. In technical terms, this points to two overlapping risks: (1) bias that may be introduced by training data, model behavior, or response templates, and (2) incorrectness that can arise when models generate or infer information that is incomplete, outdated, or wrong. The report’s warning implies that even when a system is not explicitly giving “tips,” it may still generate content that users treat as decision-relevant.
From rules to model behavior: the enforcement gap
The Tech-Economic Times summary highlights a practical compliance challenge: a ban on one class of outputs may not automatically eliminate other classes of influence. If chatbots are prevented from offering direct recommendations, they may still respond to election-related queries by producing alternatives such as comparisons, rankings, or summaries. Those outputs can still be used to shape perceptions.
In that sense, the report suggests an enforcement gap between what regulators try to constrain (explicit voting advice) and what models can still generate (structured candidate comparisons). Observers may watch for how enforcement works in practice: whether systems are blocked entirely, whether they are required to refuse certain categories of prompts, or whether developers adjust model behavior to avoid ranking-style outputs.
Because Tech-Economic Times focuses on the outcome of tests rather than on the technical mechanism of compliance, the article does not specify what changes—if any—were made by chatbot providers after the ban. That limitation is important: it means the report is describing a behavioral persistence problem rather than documenting a particular technical fix.
Implications for AI systems used around elections
Tech-Economic Times describes the political context as highly polarized and ties it to the risk of disinformation. For AI builders and operators, the broader implication is that election-related restrictions likely need to be designed around output behavior, not just around specific wording like “voting tips.” If a model can still rank candidates, then policies may need to address comparative and evaluative response modes—especially those that can be interpreted as endorsement.
At the same time, the report’s mention of multiple major chatbots—ChatGPT, Grok, and Gemini—suggests that this is not isolated to a single vendor or model family. This could indicate a systemic challenge for generative AI in election contexts: when users ask for candidate comparisons, the model’s general-purpose design may naturally produce ordered lists or comparative judgments unless it is specifically constrained.
Tech-Economic Times does not detail how the tests were conducted, what exact prompts were used, or what guardrails (if any) were expected to prevent candidate ranking. Still, the outcome it reports—continued ranking despite a ban—suggests that election rules may need more granular definitions of prohibited content and more robust controls to ensure that disallowed influence does not reappear in different forms.
Source: Tech-Economic Times