Canva’s AI-powered design tool, Magic Layers, was recently found to be replacing the word "Palestine" in user-generated content with a different term, a discovery that has sparked mixed reactions. The incident, uncovered by a user on X (formerly Twitter), underscores the unexpected—and sometimes problematic—behaviors that can emerge from AI systems designed to interpret and modify text within images.
The issue surfaced when a designer noticed that the phrase "cats for Palestine" in their Canva project had been automatically changed to "cats for Ukraine." Magic Layers is intended to separate flat images into editable layers, not to rewrite their contents, yet it appears to have treated the text as something to modify. Notably, other place names such as "Gaza" were unaffected, suggesting a targeted but unintended alteration rather than a broader systemic error.
How the AI tool’s glitch came to light
The discovery was made by user @ros_ie9, who shared a side-by-side comparison of the original and modified designs. The alteration occurred without any explicit input from the user, raising concerns about the tool’s ability to discern context or intent in text-based designs. Canva, a widely used platform for creating social media graphics, presentations, and marketing materials, has not yet detailed the technical mechanism behind the error.
Canva responded swiftly to the issue, acknowledging the problem and assuring users that it had been resolved. In an official statement, the company emphasized its commitment to addressing unintended behaviors in AI-driven features. "We became aware of an issue where Magic Layers was making unprompted changes to text labels in designs," a Canva spokesperson confirmed. "We’ve fixed the issue and are implementing additional safeguards to prevent similar occurrences in the future."
The broader implications of AI in design tools
This incident is not an isolated one. AI tools, while powerful, often operate in ways that can produce surprising or undesirable results. Design platforms like Canva leverage AI to automate repetitive tasks, such as background removal or layout adjustments, but these systems sometimes misfire when interpreting text or symbols. The problem is compounded by the fact that AI models are trained on vast datasets, which may inadvertently encode biases or inaccuracies.
For designers and creators, this raises important questions about control and oversight. AI features are marketed as time-savers, but when they alter content without user consent, they can introduce errors that are difficult to detect. Canva’s case demonstrates the need for robust testing and real-world validation before deploying AI features at scale.
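One possible safeguard is the kind of automated regression check sketched below. It is a minimal, hypothetical illustration rather than anything Canva has described: it assumes the text in a design can be extracted before and after an AI transformation (for example via OCR or the design's own text layers) and simply flags any edits the user did not request.

import difflib

def unrequested_text_changes(before: list[str], after: list[str]) -> list[str]:
    """Compare text extracted from a design before and after an AI edit.

    Returns human-readable descriptions of any words the tool added,
    removed, or substituted without being asked to.
    """
    changes = []
    matcher = difflib.SequenceMatcher(a=before, b=after)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            changes.append(f"replaced {before[i1:i2]} with {after[j1:j2]}")
        elif op == "delete":
            changes.append(f"removed {before[i1:i2]}")
        elif op == "insert":
            changes.append(f"added {after[j1:j2]}")
    return changes

# Example mirroring the reported incident: the label should survive untouched.
original = "cats for Palestine".split()
processed = "cats for Ukraine".split()
for change in unrequested_text_changes(original, processed):
    print("WARNING: unrequested edit:", change)  # replaced ['Palestine'] with ['Ukraine']

A check along these lines, run against a suite of real-world designs before a feature ships, is one way the "robust testing and real-world validation" mentioned above could be made concrete.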
Canva’s response and future steps
Canva’s prompt response to the issue reflects its recognition of the potential risks associated with AI automation. The company has not disclosed whether additional AI features are under review for similar vulnerabilities, but it has committed to ongoing evaluations of its tools. Users of Magic Layers and other AI-enhanced features are advised to review their designs carefully, especially when using text-heavy layouts.
As AI continues to permeate creative workflows, incidents like this serve as a reminder of the importance of transparency and accountability. While AI can unlock new possibilities, it must be paired with human oversight to ensure accuracy and reliability. Canva’s experience offers a cautionary tale for both developers and users of AI-powered tools.
Looking ahead, the tech industry will need to strike a balance between innovation and precision. For designers relying on AI to streamline their work, the lesson is clear: always double-check automated changes, no matter how small they may seem.
AI summary
Canva's Magic Layers AI tool changed the word "Palestine" in designs to "Ukraine" without permission. The company acknowledged the error and has taken steps to fix it.