A false review posted to Google Maps or Google Search can reach thousands of people within hours and stay there indefinitely. The person who wrote it may be anonymous, may be unreachable, or may have no assets worth pursuing. The obvious target for a business owner who wants the review removed and compensation for the damage it has caused is the platform that published it and kept it there. Until recently, Canadian courts had largely shielded online platforms from that kind of liability. That landscape is beginning to shift.
In an earlier post, we examined the 2022 Ontario decision in Thorpe v. Boakye, where the court refused to dismiss the plaintiff daycare's defamation claim against Google at the summary judgment stage, leaving open the question of whether an online platform can be held liable as a publisher for defamatory reviews it declines to remove. That post noted the case as a significant departure from the previously prevailing view that platforms like Google were passive intermediaries, and contrasted the Canadian approach with the broad immunity afforded to platforms under section 230 of the United States Communications Decency Act.
Since then, the law has moved. A 2025 Ontario Small Claims Court decision found Google liable as a publisher in materially similar circumstances. The Quebec Superior Court awarded $500,000 against Google in an even more egregious case of platform-assisted defamation. This article brings the thread forward, examining what those decisions add to the framework and what businesses dealing with defamatory online content can draw from the developing case law.
The Publication Requirement and How It Applies Online
To succeed in a defamation claim, a plaintiff must establish that the defamatory words were published, meaning communicated to at least one person other than the plaintiff. When defamatory material is posted online, publication takes place when a third party downloads or views the content. It can be inferred on a balance of probabilities where, for example, the post appeared on an actively used platform, attracted comments or likes, or showed a measurable number of views.
The mode of publication is irrelevant to whether publication has occurred, though it is relevant to the scale of damages. A post accessible by millions of people through a major search engine carries different consequences than a message sent to a small private group.
The question of who bears responsibility for online publication is where the law has been most actively developing. Under Canadian defamation law, a party who participates in the publication of defamatory words can be liable, even if they did not author those words. But the courts have developed an important qualification: a party whose involvement in publication is purely passive and mechanical is treated differently from one who participates in the content itself.
The Passive Intermediary Defence
The defence of innocent dissemination protects parties whose involvement in publication is purely administrative or mechanical. Retailers, distributors, printers, and internet service providers can all invoke this defence where they had no knowledge of the defamatory content, were not put on notice of circumstances suggesting it, and were not negligent in failing to detect it.
The boundaries of this protection were extensively examined by the Supreme Court of Canada in Crookes v. Newton. The Court held, by a majority, that publishing a hyperlink to a website that contains defamatory material does not, by itself, constitute publication of that content. A hyperlink is more like a footnote than a direct assertion: it points elsewhere without reproducing or endorsing what is found there. The party who creates a hyperlink can be an innocent intermediary in relation to the content it references, provided they had no knowledge of the defamatory nature of that content.
Crookes was widely understood to offer significant protection to online platforms. If even a knowing hyperlink to defamatory content did not necessarily constitute publication, the argument ran, a platform that merely hosts third-party content over which it exercises no editorial control should be equally protected. For several years, that interpretation held.
Thorpe v. Boakye: A Different Question
The 2022 Ontario Superior Court decision in Thorpe v. Boakye raised a question Crookes had not answered: what happens when the platform does not merely link to content, but hosts it, controls whether it stays up or comes down, and has been given specific notice that the content is defamatory?
The facts were straightforward. Yunaland Inc. is a Brampton daycare operated by Collette Thorpe. A former customer, Audrey Boakye, posted a false review on Google Local Reviews stating that the daycare’s owner was a fraud and subject to a police investigation. Her husband and others, who had never been customers, posted further false reviews. The reviews appeared across Google Search, Google Maps, and Local Reviews. When Yunaland demanded Google remove them, Google’s lawyers responded that Google would consider removing the reviews voluntarily if Yunaland obtained a court order declaring them defamatory.
Yunaland eventually obtained that declaration and the reviews were removed. Google then moved for summary judgment, arguing that under Canadian law it was a passive intermediary in relation to third-party content and could not be liable as a publisher. It relied on Crookes to argue that hosting the reviews did not make it their publisher.
Justice Price dismissed Google’s motion. The judge drew a meaningful distinction between hyperlinks and hosted content. Hyperlinks do not give a platform operator control over the third-party content to which they point. Hosted reviews are different: Google created the platform on which the reviews were posted, actively used business information to surface those reviews in search results and maps, and retained full control over whether each review remained visible or was taken down. The Crookes passive intermediary logic, the court held, should not be extended to shield platform operators from liability for content they control.
The court also noted that Google’s position, under which it would remove admittedly defamatory content only after the plaintiff had obtained a court order, raised a genuine question about whether Google had discharged any duty of reasonable care it might owe. Whether Google should be regarded as a passive intermediary, or as something else entirely, was an issue that depended on factual questions that could not be resolved on a summary judgment motion. The case would have to go to trial.
Thorpe v. Boakye did not decide that Google was liable. It decided that the question was a live one, and that the traditional passive intermediary framework was not adequate to resolve it at the summary judgment stage. That, in itself, was significant.
Crookes v. Newton held that hyperlinks, by themselves, are not publications of the content they reference. Thorpe v. Boakye held that this logic does not extend to hosted review platforms, where the platform operator creates the infrastructure for the content, controls whether it remains published, and retains the ability to remove it upon notice. Control over content, once the platform has received specific notice of its defamatory nature, is the critical distinguishing factor.
Jeffery v. Almusslat: Google Found Liable
In 2025, a Small Claims Court decision from Toronto went further than Thorpe. In Jeffery v. Almusslat, an immigration lawyer named Matthew Jeffery was targeted by a former client who posted a series of false and damaging one-star reviews on Google Local Reviews, some under fictitious names. The reviews stated, among other things, that the plaintiff’s five-star reviews were fake, that he treated clients merely as revenue sources, and that he intentionally produced poor results to generate further fees. Over time, the Almusslats posted more than two dozen reviews.
The Almusslats’ defence was struck, and the case proceeded against Google alone. Google’s position was the same one it had taken in Thorpe: it was a passive intermediary, it had not authored the content, and it bore no liability.
Deputy Judge Timms rejected that position. The court identified the critical moment at which passive hosting becomes active involvement: once Google’s own legal and policy team receives a formal complaint, reviews it, and decides whether the review stays or comes down, Google is no longer neutral. A member of the team makes a deliberate decision about whether a specific piece of content will remain on the platform. That is not passive intermediation; it is content control. At that point, the court held, Google becomes a publisher.
The court also examined Google’s approach to anonymous and fictitious posters. Several of the reviews in the case were posted under clearly fabricated names. Google’s evidence was that posts by anonymous users are treated the same as posts by identified ones, and that it was impractical to verify each poster’s identity. Deputy Judge Timms found this inadequate: if the goal of the platform is to provide genuine reviews of customer experiences, there has to be a higher standard for managing content posted by accounts that are demonstrably fictitious.
On the fair comment defence, the court accepted that customer reviews are expressions on a matter of public interest, an approach consistent with the Ontario Court of Appeal’s earlier observations in New Dermamed Inc. v. Sulaiman. However, the defence of fair comment requires that the comment be one that a person could honestly make on true facts. Reviews fabricated to leverage a fee dispute, posted under fictional names, are not protected commentary on matters of public interest. They are instrumental misuse of a public forum.
General damages of $15,000 were awarded against Google and the Almusslats jointly and severally. The court’s reasoning on joint and several liability was direct: Google had been given notice, had acknowledged the content was inappropriate, and had nonetheless made a conscious decision to allow the posts to remain. That choice made it more than a passive conduit. Aggravated damages of an additional $7,500 were assessed against the Almusslats alone, given the deliberate and instrumentalized nature of their campaign.
The Quebec Approach: A.B. c. Google
While Ontario courts were working through the intermediary question, Quebec’s Superior Court addressed a more extreme case of platform-assisted defamation in A.B. c. Google. The plaintiff was accused on a website of a conviction for a particularly serious crime he had not committed. When he searched his own name, a Google hyperlink to the defamatory post appeared at the top of the results.
Google was found to have handled the situation in a way that revealed active engagement, not passive intermediation. It initially told the plaintiff it could do nothing. It then removed the hyperlink from the Canadian version of Google Search, only to restore the link later, relying on what the court found was an erroneous interpretation of Crookes v. Newton. The Superior Court of Quebec awarded $500,000 in moral damages and issued an injunction requiring removal of the post for Quebec users.
The court’s reasoning on the floodgates concern, often invoked by platforms in this context, is instructive. The court was not opening the door to unlimited litigation against internet intermediaries. It was responding to a situation in which Google had itself recognized the post was illicit, had removed the link, and had then restored it through what amounted to a deliberate decision. In those circumstances, the platform could not credibly claim the protection of a passive intermediary.
The American Contrast
Canadian businesses dealing with this issue are sometimes told, incorrectly, that Google is protected by the United States Communications Decency Act, specifically section 230, which broadly immunizes online platforms from liability for user-generated content. That immunity applies in American proceedings. It does not govern claims brought in Canadian courts under Canadian law. The Thorpe court expressly addressed and rejected Google’s attempt to invoke section 230 as a defence to a Canadian action, noting that Canadian law does not provide equivalent statutory immunity to internet platforms.
This is a meaningful distinction. The absence of a Canadian equivalent to section 230 means that Canadian courts are free to develop their own framework, which they are now doing, and that framework is moving toward holding platforms responsible in circumstances where they have notice, control, and have chosen not to act.
Where the Law Now Stands
The pattern across these decisions points toward a developing framework, though it remains genuinely unsettled. Several propositions are emerging:
The mere fact that a platform hosts third-party content does not make it a publisher in the full defamation sense. Crookes remains good law for the proposition that passive involvement in the mechanics of online communication does not automatically import liability.
The character of a platform’s involvement matters. A platform that creates the infrastructure for reviews, surfaces them in search results, and retains full control over whether any given review stays up or comes down is not in the same position as a neutral conduit. The degree of control is relevant to whether the passive intermediary defence is available.
Notice is a critical inflection point. Once a platform has received specific and substantiated notice that particular content is defamatory, the basis for treating it as a passive intermediary is significantly weakened. At that point, the platform has made a choice, and the legal consequences of that choice are a live question.
The volume of content a platform handles is not a complete answer to a targeted complaint. Courts in both Ontario and Quebec have been skeptical of the argument that scale alone excuses inaction when a specific complainant has identified specific defamatory content, provided sworn evidence, and given the platform every opportunity to act.
What This Means for Businesses Dealing With Defamatory Reviews
For a business targeted by false online reviews, the practical implications of this developing framework are real but still limited. Our defamation practice regularly advises on these situations, and what follows reflects the key considerations that arise in practice.
You still need to identify and pursue the original poster. The platform cases to date have involved plaintiffs who also sued the individuals behind the reviews. Pursuing only the platform, without making serious efforts to identify and proceed against the actual poster, is likely to be a less effective strategy and may affect the damages available against the platform.
Notice to the platform, in writing and with specificity, matters significantly. The legal analysis in both Thorpe and Jeffery turned in part on what the platform knew, when it knew it, and what it chose to do. A formal written complaint identifying the specific content, explaining why it is defamatory, and requesting removal creates the evidentiary foundation that subsequent litigation may require. If the platform declines to act after receiving that notice, its passive intermediary defence becomes materially weaker.
Court orders declaring content defamatory remain valuable, both to compel the poster to remove the material and, as the Thorpe litigation illustrated, to eliminate the platform’s claimed uncertainty about whether the content is truly defamatory. Google’s policy of requiring court orders before voluntary removal is itself a potential source of liability, as the Jeffery court found.
The anonymity of posters is a growing concern and the courts are beginning to address it. Google’s practice of treating posts by fictitious or anonymous users the same as those by verified users, without adjusting its removal threshold accordingly, was treated in Jeffery as a meaningful aggravating factor. Norwich Orders, which can be used to compel disclosure of user identity from a platform, remain an important tool alongside direct claims against the platform.
Whether you are a business dealing with false reviews that a platform is refusing to remove, or you need to identify an anonymous poster and pursue them directly, our defamation practice advises on all aspects of online defamation in Ontario, including platform liability, Norwich Orders, and the notice requirements under the Libel and Slander Act. Contact Grigoras Law to discuss your situation.
Conclusion
Canadian law on platform liability for third-party defamatory content is in active development. The Crookes passive intermediary principle remains part of the framework, but its scope is being progressively narrowed in the context of hosted review platforms where the operator exercises genuine control over content and has received specific notice of its defamatory character.
Thorpe v. Boakye opened the door by refusing to extend Crookes to that context. Jeffery v. Almusslat walked through it by finding Google liable as a publisher once its own team had reviewed a complaint and chosen to leave the content in place. A.B. c. Google in Quebec added a $500,000 damages award where the platform’s conduct went further still.
What remains to be seen is how these principles will be applied at trial in Thorpe itself, and whether higher courts will affirm and refine what the lower courts have been building. The trajectory, however, is clear. Platforms that host content, that profit from the engagement it generates, that have the technical ability to remove it, and that have received clear notice of its defamatory character are going to find it increasingly difficult to shelter behind a passive intermediary defence designed for a different era of the internet.