By 'Damola Adediji
Policy researchers and governments worldwide have continued to express deep concerns about Big Tech firms and their extensive collection of personal digital data, which affects how markets operate and compete. In a paper I coauthored with Professor Kean Birch of York University, we dove into these policy materials, using NVivo to explore recurring themes across various regions. Published in the journal Big Data & Society, our work also sheds light on how the collection of personal data is portrayed in recent reviews of competition laws, policies, and regulations, and on the implications for evolving competition policy.
Big Tech firms are powerful political-economic actors within the economy, especially when it comes to the mass collection and use of digital personal data. As Birch notes, in a data-driven digital economy they can shape and dominate markets by structurally and strategically undermining competition through their constructed platforms—data-driven ecosystems that appear separate from the market. This capacity gives Big Tech firms structural and techno-economic power over their competitors, making it more important than ever for competition law to step up its game. Through a thematic policy analysis, our research reveals a series of key issues that policymakers around the world are identifying as important structural and techno-economic implications of Big Tech for competition.
A significant part of Big Tech firms’ market power lies in economies of scale, which can create tough barriers for new competitors to break through. For example, as Fay points out, the high costs needed to start a business can be a genuine hurdle for newcomers, while established companies can handle regulatory costs much more comfortably. Additionally, the costs involved in switching from one provider to another can make users hesitant to change. As highlighted by Stucke, the digital economy has sped up the impact of these economies of scale, in part because personal data complicates how we understand market definitions in competition policy. The basic assumptions that guide competition policy often use price theory to define markets and identify anti-competitive behaviour. These competition frameworks therefore struggle to address situations involving seemingly ‘free’ goods (like search engines) or the trade of these free goods and services for personal data (Eben; Fourcade and Kluttz).
Meanwhile, the techno-economic side of the power held by these Big Tech firms includes both the strategic and responsive growth of relationships between technology and political economy. This growth is aimed at connecting a range of stakeholders, including governments, businesses, users, and academia, with the infrastructures and platforms created by Big Tech.
Scholars such as Pistor have highlighted the significance of the network effect as a key structural implication of Big Tech for competition policy. These companies have established themselves as intermediaries in building multi-sided market platforms. Network effects result from how the number of users in a network (e.g., social media platforms, search engines) increases the usefulness of the network to its users, thereby raising its attractiveness for new users. Consequently, as the UK Competition and Markets Authority noted in 2020, network effects lead to a self-reinforcing cycle in which users migrate to the fastest-growing network. With this network effect, Big Tech companies are amassing a startling amount of data, providing them with an enormous competitive advantage, creating barriers to rivals entering or thriving in relevant markets, and allowing the incumbent digital platform providers to expand into adjacent markets.
The second structural effect is connected to but distinct from the first: investments made by Big Tech firms mean they can scale up with lower-than-usual costs. As the UK's 2019 Cairncross Review put it, ‘Both the scale and the data that the platforms possess on consumers make it hard for other players, including publishers, to compete.’ Economies of scale have provided significant benefits for Big Tech firms as they have grown quickly to dominate their markets. This is clearly becoming a cause for concern amongst policymakers worldwide (as seen in, e.g., OECD 2016, G7 2021, G7 2022, OECD 2022). The main negative effect of such economies of scale is the loss of market contestability: there are significant barriers to entry into digital markets because Big Tech incumbents benefit from first-mover technology advantages; there are also significant disparities in market information; and then there are disparities in the capacity to adjust prices because incumbents benefit from greater information (e.g., data collection) and higher processing capacity (e.g., computing infrastructure).
The third structural issue identified in our paper is the gatekeeping role of these Big Tech companies in our societies and economies. Policymakers have thus noted that a few digital gatekeepers hold the keys to the crucial digital infrastructure that impacts our everyday lives—whether it's staying in touch with friends, finding job opportunities, or accessing information. Gatekeepers can control access to the users and their data, which can hold significant value for other firms wishing to connect with consumers. The fact that this vital digital infrastructure, including personal data, is largely provided by Big Tech, makes it tough for startups and competitors to enter the market.
The first techno-economic issue we identify is the capacity of Big Tech to enter adjacent markets through data collection. As the Australian Competition and Consumer Commission pointed out in 2019, ‘The extensive amount of data available to Google and Facebook provide these platforms with a competitive advantage and assist with entry into related markets.’ Data-driven business models enable Big Tech to enter adjacent markets through the modular extension of technical standards and terms and conditions (e.g., APIs, SDKs, plugins).
The second techno-economic issue concerns the spread of market power through the creation of digital ecosystems as ‘walled gardens.’ An ecosystem is more than a platform: it is the configuration of technical devices, applications and software, platforms, users and developers, payment systems, terms and conditions, and other legal rights and claims and standards (see: Autoriteit Consument & Markt, 2019). As explained by the Japanese Fair Trade Commission, through this ecosystem, end-users get locked in, reducing the opportunity for competition, even when products and services (e.g., Gmail, Facebook) are notionally ‘free.’
The third techno-economic issue follows the second: Big Tech reinforces its market power by creating ‘enclaves’ in which they govern economic activities. These enclaves are distinct from markets; they sit inside wider markets, as Birch explains, but gatekeepers can also establish the internal ‘rules of the game’ and control market information. Policymakers have highlighted various relevant business strategies and practices—including the setting of defaults, cross-selling, and self-preferencing—that reduce competition within these techno-economic enclaves.
The mass collection and use of personal data by Big Tech therefore has structural and techno-economic implications for competition policy—implications with which policymakers around the world are now grappling.
A key consideration in these policy materials is the techno-economic dimension of data-driven leverage. Policymakers repeatedly observe that Big Tech enjoys a competitive edge, primarily because of its vast personal data reserves and its ability to limit other companies' access to this valuable information. Although any digital firm can gather personal data, having substantial data holdings boosts innovation potential and offers a notable business advantage. This concern has been underscored by the UK Secretary of State for Digital, Culture, Media & Sport, along with the Secretary of State for Business, Energy, and Industrial Strategy.
Already concentrated digital markets are likely to concentrate further without concerted action to change competition policy. Our paper demonstrates the growing awareness among policymakers of the important effects of Big Tech and personal data collection on competition and market power. Of course, there's also a looming concern that the winner-takes-all dynamics fuelled by data control could influence the future development of important technologies like artificial intelligence, which significantly depend on large training datasets.
'Damola Adediji is a Visiting Researcher with IP Osgoode and a Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.
The post Identifying the implications of Big Tech and digital personal data for competition policy appeared first on IPOsgoode.
IP Osgoode and the Intellectual Property Institute of Canada (IPIC) are thrilled to announce the winners of the 2024 edition of Canada’s IP Writing Challenge.
In the Law Student category, Pasha Kulinich won for his entry, “Shortcomings of the Trademarks Act in the Frontline against Counterfeit Goods”.
Pasha is a 3L student at Queen's University's Faculty of Law.
In the Graduate Student category, Nick Kawar won for his entry, “AI & IP – Anticipating Obvious Issues for the Pharmaceutical Drug Industry”.
Nick recently graduated from Osgoode's Professional LL.M Program in Intellectual Property Law and is an Associate at Fineberg Ramamoorthy LLP.
The winners will be receiving a prize of $1000 and, in addition to having their winning article showcased here on the IPilogue, the article will be considered for publication in the Canadian Intellectual Property Review (CIPR) or the Intellectual Property Journal (IPJ).
We would like to thank our esteemed intellectual property experts who served as judges for this year’s Writing Challenge: Daniel R. Bereskin, QC, Ron Dimock, and our own Professor Ikechi Mgbeoji.
But above all, on behalf of the judges and IPIC, we thank all of the authors who submitted their excellent papers for consideration. We are grateful for the opportunity to support a vibrant public policy discussion on all facets of intellectual property law and technology in Canada.
Stay tuned for more information about these award-winning papers!
The post Announcing the Winners of Canada's IP Writing Challenge 2024 appeared first on IPOsgoode.
Artificial intelligence systems often “give the vibe” of complete automated processing without human involvement. However, as Dr. Tesh Dagne reminds us, upon a closer “vibe check” there are layers of unseen and under-appreciated human inputs, efforts, and labour involved. The efforts of those unseen human hands are, in fact, the engine of AI innovation.
Dr. Dagne is the Ontario Research Chair in Governing Artificial Intelligence and an Associate Professor at York University’s new Markham campus in the School of Public Policy & Administration. He also teaches Property Law at Osgoode Hall Law School, where he is an Affiliated Researcher with IP Osgoode. His current project, which he recently presented at the IP Scholars Africa conference at the University of Cape Town, highlights how copyright enables the proactive exploitation of digital workers’ contributions as inputs to AI training or, in some cases, AI-assisted outputs.
By bringing to the fore the roles of digital workers, Dagne hopes to unearth the collaborative creation that goes into the AI production chain and feeds into the AI output. His paper, “Unseen Hands, Invisible Rights: Unmasking Digital Workers in the Shadows of AI Innovation and Implications for the Future of Copyright Law”, is soon to be published in a forthcoming volume, IP’s Futures: Exploring the Global Landscape of Intellectual Property Law and Policy (Ottawa UP, 2025), which Dagne is co-editing with Alexandra Mogyoros and Graham Reynolds. His chapter probes the future of copyright law, attempting to turn the focus of copyright to collaborative authorship. This move, Dagne argues, could respond to demands for the fair allocation of rights between digital workers, as authors or joint authors in some cases, and AI designers as exploiters of digital works.
As Karen Hao puts it, “[AI] doesn’t run on magic pixie dust… [AI training] is a job that actually takes quite a bit of creativity, insight, and judgment.” Such ingenuity involves the preparation of data works for the datasets used to train and build AI technologies, which consists of a number of decisions as to the kind of data to collect, curate, clean, label, abstract, index, etc. The process of dataset development starts with formulating the problem, which is the conceptualization of the machine learning task by making the problems “into questions that data science can answer”. The task conceptualization is typically the responsibility of the AI designer, which may be an AI company like OpenAI or Anthropic, for example, or a platform company like Microsoft, Meta, or Amazon. After the conceptualization process comes the data collection, refining, and measuring stage. Dagne’s focus is on the “digital workers” who enter the picture at this stage in the AI production process.
According to Tubaro et al., these digital workers contribute to the training process of AI systems in three steps: generating and annotating data (AI preparation), verifying model output (AI verification), and directly mimicking model behaviour to produce a service (AI impersonation). They range “from higher-skilled, ‘macro-task’ […] workers [who] offer their services as graphic designers, computer programmers, statisticians, translators, and other professional services, to [those engaged in] ‘micro-task’ [work] which typically involve clerical tasks that can be completed quickly and require less specialized skills.” (Berg et al.) As described by Le Ludec et al., “complex projects are broken down into smaller, easily accomplished tasks, which can then be distributed to a large number of workers.” Micro-task activities mainly involve the AI preparation aspect of AI training processes but can also include the AI verification and AI impersonation steps in AI training.
Much of the debate around copyright and AI has focused on whether using the underlying work of which inputs are constituted (the images, texts, musical works and other subject matter) for unauthorized learning constitutes copyright infringement. However, Dagne’s focus is on the copyright that can subsist over collected data, as we see in some US and Canadian cases, and whether digital workers’ activities in the preparation of training data sets in the AI pipeline could itself give rise to a copyright interest. This question can be answered by examining the nature of digital workers’ contributions to the tasks assigned to them and the ownership of copyright under the contractual agreements that digital workers sign with platforms.
Digital workers in the AI production value chain collect raw data and help add extra meaning by associating each piece of data with relevant attributive tags. Although some have argued that this attributive task is a mundane exercise that could ultimately be automated, others like Ekbia and Nardi have contended that tasks such as attribution will always be assigned to humans because of their capacity to recognize and classify data. Indeed, human intervention is now in demand to recognize the nuances and sophisticated details of specific data. As noted by D’Agostino et al., an example of such demand is in the medical field, where an understanding of scientific vocabulary is required.
From a doctrinal perspective, the copyright question is whether the contribution of digital workers described above meets the threshold of originality—which is defined, in Canadian law, by the Supreme Court of Canada’s ruling in CCH, and requires more than trivial skill and judgment in the selection or arrangement of data. If so, we might ask whether recognizing the copyright status of such contributions could address these workers' invisibility. Even if, on account of originality, the tasks executed by digital workers amount to authorship, of course such authorship does not automatically translate into ownership. The ownership of the creative tasks conducted by digital workers as part of the collaborative venture is determined either by the workers’ status as employees or otherwise by contract—which means that it is determined in the context of significant power asymmetries and the routine exploitation of digital workers.
If copyright entrenches the inequities of an asymmetrical situation—by ensuring that the collective effort of digital workers in compiling essential datasets for AI training and AI development remains unseen and undervalued—Dagne thinks the time has come to confront its complicity. He suggests that, spurred by the arrival of AI, the copyright system needs to restructure the relationship between authors-as-(data)workers and corporate proprietors in pursuit of greater fairness.
‘Damola Adediji is a Visiting Researcher with IP Osgoode and Doctoral Candidate with the Centre for Law, Technology & Society at the University of Ottawa.
The post Dr. Tesh Dagne Shines a Light on the Unseen Hands and Invisible (Copy)Rights Behind AI Systems appeared first on IPOsgoode.
Throughout her doctoral studies, Amanda Turnbull has grappled with the legal consequences of “machines doing things with words.” Her timely dissertation, Law, Language, and Authority: The Algorithmic Turn, completed in August 2024, offers a measured yet unflinching reflection on how artificial intelligence is transforming society and the law. Speaking over Zoom from her home in New Zealand, where she is now a lecturer in Cyberlaw at the University of Waikato’s Te Piringa Faculty of Law, Turnbull shared some insights from her research.
"With AI, there’s an algorithm at the end of the hammer.”
At the heart of Turnbull’s thesis is her contention that AI is “more than just a tool.” When we think of a tool, Turnbull suggests, we usually think of something like a hammer. There’s always a person at the end of the hammer, and they’re responsible for what the hammer does. In the context of algorithmic systems, commentators have proposed different alternatives for who that responsible party might be, including the programmer, the end user, and the company that owns the technology. But these approaches obscure the true novelty—and danger—of AI. With AI, Turnbull explained, “there’s an algorithm at the end of the hammer.”
Turnbull’s focus on algorithmically generated language reflects her thesis’s remarkable origins at the University of Ottawa’s Department of English. Although her original supervisor, the late Professor Ian Kerr (Canada Research Chair in Ethics, Law & Technology), soon recognized that it belonged in a faculty of law, Turnbull’s dissertation maintains its indebtedness to mid-century philosopher of language, JL Austin—who, Turnbull was surprised to learn, was a close friend of the legal theorist HLA Hart. Austin’s “speech act theory” emphasized what words do in addition to what they mean. Adapting this framework to contemporary technology, Turnbull is less interested in what a generative AI like ChatGPT says than the difference it makes that a non-human actor says it.
The first of three “pillars” of Turnbull’s dissertation thus explores the consequences of AI’s participation in writing literary works. To be clear, “there is no such thing as an AI author” according to Turnbull—but that doesn’t mean AI should have no legally cognizable role at all. Drawing on her early career as a classical flautist, Turnbull recognized that generative AI’s imitative reproduction of human-authored texts in its training data isn’t so different from the work of human artists. In her words, “there’s an amount of imitation that necessarily occurs when you’re being creative.”
Unexpectedly, Turnbull found inspiration in the “spectrum of authoring” developed by Saint Bonaventure in the 13th century, long before the modern notion of authorship was developed. Generative AI, she asserts, resembles Bonaventure’s “commentator,” a mid-point between an author and a mere scribe, who clarifies and expands on pre-existing texts. By referring to generative AI as a commentator or “expositor,” lawmakers can reserve copyright for human authors without turning a blind eye to the authority embodied in algorithmically generated language.
That authority is at the centre of the second and third “pillars” of Turnbull’s research, which examine the legal implications of algorithmic contracting. As coined by Lauren Henry Scholz in 2017, an algorithmic contract is a contract in which the main terms and conditions are drafted not by human actors, but by computer systems.
Key for Turnbull is how the systems behind algorithmic contracts exercise “derivative” authority without legal intent. For this reason, algorithmic contracting is “in no way” similar to earlier technologies such as click-wrap agreements, standard form contracts, or the archetypal pen and paper. In other words, there is no “functional equivalence” between algorithmic contracting and other platforms. Courts should therefore reconsider both the notion of technological neutrality and the application of intent-based contract doctrines, including the doctrine of unconscionability recently revived by the Supreme Court of Canada in Uber v Heller.
In the third “pillar” of her thesis, Turnbull discusses how unconstrained algorithmic contracting creates the conditions for technology-facilitated sexual violence. She focused on Uber and the instances of sexual violence involving drivers and passengers documented in its 2019 safety report. Sadly, Turnbull described this chapter as “the easiest to write,” since it quickly “became obvious that this is a new way of exerting harm.” Yet the solutions to these problems are far from straightforward. In Uber’s case, the issue permeates the firm’s corporate culture and overall attitude toward innovation, she contends, which has failed to truly consider “the whole web of entanglements” impacting algorithmic language.
Ultimately, dealing fairly with AI will require “extraordinary ways of thinking” on the part of courts and regulators. But Turnbull is confident the law can adapt. The entire law of contracts and copyright, for example, can be seen as areas that have constantly adapted to new technologies. By approaching the algorithmic turn with both bravery and nuance, courts can learn to recognize AI as something that’s more than a tool, but no substitute for genuine human authority and intent.
Going forward, Turnbull is keen to use her dissertation, which was supervised by IP Osgoode Director Carys Craig, as a basis for further explorations of technology-facilitated gender-based violence such as platform violence and “onlife” harm—a term coined by Mireille Hildebrandt to describe the intersection between experiences online and in ‘real life.’ At the same time, Turnbull is interested in how algorithms have played a positive role in certain legal contexts. Although, as she says, “we’re hot to jump on technology and focus on the negatives,” in a forthcoming article on the 1999 Canada-US Pacific Salmon Agreement, she and co-author Donald McRae will explore how, “in this case, the algorithm solved the dispute.” Turnbull also plans to publish her dissertation as a book and to return to another book she began writing even before beginning her PhD—a novel that is, aptly, about Austin, Hart, and the father of computer science, Alan Turing…
John Nyman is a student at Osgoode Hall Law School (JD '26) and an IP Osgoode JD Research Fellow.
The post Osgoode PhD Amanda Turnbull Investigates How Algorithms Do Things with Words appeared first on IPOsgoode.
Govind Kumar Chaturvedi is an IPilogue Writer and an LLM graduate from Osgoode Hall Law School.
I sat down with IP lawyer Ankit Sahni to chat about how he registered Suryast in Canada. Mr. Sahni told me that he had been inspired by the DABUS project led by Ryan Abbott to take on this intellectual property legal experiment. I wanted to learn more about his A.I. and his legal reasoning.
Ankit shared that his A.I. tool was named “Raghav”. A team of software developers built the tool, and the A.I. was assigned to him. Raghav works through a technique called neural artistic style transfer, which is inspired by the biological neurons of the nervous system. Just as a biological neuron takes in several incoming signals and creates a resulting signal from those inputs, an artificial neuron takes inputs and produces an output, and many artificial neurons form a layer of a neural network. The input can be text, descriptive values, etc., and the output layer can be a label predicting a category like ‘dog’ or ‘house.’ The user then sees two fields, one for the image supplying the style and one for the image supplying the content. In this case, Sahni chose Van Gogh’s The Starry Night as the style for Suryast. The A.I. had already been trained on data sets of different painters’ work; it used this training to make the new image, and it was advanced enough to know where to place colours and structures in the painting to mimic Van Gogh’s original work.
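The artificial-neuron idea Sahni describes can be illustrated with a toy sketch in Python. This is only a didactic illustration of the "many incoming signals, one outgoing signal" analogy above, not Raghav's actual implementation (real neural style transfer optimizes feature statistics inside a deep network); all names and numbers here are made up for the example.

```python
import math

def neuron(inputs, weights, bias):
    """Toy artificial neuron: weight and sum the incoming signals,
    then squash the total into a single outgoing signal (sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer is simply many neurons reading the same inputs,
    each mixing them with its own weights."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Three incoming signals feeding a two-neuron layer.
signals = [0.5, -1.0, 2.0]
weights = [[0.4, 0.3, 0.2],   # neuron 1's mix of the signals
           [-0.6, 0.1, 0.5]]  # neuron 2's mix of the signals
out = layer(signals, weights, biases=[0.0, 0.1])
```

Stacking such layers, with the final layer's outputs interpreted as category labels or image features, is the basic structure of the neural networks the interview describes.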
According to Sahni, Raghav chooses and creates the brush strokes and colour palette, blurring the line between his contributions and the machine’s. Sahni contributed the style and inputs, so the final product is a mixture of both his and Raghav’s work.
I was intrigued by whether an A.I. could be considered an author under the laws of Canada. Currently, the Copyright Act is silent on the issue. Jurisprudence in cases like Setana Sport Limited v 2049630 Ontario Inc. has stated that non-juristic persons cannot be authors, since the term of protection is measured by the author’s lifetime and the author must therefore be human. However, by co-authoring Suryast with the AI, Sahni arguably met the requirements for authorship, as it was an AI-assisted work. His creativity and skill were also present in the final work, and, as he said, no line could be drawn between his contribution and that of the AI, so the work qualified for copyright protection. I recalled that the Copyright Act recognises joint ownership under section 2 as a ‘work of joint authorship,’ defined as a work produced by the collaboration of two or more authors in which the contribution of one author is not distinct from the contribution of the other author or authors. As Raghav contributed its own creativity, the work arguably fulfilled the definition of joint authorship under section 2.
When asked if AI is just a tool, Sahni re-affirmed that the AI chose how to apply the data set fed to it, suggesting that it was more than a tool. Sahni believed that this contribution met the threshold of minimal creativity and cited the American case Home Legend, LLC v. Mannington Mills, Inc. to support this point. In that case, the defendant’s selection and creative coordination of images was found to meet the threshold of minimal creativity because artistic judgment was exercised. Further, Feist Publications, Inc. v. Rural Telephone Service Co. states at paragraph 44 that “As discussed earlier, however, the originality requirement is not particularly stringent. A compiler may settle upon a selection or arrangement that others have used; novelty is not required,” and continues at paragraph 53: “It is equally true, however, that the selection and arrangement of facts cannot be so mechanical or routine as to require no creativity whatsoever. The standard of originality is low, but it does exist.” Sahni therefore believes that his human inputs exceed the minimal originality recognized by the Supreme Court of the United States. However, while Sahni was able to register Raghav as a co-author, his ownership of Raghav is also an important factor, and authors who do not own their AI co-author may not be as successful.
The post A.I. Paintings: Registrable Copyright? Lessons from Ankit Sahni appeared first on IPOsgoode.
Gregory Hong is an IPilogue Writer and a 1L JD candidate at Osgoode Hall Law School.
The National Research Council of Canada (NRC) Industrial Research Assistance Program (IRAP) and the Intellectual Property Institute of Canada (IPIC) have partnered to offer the IP Assist program for Canadian small and medium-sized enterprises (“SMEs”). IPilogue readers may have seen Serena Nath’s recent coverage of another CIC program, ElevateIP, which provides funding for a similar purpose through a different government channel. That article outlined the motivation behind these types of programs: Canadian SMEs often lack access to the means to protect intellectual property (IP), and there is a clear economic need for innovative Canadian businesses to improve their IP commercialization.
The NRC IRAP provides a range of innovation support services for Canadian SMEs. The program offers funding, advisory services, and networking opportunities to help SMEs undertake research and development (“R&D”), commercialize their innovations, and improve their competitiveness in domestic and global markets. IRAP also provides support for technology adoption, productivity improvement, and business expansion. On February 16, 2023, the Government of Canada announced that NRC IRAP will be integrated into the Canada Innovation Corporation (CIC).
The CIC will be a new, operationally independent organization solely dedicated to supporting business R&D across all regions and all sectors of the economy. It is a federal initiative that will be investing $2.6 billion over four years that aims to “play an important role in building a stronger and more innovative Canadian economy for generations to come.” The CIC will include an umbrella of programs, including both IP Assist and ElevateIP, to support the development and exploitation of IP.
IPIC is Canada’s professional association of patent agents, trademark agents and lawyers practicing in all areas of intellectual property (“IP”) law and is comprised of over 1700 members. IPIC’s role in the IP Assist program is to match SMEs with IPIC members who practice in their specific industry. The IP professional will help SMEs better understand the key aspects of IP and how it can support their business goals.
There are three levels to the IP Assist Program — levels 1, 2 and 3 (L1, L2, L3, respectively). Each level brings increased funding (L1 – up to $1k; L2 – up to $20k; L3 – up to $20k+), as well as increasing engagement with an IP professional matched to the SME:
The L1 IP Awareness is a one-to-one IP awareness session during which an IP professional will provide industry-specific IP information and guidance to an SME. Engagement at L1 provides IP professionals with an opportunity to connect with, support, and guide innovative Canadian SMEs to help them achieve their business goals. Engagements with SMEs will take, on average, up to 3 hours and include an IP awareness presentation followed by Q&As.
The L2 IP Strategy relates to the IRAP SME’s specific technology space, aligns with the IRAP SME’s business objectives, and provides IRAP SMEs with specific prioritized IP actions. The IP Strategy must be informed by key information relating to the technology and competitor landscapes relevant to the IRAP SME.
The L3 IP Implementation relates to detailed IP asset assessments, such as IP audits, trademark clearance searches, prior art searches and analysis, advice on branding strategy, legal analysis of IP landscaping, patentability analysis, licensing strategy formulation, and other activities. However, some patent and trademark preparation services and filing fees may not be covered.
Canada’s investment in the CIC indicates an increased focus on innovation as a driver of economic growth. There is also a clear aim, through programs like IP Assist and ElevateIP, to ensure that IP generated by innovative SMEs in Canada is carefully strategized for and well protected. Hopefully, this increases the Canadian presence in innovation and brings greater investment in R&D into Canada.
The post IPIC and National Research Council Collaborates to Create the IP Assist Program for SMEs appeared first on IPOsgoode.
Katie Graham is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.
In March 2022, the Canadian Intellectual Property Office (“CIPO”) allowed its first artificial intelligence (AI)-authored copyright registration of a painting co-created by the AI tool, RAGHAV Painting App (“RAGHAV”), and the IP lawyer who created RAGHAV, Ankit Sahni. RAGHAV is the first non-human “author” of a copyrighted work. However, Canadian courts have held that “[c]learly a human author is required to create an original work for copyright purposes” (para 88). Though the AI tool is a co-author with a human, the registration suggests that both RAGHAV and Ankit Sahni can constitute authors under the copyright regime, and it has raised concerns among Canadian artists. Though the landscape in Canada is still unclear, the US Copyright Office (“Office”) issued a clarification on March 16, 2023, about its practices for examining and registering works that contain material generated by artificial intelligence (AI) technology.
The Office confirmed that the term “author,” used in both the US Constitution and the Copyright Act, excludes non-humans. To qualify as a work of ‘authorship,’ a work must be created by a human being; works produced by a machine or mere mechanical process that operates randomly or automatically, without any creative input or intervention from a human author, are not registrable. This threshold reflects the Canadian copyright regime: the author must contribute significant original expression to the work that is not so trivial as to be characterized as a purely mechanical exercise.
The Office provided important guidance on assessing the protectable elements of AI-generated works. It begins by distinguishing whether the ‘work’ is one of human authorship, with the AI tool merely being an assisting instrument, or whether the protectable elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were conceived and executed not by man but by a machine.
If the machine produced the expressive elements of the work, it is not copyrightable. This guidance is critical for authorship issues surrounding ChatGPT, where the AI tool receives a prompt from the user, and the user does not exercise ultimate creative control of the output. The Office provided an example where a user instructs an AI tool to “write a poem about copyright law in the style of William Shakespeare”. Given that the user contributes little to no expressive elements to the AI-generated output, the output is not a product of human authorship and is not protected under the US Copyright Act.
However, the Office also clarified that, in some cases, AI-generated works might contain sufficient human-authored elements to warrant copyright protection. This may apply in cases where the human selects or arranges the AI-generated elements or modifies the AI-generated material to a degree where it constitutes original expression. The analysis seeks to determine whether a human had ultimate creative control over the expression and formed the traditional elements of authorship.
This guidance is in response to a recent review by the Office of a comic book titled “Zarya of the Dawn” containing human-authored elements combined with AI-generated images. While the Office ruled that the author, Kristina Kashtanova, owned the work’s text and the selection, coordination, and arrangement of the work’s written and visual elements, copyright protection did not extend to the images generated by the AI tool, Midjourney. Though Kashtanova edited the Midjourney images, the Office held that the creativity supplied did not constitute authorship.
Given the registration of RAGHAV as an author under Canadian copyright law last year, it remains to be seen whether CIPO will follow a similar assessment as the US Office and revisit the decision to register an AI-generated work as a work of joint authorship. However, academics question whether moral rights, which are not part of the US regime, will extend to AI authors, and whether AI authorship will alter the copyright term, which runs for 70 years after the death of the last living author. The increasing traction of AI warrants similar guidance from CIPO regarding the status of AI authorship under Canadian copyright law.
The post The US Copyright Office Clarifies that Copyright Protection Does Not Extend to (Exclusively) AI-Generated Work appeared first on IPOsgoode.
Anita Gogia is an IPilogue Writer and a 2L JD Candidate at Osgoode Hall Law School.
On November 14, 2022, the United States Court of Appeals for the Ninth Circuit ruled in San Antonio Winery Inc v Jiaxing Micarose Trade Co Ltd (“Jiaxing”) that foreign parties to a trademark infringement complaint can be served by trademark owners within the U.S. because of s.1051(e) of the Lanham Act. Statutory interpretation of s.1051(e) in this case provides a new way to serve foreign defendants via the Director of the United States Patent & Trademark Office (“USPTO”). Specifically, Jiaxing provides that a foreign defendant may be served if they have filed an application for a conflicting trademark at the USPTO. This mitigates the traditional temporal, financial, and logistical challenges associated with preventing trademark infringement by foreign companies.
The Los Angeles-based San Antonio Winery (“San Antonio”) is known for its Stella Rosa brand, which it has produced under the associated trademarks since 1998. It sued Jiaxing, a Chinese company, for registering the mark “RIBOLI” for wine pourers, bottle stands, containers, cocktail shakers, dishware, and other kitchen products. Jiaxing registered “RIBOLI” in 2018 for clothing and shoes and in 2020 for kitchen products. Accordingly, San Antonio filed a complaint for trademark infringement, trademark dilution, and false designation of origin. It is seeking an injunction against Jiaxing’s use of the “RIBOLI” mark, and an order prohibiting Jiaxing’s registrations.
The current route to service of foreign defendants is the Hague Convention, but San Antonio sought a faster and less expensive way to serve Jiaxing. It did so under s. 1051(e), which allows U.S. residents to serve a foreign defendant’s designated U.S. agent, or the USPTO Director, in “proceedings” that affect the mark. The provision states that if the trademark applicant is not located in the U.S., they can designate a person in the U.S. who may be served on their behalf regarding the marks; if no such person can be found, the USPTO Director may be served.
Jurisprudence conflicts as to whether s.1051(e) is limited to USPTO proceedings or includes civil lawsuits. The district court had held that the provision only applies to administrative proceedings. The Ninth Circuit reversed, interpreting the words “proceedings affecting a trademark” as broad enough to include civil litigation; since civil litigation can affect a trademark, the provision must encompass serving process for disputes in district court. The Court held that the wording requires only that its plain and ordinary meaning be taken. Moreover, since the Lanham Act grants courts the power to affect trademarks in other ways, s.1051(e)’s use of the word “process” must apply to court proceedings. Further, because service of process is not a feature of administrative proceedings, the word “process” would have been superfluous had the provision not been meant to include civil proceedings.
Serving foreign defendants through s.1051(e) does not run contrary to the Hague Convention: the Convention governs service between foreign countries, whereas s.1051(e) governs service within the U.S. without international transmittal of documents, which places it outside the scope of the Convention.
Foreign infringers are increasingly prevalent, including on marketplaces that verify IP ownership, such as Amazon. The decision is significant in that it may act as a deterrent — it warns foreign companies that an application at the USPTO is all that is needed to be served with a U.S. lawsuit. The Court’s adoption of the plain and ordinary meaning is akin to the starting point of statutory interpretation in this context in Canada — Driedger’s Modern Principle, as adopted in Rizzo and Bell ExpressVu. This points to an expectation of similar results in Canadian courts, where a purposive analysis would be adopted to assess the ability of domestic trademark owners to serve foreign infringers.
The post Statutory Interpretation of the Lanham Act Provides a Path to Bypass the Hague Convention appeared first on IPOsgoode.
Serena Nath is an IPilogue Writer and a 2L JD candidate at Osgoode Hall Law School.
Every year on January 1, works protected under copyright law enter into the public domain due to their copyright protection expiring. Thus, as a new year approaches, those in the field of copyright look to see which works will expire at the end of the year. As the world entered January 2023, many excitedly anticipated that Disney’s copyright protection of Mickey Mouse in the United States (US) would expire at the end of 2023, allowing Mickey Mouse to enter the public domain as of January 1, 2024. This means that Mickey Mouse can be reproduced, adapted, published, publicly performed, and publicly displayed by anyone in the United States without infringing upon Disney’s copyright.
As a general rule in the U.S., for works created after January 1, 1978, copyright protection lasts for the life of the author plus 70 years. However, for works created before January 1, 1978, the duration of copyright protection depends on several factors as set out by chapter 3 of the Copyright Act in the United States. Mickey Mouse was first introduced in the US in 1928 with the film “Steamboat Willie,” so its copyright protection term was dictated by several factors outlined in chapter 3. Additionally, the expiration of the copyright term only applies to the original version of Mickey Mouse displayed in Steamboat Willie; later versions of Mickey Mouse will still be protected by copyright. This original version of Mickey Mouse is a black and white rat-like depiction with a long snout and black eyes, whereas later versions of Mickey Mouse include the version of Mickey with his signature red shorts and white gloves.
Copyright law in the US has evolved many times in part as a result of Disney lobbying for copyright term extension. Originally, the Mickey Mouse copyright was set to expire in 1984: when Mickey Mouse debuted to the public in 1928, copyright law only protected works for a maximum of 56 years. However, in 1976 Congress passed the Copyright Act of 1976, which extended the copyright term to 50 years after the death of the author, or 75 years from publication for works made for hire. As a result, the Mickey Mouse copyright was then set to expire at the end of 2003.
Starting in 1990, Disney pushed hard for an extension of copyright protections. This resulted in the Sonny Bono Copyright Term Extension Act in 1998, which extended copyright protection to 70 years after the death of the author and, for works made for hire, to 95 years from publication. This extension is why Mickey Mouse’s copyright protection is set to expire at the end of 2023. The extreme lobbying from Disney to extend copyright protections earned the 1998 act the nickname of the “Mickey Mouse Protection Act.”
Although the original Mickey Mouse’s copyright protection will expire at the end of 2023, Disney will still be able to protect the Mickey Mouse brand through trademark law. Mickey Mouse is protected as Disney’s property because it is a registered trademark. Trademark protection can theoretically last forever if Disney can continually show that Mickey Mouse is associated with its company. Disney will likely be able to continually show an association with Mickey Mouse. In 2007, Walt Disney Animation Studios redesigned its logo to incorporate the original version of Mickey Mouse. Therefore, although someone may use the original version of Mickey Mouse in a work, they are not able to use this version of Mickey Mouse for any branding purposes or any purpose that would cause consumers to be confused about the source of the Mickey Mouse product. These intersections between trademark and copyright law may stop Mickey from strolling into public use for the coming years.
The post Mickey Mouse to Enter Public Domain in 2024 appeared first on IPOsgoode.
Sally Yoon is an IPilogue Writer and a 3L JD Candidate at Osgoode Hall Law School.
Kevin Keller is General Counsel at Super, a Series B startup with business verticals in travel, fintech and commerce. Before Super, Keller worked at many notable technology companies, such as Facebook, Microsoft, Instacart and Amazon. He is a first-generation college graduate who obtained his Bachelor’s Degree in Electrical and Electronics Engineering from Brigham Young University and his JD from New York University School of Law. Keller generously offered his time to the IPilogue to discuss his experiences and to inspire law students interested in supporting startup companies.
Those of us who are first-generation graduates can fall into one of two groups; some may be overly cautious and conservative with their approach because they’ve gone so far, learned so much, secured the job, and obtained the education. They have already taken so much risk, going outside every expectation, that turning down a solid and more predictable path is one step too far. Then there’s a group of people who will take every chance because they have nothing to lose - you get a lot of entrepreneurs that are first-generation.
I started my career a little more conservative. But, as I went further along, I got more comfortable with risks and decided that I could lean on my own skills and experiences. Taking those risks has, by and large, led to greater outcomes for me and my career, but it can be hard to do as a first generation.
I realized mid-way through law school that there was a part of me that was entrepreneurial.
I shared this feeling with Alex Cohen from Columbia Law, and we decided that if something didn’t exist that gave us the opportunity, we would have to create it. We went to both the law and business schools of our schools and put up posters claiming that we were starting an elite group, with venture capitalists and the hottest startups in the city. We had none of that, but we decided that’s what we were going to have. We eventually got Fred Wilson on board and got some law firms to provide us with space and funding. It came together, partially through force of will because we wanted to create something that didn’t exist.
Oftentimes, when I’m looking at resumes during a hiring process, I look for whether in absence of something, [the applicant] created it - if they were entrepreneurial in some fashion.
Lab126 was formed by Amazon to develop its hardware products. When I joined, I was sitting alongside everyone. It’s one of the things about joining a start-up that is kind of unique and fun for attorneys - you’re there in the thick of it with the rest of the employees. This environment led me to think of ideas for how the products could work together or how we could make something that might help us around a regulatory problem in a customer friendly way. I was super privileged to be able to participate in that creative process.
It’s a fine balance. A good legal team will identify significant risks, but also allow start-ups to be start-ups - they’re going to take some risks and that’s ok. Even with experience, it’s still nerve-wracking as an attorney to know that there are rocks that you haven't overturned, but you have limited time and resources so it’s necessary for you to apply your judgment to best posit which are most likely to harbor significant risks.
Super is a startup with business verticals in travel, fintech and commerce. Altogether, we have SuperCash, SuperTravel, and SuperShop, and they are all under the umbrella of “Super” with the overall mission to help people save and build credit.
For people who want to go into start-ups, you’re probably not going to be right out of law school. For the first attorney a start-up hires, they’re going to want someone who can jump in and do everything across the board. Even if you are that one person with experience, it’s difficult to have all that experience - employment, real estate, compliance, corporate, securities, intellectual property… hopefully not bankruptcy. There’s a combination of classes that could be helpful: venture capital or corporate finance courses that talk about funding would be very helpful. Some general knowledge of IP would also help; it doesn’t have to be deep. I would consider myself an IP expert at this point in my career, and the only course I took in school was Trademarks.
I just hired someone in November who was largely in corporate securities and M&A. Now she’s two months in supporting our marketing team, doing some trademark analysis, dealing with consumer complaints, working on our end-user agreements, thinking about privacy, and doing a great job of learning that stuff quickly. You’re not gonna have everything, but you need to realize that even without everything, you have that one core skill set of being able to learn things fast, and that’s something valuable you can bring to the start-up.
Note from the Interviewer:
I would like to express my gratitude to Kevin Keller for taking the time to participate in this interview and sharing his valuable insights into his experiences across various roles within the tech and start-up industries.
The post Working In-House at a Start-up: an Interview with Kevin Keller appeared first on IPOsgoode.